Sunday, 17 May 2015

Tragedy of Modern Physics: Schrödinger and Einstein, or Quantum Mechanics as Dice Game?

The story of modern physics is commonly told as a tragedy in which the fathers of the new physics of atomistic quantum mechanics, Einstein and Schrödinger, were brutally killed by their descendants Bohr, Heisenberg and Born. This story is told again in a new book by Paul Halpern with the descriptive title Einstein's Dice and Schrödinger's Cat.

In this story, it is Einstein and Schrödinger who represent the tragedy, in their stubborn opposition to the probabilistic Copenhagen interpretation of the wave function of Schrödinger's equation and their fruitless search for a unified physical field theory free from dice games, which ended in tragic defeat under ridicule from the physics community controlled from Copenhagen by Bohr.

But it is possible that Einstein's and Schrödinger's dream of a unified physical field theory will come true one day, and then the tragedy will instead be that modern physics was based on dice games. In all modesty, this is the working hypothesis I have adopted in my search for a version of Schrödinger's equation allowing a realistic physical interpretation without "quantum randomness". Stay tuned for an update on recent advances in this direction...

In short, one may say that Einstein and Schrödinger seek a mathematical model of physical reality as a question of ontological realism or existence, of what "is", while Bohr is only interested in what we can "say" (based on what we can "see") as a question of epistemological idealism. In the contest between realism and idealism in physics, one may argue that idealism is failed realism.

The book describes Einstein's and Schrödinger's positions as follows:

• As originally construed, the Schrödinger equation was designed to model the continuous behavior of tangible matter waves, representing electrons in and out of atoms. Much as Maxwell constructed deterministic equations describing light as electromagnetic waves traveling through space, Schrödinger wanted to create an equation that would detail the steady flow of matter waves.
• He thereby hoped to offer a comprehensive accounting of all of the physical properties of electrons.
• Born shattered the exactitude of Schrödinger's description, replacing matter waves with probability waves. Instead of physical properties being assessed directly, they needed to be calculated through mathematical manipulations of the probability waves' values.
• In doing so, he brought the Schrödinger equation in line with Heisenberg's ideas about indeterminacy. In Heisenberg's view, certain pairs of physical quantities, such as position and momentum (mass times velocity), could not be measured simultaneously with high precision.
• Aspiring to model the actual substance of electrons and other particles, not just their likelihoods, Schrödinger criticized the intangible elements of the Heisenberg-Born approach.
• He similarly eschewed Bohr's quantum philosophy, called "complementarity," in which either wavelike or particlelike properties reared their heads, depending on the experimenter's choice of measuring apparatus. Nature should be visualizable.
• Starting in the late 1920s, one of his primary goals was a deterministic alternative to probabilistic quantum theory, as developed by Niels Bohr, Werner Heisenberg, Max Born, and others.
• Although he (Einstein) realized that quantum theory was experimentally successful, he judged it incomplete.
In his heart he felt that "God did not play dice," as he put it, couching the issue in terms of what an ideal mechanistic creation would be like.
• Agreeing with Spinoza, Einstein sought the invariant rules governing nature's mechanisms. He was absolutely determined to prove that the world was absolutely determined.
• Einstein, who had been a colleague and dear friend in Berlin, stuck by Schrödinger all along and was delighted to correspond with him about their mutual interests in physics and philosophy.
• Together they battled a common villain: sheer randomness, the opposite of natural order. Schooled in the writings of Spinoza, Schopenhauer— for whom the unifying principle was the force of will, connecting all things in nature— and other philosophers, Einstein and Schrödinger shared a dislike for including ambiguities and subjectivity in any fundamental description of the universe.
• While each played a seminal role in the development of quantum mechanics, both were convinced that the theory was incomplete. Though recognizing the theory's experimental successes, they believed that further theoretical work would reveal a timeless, objective reality.
• As Born's, Heisenberg's, and Bohr's ideas became widely accepted among the physics community, melded into what became known as the "Copenhagen interpretation" or orthodox quantum view, Einstein and Schrödinger became natural allies.
• In their later years, each hoped to find a unified field theory that would fill in the gaps of quantum physics and unite the forces of nature. By extending general relativity to include all of the natural forces, such a theory would replace matter with pure geometry— fulfilling the dream of the Pythagoreans, who believed that "all is number."
• The crux of Schrödinger's rebuttal was to declare that random quantum jumps simply weren't physical. He argued for a continuous, deterministic explanation instead: he had a continuous, deterministic equation to defend.
• ... by late 1926 mutual opposition to the notion of random quantum jumps forced the two of them into the same anti-Copenhagen camp. The alliance would be forged once they realized that they were among the few vocal critics of Born's reinterpretation of the wave equation.
• After returning to Zurich from Copenhagen, Schrödinger continued to defend his disdain for quantum jumps on the basis that atomic physics should be visualizable and logically consistent.
• By the end of 1926, Einstein had drawn a stark line of demarcation between himself and quantum theory.
• Einstein appealed to Born, trying to convince him that quantum physics required deterministic equations, not probabilistic rules. "Quantum mechanics yields much that is very worthy of regard," Einstein wrote to Born. "But an inner voice tells me that it is not yet the right track. The theory . . . hardly brings us closer to the Old One's secrets. I, in any case, am convinced that He does not play dice."
• That was not the last time Einstein would make that point. For the rest of his life, in his explanations of why he didn't believe in quantum uncertainty, he would reiterate again and again, like a mantra, that God does not roll dice.
• In 1927, Einstein delivered a talk at the Prussian Academy purporting to prove that Schrödinger's wave equation implied definitive particle behavior, not just dice-rolling.
• Despite his prominence, Einstein's entreaties had little impact on the quantum faithful.
• Einstein returned to Berlin a far more isolated figure in the scientific community.
While his world fame continued to grow, his reputation among the younger generation of physicists began to sour, as they derided his objections to quantum mechanics.
• With experimental findings continuing to support the unified quantum picture advocated by Bohr, Heisenberg, Born, Dirac, and others, Einstein's dismissal of their views seemed petty and illogical.
• Schrödinger was one of the few who sympathized with Einstein's doubts. They kept up a conversation about ways to extend quantum mechanics to make it more complete.
• Einstein complained to him about the dogmatism of the mainstream quantum community.
• For example, he wrote to Schrödinger in May 1928, "The Heisenberg-Born tranquilizing philosophy— or religion?— is so deliberately contrived that, for the time being, it provides a gentle pillow for the true believer from which he cannot very easily be aroused. So let him lie there. But this religion has . . . damned little effect on me."
• Although the physics community relocated to the realm of probabilistic quantum reality, leaving Einstein the lonely occupant of an isolated castle of determinism, the press still bathed him in glory. He was the wild-haired genius, the celebrity scientist, the miracle worker who had predicted the bending of starlight. He was something like a ceremonial king who had long lost his influence over the course of events; the media were more interested in him than in the lesser-known workers actually changing science. His every proclamation continued to be reported by the press, if largely ignored by his peers.
• Though the mainstream physics community increasingly viewed him as a relic, he remained the darling of the international media...
Oxford Physics

Quantum Mechanics

This course is given to all second year physicists and is examined on paper A3 at the end of the second year. For the academic year 2015/2016, the lecture course begins in Michaelmas Term 2015 (12 lectures) with the remainder of the lectures in Hilary Term 2016 (15 lectures). In this course we introduce the subject of quantum mechanics.

Probabilities and probability amplitudes. Interference, state vectors and the bra-ket notation, wavefunctions. Hermitian operators and physical observables, eigenvalues and expectation values. The effect of measurement on a state; collapse of the wave function. Successive measurements and the uncertainty relations. The relation between simultaneous observables, commutators and complete sets of states.

The time-dependent Schrödinger equation. Energy eigenstates and the time-independent Schrödinger equation. The time evolution of a system not in an energy eigenstate. Wave packets in position and momentum space. Probability current density. Wave function of a free particle and its relation to de Broglie's hypothesis and Planck's relation. Particle in one-dimensional square-well potentials of finite and infinite depth. Scattering off, and tunnelling through, a one-dimensional square potential barrier. Circumstances in which a change in potential can be idealised as steep. [Non-examinable: Use of the WKB approximation.]

The simple harmonic oscillator in one dimension by operator methods. Derivation of energy eigenvalues and eigenfunctions and explicit forms of the eigenfunctions for n=0,1 states. Amplitudes and wave functions for a system of two particles. Simple examples of entanglement.

Commutation rules for angular momentum operators including raising and lowering operators, their eigenvalues (general derivation of the eigenvalues of L² and Lz not required), and explicit form of the spherical harmonics for l=0,1 states. Rotational spectra of simple diatomic molecules. Representation of spin-1/2 operators by Pauli matrices. The magnetic moment of the electron and precession in a homogeneous magnetic field. The Stern-Gerlach experiment. The combination of two spin-1/2 states into S=0,1. [Non-examinable: Derivation of states of well-defined total angular momentum using raising and lowering operators.] Rules for combining angular momenta in general (derivation not required). [Non-examinable: Spectroscopic notation.]

Hamiltonian for the gross structure of the hydrogen atom. Centre of mass motion and reduced particle. Separation of the kinetic-energy operator into radial and angular parts. Derivation of the allowed energies; principal and orbital angular-momentum quantum numbers; degeneracy of energy levels. Functional forms and physical interpretation of the wavefunctions for n<3.

Course-related links
• Recommended textbooks are listed in the first course handout.

(Updated: November 2016)
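For quick orientation, here is a compact statement of two of the results listed above. These are standard textbook facts, not material taken from the course handouts.

```latex
% Time-dependent and time-independent Schrodinger equations:
i\hbar \frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t),
\qquad
\hat{H}\,\psi_n(x) = E_n\,\psi_n(x).
% Operator methods for the one-dimensional simple harmonic oscillator,
% using ladder operators with [\hat{a}, \hat{a}^\dagger] = 1, give
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \dots
```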
It seems that quantum computing is often taken to mean the quantum circuit method of computation, where a register of qubits is acted on by a circuit of quantum gates and measured at the output (and possibly at some intermediate steps). Quantum annealing at least seems to be an altogether different method of computing with quantum resources [1], as it does not involve quantum gates.

What different models of quantum computation are there? What makes them different? To clarify, I am not asking what different physical implementations qubits have; I mean the description of different ideas of how to compute outputs from inputs [2] using quantum resources.

[1] Anything that is inherently non-classical, like entanglement and coherence.
[2] A process which transforms the inputs (such as qubits) to outputs (results of the computation).

The adiabatic model

This model of quantum computation is motivated by ideas in quantum many-body theory, and differs substantially both from the circuit model (in that it is a continuous-time model) and from continuous-time quantum walks (in that it has a time-dependent evolution). Adiabatic computation usually takes the following form.

1. Start with some set of qubits, all in some simple state such as $\lvert + \rangle$. Call the initial global state $\lvert \psi_0 \rangle$.
2. Subject these qubits to an interaction Hamiltonian $H_0$ for which $\lvert \psi_0 \rangle$ is the unique ground state (the state with the lowest energy). For instance, given $\lvert \psi_0 \rangle = \lvert + \rangle^{\otimes n}$, we may choose $H_0 = - \sum_{k} \sigma^{(x)}_k$.
3. Choose a final Hamiltonian $H_1$, which has a unique ground state which encodes the answer to a problem you are interested in. For instance, if you want to solve a constraint satisfaction problem, you could define a Hamiltonian $H_1 = \sum_{c} h_c$, where the sum is taken over the constraints $c$ of the classical problem, and where each $h_c$ is an operator which imposes an energy penalty (a positive energy contribution) on any standard basis state representing a classical assignment which does not satisfy the constraint $c$.
4. Define a time interval $T \geqslant 0$ and a time-varying Hamiltonian $H(t)$ such that $H(0) = H_0$ and $H(T) = H_1$. A common but not necessary choice is to simply take a linear interpolation $H(t) = \tfrac{t}{T} H_1 + (1 - \tfrac{t}{T})H_0$.
5. For times $t = 0$ up to $t = T$, allow the system to evolve under the continuously varying Hamiltonian $H(t)$, and measure the qubits at the output to obtain an outcome $y \in \{0,1\}^n$.

The basis of the adiabatic model is the adiabatic theorem, of which there are several versions. The version by Ambainis and Regev [arXiv:quant-ph/0411152] (a more rigorous example) implies that if there is always an "energy gap" of at least $\lambda > 0$ between the ground state of $H(t)$ and its first excited state for all $0 \leqslant t \leqslant T$, and the operator norms of the first and second derivatives of $H$ are small enough (that is, $H(t)$ does not vary too quickly or abruptly), then you can make the probability of getting the output you want as large as you like just by running the computation slowly enough. Furthermore, you can reduce the probability of error by any constant factor just by slowing down the whole computation by a polynomially related factor. Despite being very different in presentation from the unitary circuit model, it has been shown that this model is polynomial-time equivalent to the unitary circuit model [arXiv:quant-ph/0405098].
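As a concrete illustration of steps 1-5 above, here is a minimal numerical sketch (not from the original answer, and assuming numpy and scipy): it integrates the Schrödinger equation under the linear interpolation between a transverse-field $H_0$ and a toy diagonal penalty Hamiltonian $H_1$. The problem instance (penalising every bitstring except 101), the total time $T$, and the step count are illustrative assumptions.

```python
# Minimal sketch of adiabatic evolution on 3 qubits under stated assumptions.
import numpy as np
from scipy.linalg import expm

n = 3
dim = 2 ** n
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(op, k):
    """Embed single-qubit operator op, acting on qubit k, into n qubits."""
    out = np.array([[1]], dtype=complex)
    for j in range(n):
        out = np.kron(out, op if j == k else I2)
    return out

H0 = -sum(op_on(X, k) for k in range(n))          # unique ground state |+>^n
penalty = np.array([0.0 if b == 0b101 else 1.0 for b in range(dim)])
H1 = np.diag(penalty).astype(complex)             # unique ground state |101>

T, steps = 50.0, 2000                             # schedule length, Trotter steps
dt = T / steps
psi = np.ones(dim, dtype=complex) / np.sqrt(dim)  # |psi_0> = |+...+>

for step in range(steps):
    t = (step + 0.5) * dt
    Ht = (t / T) * H1 + (1 - t / T) * H0          # linear interpolation H(t)
    psi = expm(-1j * Ht * dt) @ psi               # Schrodinger evolution for dt

# For large enough T the output distribution concentrates on the answer:
print("P(outcome 101) =", abs(psi[0b101]) ** 2)
```

Running the schedule with a larger $T$ pushes the success probability closer to 1, which is the qualitative content of the adiabatic theorem described above.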
The advantage of the adiabatic algorithm is that it provides a different approach to constructing quantum algorithms which is more amenable to optimisation problems. One disadvantage is that it is not clear how to protect it against noise, or to tell how its performance degrades under imperfect control. Another problem is that, even without any imperfections in the system, determining how slowly to run the algorithm to get a reliable answer is a difficult problem — it depends on the energy gap, and it isn't easy in general to tell what the energy gap is for a static Hamiltonian $H$, let alone a time-varying one $H(t)$. Still, this is a model of both theoretical and practical interest, and has the distinction of being about as different from the unitary circuit model as any model that exists.

Measurement-based quantum computation (MBQC)

This is a way to perform quantum computation, using intermediary measurements as a way of driving the computation rather than just extracting the answers. It is a special case of "quantum circuits with intermediary measurements", and so is no more powerful. However, when it was introduced, it up-ended many people's intuitions about the role of unitary transformations in quantum computation. In this model one has constraints such as the following (a small numerical sketch of the measurement-driven mechanism follows after this description):

1. One prepares, or is given, a very large entangled state — one which can be described (or prepared) by having some set of qubits all initially prepared in the state $\lvert + \rangle$, and then some sequence of controlled-Z operations $\mathrm{CZ} = \mathrm{diag}(+1,+1,+1,-1)$, performed on pairs of qubits according to the edge-relations of a graph (commonly, a rectangular grid or hexagonal lattice).
2. Perform a sequence of measurements on these qubits — some perhaps in the standard basis, but the majority not in the standard basis, instead measuring observables such as $M_{\mathrm{XY}}(\theta) = \cos(\theta) X - \sin(\theta) Y$ for various angles $\theta$. Each measurement yields an outcome $+1$ or $-1$ (often labelled '0' or '1' respectively), and the choice of angle is allowed to depend in a simple way on the outcomes of previous measurements (in a way computed by a classical control system).
3. The answer to the computation may be computed from the classical outcomes $\pm 1$ of the measurements.

As with the unitary circuit model, there are variations one can consider for this model. However, the core concept is adaptive single-qubit measurements performed on a large entangled state, or a state which has been subjected to a sequence of commuting and possibly entangling operations which are either performed all at once or in stages.

This model of computation is usually considered as being useful primarily as a way to simulate unitary circuits. Because it is often seen as a means to simulate a better-liked and simpler model of computation, most people no longer consider it theoretically very interesting. However:

• It is important among other things as a motivating concept behind the class IQP, which is one means of demonstrating that a quantum computer is difficult to simulate, and Blind Quantum Computing, which is one way to try to solve problems in secure computation using quantum resources.
• There is no reason why measurement-based computations should be essentially limited to simulating unitary quantum circuits: it seems to me (and a handful of other theorists in the minority) that MBQC could provide a way of describing interesting computational primitives.
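The sketch referenced above (not from the original answer, assuming numpy; input state and angle are arbitrary choices) shows the elementary MBQC step on two qubits: entangle an input qubit with a $\lvert + \rangle$ ancilla via CZ, measure the input in the $M_{\mathrm{XY}}(\theta)$ eigenbasis, and verify that the ancilla is left in $X^s H R_z(\theta)\lvert\psi\rangle$, where $s$ is the measurement outcome. Chaining such steps, with angles adapted to earlier outcomes, is how measurements drive the computation.

```python
# Minimal numerical check of one measurement-driven MBQC step.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # Pauli X
CZ = np.diag([1, 1, 1, -1]).astype(complex)                   # controlled-Z

def Rz(theta):
    return np.diag([1, np.exp(1j * theta)]).astype(complex)

theta = 0.7                                  # illustrative measurement angle
psi = np.array([0.6, 0.8j], dtype=complex)   # arbitrary normalised input qubit
plus = np.ones(2, dtype=complex) / np.sqrt(2)

state = CZ @ np.kron(psi, plus)              # entangle input with the ancilla

for s in range(2):
    # Eigenvectors of M_XY(theta) = cos(theta) X - sin(theta) Y:
    # s = 0 -> eigenvalue +1, s = 1 -> eigenvalue -1.
    v = np.array([1, (-1) ** s * np.exp(-1j * theta)]) / np.sqrt(2)
    out = v.conj() @ state.reshape(2, 2)     # project qubit 1, keep qubit 2
    out /= np.linalg.norm(out)               # renormalise post-measurement state
    target = np.linalg.matrix_power(X, s) @ H @ Rz(theta) @ psi
    print(f"s={s}: |<target|out>| = {abs(np.vdot(target, out)):.6f}")  # -> 1.0
```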
While MBQC is just a special case of circuits with intermediary measurements, and can therefore be simulated by unitary circuits with only polynomial overhead, this is not to say that unitary circuits would necessarily be a very fruitful way of describing anything that one could do in principle in a measurement-based computation (just as there exist imperative and functional programming languages in classical computation which sit a little ill at ease with one another). The question remains whether MBQC will suggest any way of thinking about building algorithms which is not as easily presented in terms of unitary circuits — but there can be no question of a computational advantage or disadvantage over unitary circuits, except one of specific resources and suitability for some architecture.

• 1 $\begingroup$ MBQC can be seen as the underlying idea behind some error correcting codes, such as the surface code. Mainly in the sense that the surface code corresponds to a 3d lattice of qubits with a particular set of CZs between them that you then measure (with the actual implementation evaluating the cube layer by layer). But perhaps also in the sense that the actual surface code implementation is driven by measuring particular stabilizers. $\endgroup$ – Craig Gidney Sep 17 '18 at 15:51
• 1 $\begingroup$ However, the way in which the measurement outcomes are used differs substantially between QECCs and MBQC. In the idealised case of no or a low rate of uncorrelated errors, any QECC is computing the identity transformation at all times, the measurements are periodic in time, and the outcomes are heavily biased towards the +1 outcome. For standard constructions of MBQC protocols, however, the measurements give uniformly random measurement outcomes every time, and those measurements are heavily time-dependent and driving non-trivial evolution. $\endgroup$ – Niel de Beaudrap Sep 17 '18 at 15:57
• 1 $\begingroup$ Is that a qualitative difference or just a quantitative one? The surface code also has those driving operations (e.g. braiding defects and injecting T states), it just separates them by the code distance. If you set the code distance to 1, a much higher proportion of the operations matter when there are no errors. $\endgroup$ – Craig Gidney Sep 17 '18 at 16:07
• 1 $\begingroup$ I would say that the difference occurs at a qualitative level as well, from my experience actually considering the effects of MBQC procedures. Also, it seems to me that in the case of braiding defects and T-state injection it is not the error correcting code itself, but deformations of it, which are doing the computation. These are certainly relevant things one may do with an error corrected memory, but to say that the code is doing it is about the same level as saying that it is qubits which do quantum computations, as opposed to the operations which one performs on those qubits. $\endgroup$ – Niel de Beaudrap Sep 17 '18 at 16:21

The Unitary Circuit Model

This is the best-known model of quantum computation. In this model one has constraints such as the following:

1. a set of qubits initialised to a pure state, which we denote $\lvert 0 \rangle$;
2. a sequence of unitary transformations which one performs on them, which may depend on a classical bit-string $x\in \{0,1\}^n$;
3. one or more measurements in the standard basis performed at the very end of the computation, yielding a classical output string $y \in \{0,1\}^k$.
(We do not require $k = n$: for instance, for YES / NO problems, one often takes $k = 1$ no matter the size of $n$.) Minor details may change (for instance, the set of unitaries one may perform; whether one allows preparation in other pure states such as $\lvert 1 \rangle$, $\lvert +\rangle$, $\lvert -\rangle$; whether measurements must be in the standard basis or can also be in some other basis), but these do not make any essential difference.

Discrete-time quantum walk

A "discrete-time quantum walk" is a quantum variation on a random walk, in which there is a 'walker' (or multiple 'walkers') which takes small steps in a graph (e.g. a chain of nodes, or a rectangular grid). The difference is that where a random walker takes a step in a randomly determined direction, a quantum walker takes a step in a direction determined by a quantum "coin" register, which at each step is "flipped" by a unitary transformation rather than changed by re-sampling a random variable. See [arXiv:quant-ph/0012090] for an early reference. For the sake of simplicity, I will describe a quantum walk on a cycle of size $2^n$, though one must change some of the details to consider quantum walks on more general graphs. In this model of computation, one typically does the following.

1. Prepare a "position" register on $n$ qubits in some state such as $\lvert 00\cdots 0\rangle$, and a "coin" register (with standard basis states which we denote by $\lvert +1 \rangle$ and $\lvert -1 \rangle$) in some initial state which may be a superposition of the two standard basis states.
2. Perform a coherent controlled-unitary transformation, which adds 1 to the value of the position register (modulo $2^n$) if the coin is in the state $\lvert +1 \rangle$, and subtracts 1 from the value of the position register (modulo $2^n$) if the coin is in the state $\lvert -1 \rangle$.
3. Perform a fixed unitary transformation $C$ on the coin register. This plays the role of a "coin flip" to determine the direction of the next step. We then return to step 2.

The main difference between this and a random walk is that the different possible "trajectories" of the walker are being performed coherently in superposition, so that they can destructively interfere. This leads to a walker behaviour which is more like ballistic motion than diffusion. Indeed, an early presentation of a model such as this was made by Feynman, as a way to simulate the Dirac equation. This model is also often described in terms of looking for or locating 'marked' elements in the graph, in which case one performs another step (to compute whether the node the walker is at is marked, and then to measure the outcome of that computation) before returning to step 2. Other variations of this sort are reasonable.

To perform a quantum walk on a more general graph, one must replace the "position" register with one which can express all of the nodes of the graph, and the "coin" register with one which can express the edges incident to a vertex. The "coin operator" then must also be replaced with one which allows the walker to perform an interesting superposition of different trajectories. (What counts as 'interesting' depends on what your motivation is: physicists often consider ways in which changing the coin operator changes the evolution of the probability density, not for computational purposes but as a way of probing at basic physics, using quantum walks as a reasonable toy model of particle movement.)
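Returning to the walk on a cycle described in steps 1-3 above, here is a minimal sketch (not from the original answer, assuming numpy). The cycle size, step count, initial coin state, and the choice of a Hadamard coin are all illustrative assumptions.

```python
# Minimal sketch of a discrete-time quantum walk on a cycle of N nodes.
import numpy as np

N, steps = 64, 30                             # nodes on the cycle, walk length
coin = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # coin flip C

# Amplitudes psi[c, x]: coin state c (0 = step +1, 1 = step -1), position x.
psi = np.zeros((2, N), dtype=complex)
psi[:, 0] = np.array([1, 1j]) / np.sqrt(2)    # balanced initial coin at node 0

for _ in range(steps):
    psi[0] = np.roll(psi[0], +1)              # step 2: shift +1 (mod N) if coin 0
    psi[1] = np.roll(psi[1], -1)              #         shift -1 (mod N) if coin 1
    psi = coin @ psi                          # step 3: flip the coin coherently

prob = (abs(psi) ** 2).sum(axis=0)            # position distribution
x = (np.arange(N) + N // 2) % N - N // 2      # recentre positions around node 0
std = np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2)
# The spread grows linearly in the number of steps (ballistic), in contrast
# to the sqrt(steps) diffusive spread of the classical random walk.
print("std dev of position:", std)
```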
A good framework for generalising quantum walks to more general graphs is the Szegedy formulation [arXiv:quant-ph/0401053] of discrete-time quantum walks. This model of computation is, strictly speaking, a special case of the unitary circuit model, but it is motivated by very specific physical intuitions, which have led to some algorithmic insights (see e.g. [arXiv:1302.3143]), such as polynomial-time speedups in bounded-error quantum algorithms. This model is also a close relative of the continuous-time quantum walk as a model of computation.

• 1 $\begingroup$ if you want to talk about DTQWs in the context of QC you should probably include references to the work of Childs and collaborators (e.g. arXiv:0806.1972). Also, you are describing how DTQWs work, but not really how you can use them to do computation. $\endgroup$ – glS Mar 26 '18 at 17:23
• 2 $\begingroup$ @glS: indeed, I will add more details at some point: when I first wrote these it was to quickly enumerate some models and remark on them, rather than give comprehensive reviews. But as for how to compute, does the last paragraph not represent an example? $\endgroup$ – Niel de Beaudrap Mar 26 '18 at 20:05
• 1 $\begingroup$ @glS: Isn't that work by Childs et al. actually about continuous-time quantum walks, anyhow? $\endgroup$ – Niel de Beaudrap Oct 29 '18 at 20:22

Quantum circuits with intermediary measurements

This is a slight variation on "unitary circuits", in which one allows measurements in the middle of the algorithm as well as at the end, and where one also allows future operations to depend on the outcomes of those measurements. It represents a realistic picture of a quantum processor which interacts with a classical control device, which among other things is the interface between the quantum processor and a human user.

Intermediary measurement is practically necessary to perform error correction, and so this is in principle a more realistic picture of quantum computation than the unitary circuit model, but it is not uncommon for theorists of a certain type to strongly prefer measurements to be left until the end (using the principle of deferred measurement to simulate any 'intermediary' measurements). So, this may be a significant distinction to make when talking about quantum algorithms — but this does not lead to a theoretical increase in the computational power of a quantum algorithm.

• 2 $\begingroup$ I think this should go with the "unitary circuit model" post, they are both really just variations of the circuit model, and one does not usually really distinguish them as different models $\endgroup$ – glS Mar 26 '18 at 17:20
• 1 $\begingroup$ @glS: it is not uncommon to do so in the CS theory community. In fact, the bias is very much towards unitary circuits in particular. $\endgroup$ – Niel de Beaudrap Mar 26 '18 at 20:00

Quantum annealing

Quantum annealing is a model of quantum computation which, roughly speaking, generalises the adiabatic model of computation. It has attracted popular — and commercial — attention as a result of D-WAVE's work on the subject. Precisely what quantum annealing consists of is not as well defined as other models of computation, essentially because it is of more interest to quantum technologists than to computer scientists. Broadly speaking, we can say that it is usually considered by people with the motivations of engineers, rather than the motivations of mathematicians, so that the subject appears to have many intuitions and rules of thumb but few 'formal' results.
In fact, in an answer to my question about quantum annealing, Andrew O goes so far as to say that "quantum annealing can't be defined without considerations of algorithms and hardware". Nevertheless, "quantum annealing" seems well-defined enough to be described as a way of approaching how to solve problems with quantum technologies using specific techniques — and so despite Andrew O's assessment, I think that it embodies some implicitly defined model of computation. I will attempt to describe that model here.

Intuition behind the model

Quantum annealing gets its name from a loose analogy to (classical) simulated annealing. They are both presented as means of minimising the energy of a system, expressed in the form of a Hamiltonian:
$$ \begin{aligned} H_{\mathrm{classical}} &= \sum_{i,j} J_{ij} s_i s_j \\ H_{\mathrm{quantum}} &= A(t) \sum_{i,j} J_{ij} \sigma_i^z \sigma_j^z - B(t) \sum_i \sigma_i^x \end{aligned} $$
With simulated annealing, one essentially performs a random walk on the possible assignments to the 'local' variables $s_i \in \{0,1\}$, but where the probability of actually making a transition depends on:

• the difference in 'energy' $\Delta E = E_1 - E_0$ between the two 'configurations' (the initial and final global assignments to the variables $\{s_i\}_{i=1}^n$) before and after each step of the walk;
• a 'temperature' parameter which governs the probability with which the walk is allowed to perform a step which has $\Delta E > 0$.

One starts with the system at 'infinite temperature', which is ultimately a fancy way of saying that you allow for all possible transitions, regardless of increases or decreases in energy. You then lower the temperature according to some schedule, so that as time goes on, changes in state which increase the energy become less and less likely (though still possible). The limit is zero temperature, in which any transition which decreases energy is allowed, but any transition which increases energy is simply forbidden. For any temperature $T > 0$, there will be a stable distribution (a 'thermal state') of assignments, which is the uniform distribution at 'infinite' temperature, and which is more and more weighted on the global minimum energy states as the temperature decreases. If you take long enough to decrease the temperature from infinite to near zero, you should in principle be guaranteed to find a global optimum to the problem of minimising the energy. Thus simulated annealing is an approach to solving optimisation problems.

Quantum annealing is motivated by generalising the work by Farhi et al. on adiabatic quantum computation [arXiv:quant-ph/0001106], with the idea of considering what evolution occurs when one does not necessarily evolve the Hamiltonian in the adiabatic regime. Similarly to classical annealing, one starts in a configuration in which "classical assignments" to some problem are in a uniform distribution, though this time in coherent superposition instead of a probability distribution: this is achieved for time $t = 0$, for instance, by setting
$$ A(t=0) = 0, \qquad B(t=0) = 1 $$
in which case the uniform superposition $\def\ket#1{\lvert#1\rangle}\ket{\psi_0} \propto \ket{00\cdots00} + \ket{00\cdots01} + \cdots + \ket{11\cdots11}$ is a minimum-energy state of the quantum Hamiltonian. One steers this 'distribution' (i.e.
the state of the quantum system) to one which is heavily weighted on a low-energy configuration by slowly evolving the system — by slowly changing the field strengths $A(t)$ and $B(t)$ to some final value
$$ A(t_f) = 1, \qquad B(t_f) = 0. $$
Again, if you do this slowly enough, you will succeed with high probability in obtaining such a global minimum. The adiabatic regime describes conditions which are sufficient for this to occur, by virtue of remaining in (a state which is very close to) the ground state of the Hamiltonian at all intermediate times. However, it is considered possible that one can evolve the system faster than this and still achieve a high probability of success.

Similarly to adiabatic quantum computing, the way that $A(t)$ and $B(t)$ are defined is often presented as a linear interpolation from $0$ to $1$ (increasing for $A(t)$, and decreasing for $B(t)$). However, also in common with adiabatic computation, $A(t)$ and $B(t)$ don't necessarily have to be linear or even monotonic. For instance, D-Wave has considered the advantages of pausing the annealing schedule and of 'backwards anneals'.

'Proper' quantum annealing (so to speak) presupposes that evolution is probably not being done in the adiabatic regime, and allows for the possibility of diabatic transitions, but only asks for a high chance of achieving an optimum — or, even more pragmatically still, of achieving a result which would be difficult to find using classical techniques. There are no formal results about how quickly you can change your Hamiltonian to achieve this: the subject appears mostly to consist of experimenting with a heuristic to see what works in practice.

The comparison with classical simulated annealing

Despite the terminology, it is not immediately clear that there is much which quantum annealing has in common with classical annealing. The main differences between quantum annealing and classical simulated annealing appear to be that:

• In quantum annealing, the state is in some sense ideally a pure state, rather than a mixed state (corresponding to the probability distribution in classical annealing);
• In quantum annealing, the evolution is driven by an explicit change in the Hamiltonian rather than an external parameter.

It is possible that a change in presentation could make the analogy between quantum annealing and classical annealing tighter. For instance, one could incorporate the temperature parameter into the spin Hamiltonian for classical annealing, by writing
$$\tilde H_{\mathrm{classical}} = A(t) \sum_{i,j} J_{ij} s_i s_j - B(t)\cdot\textit{const.} $$
where we might choose something like $A(t) = t\big/(t_F - t)$ and $B(t) = t_F - t$ for $t_F > 0$ the length of the annealing schedule. (This is chosen deliberately so that $A(0) = 0$ and $A(t) \to +\infty$ for $t \to t_F$.) Then, just as a quantum annealing algorithm is governed in principle by the Schrödinger equation for all times, we may consider a classical annealing process which is governed by a diffusion process proceeding uniformly in time by small changes in configurations, where the probability of executing a randomly selected change of configuration is governed by
$$ p(x \to y) = \min\Bigl\{ 1,\; \exp\bigl(-\gamma\, \Delta E_{x\to y}\bigr) \Bigr\} $$
for some constant $\gamma$, where $\Delta E_{x \to y}$ is the energy difference between the initial and final configurations (computed with respect to $\tilde H_{\mathrm{classical}}$ at time $t$).
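To make the classical side of this analogy concrete, here is a minimal sketch (not from the original answer, assuming numpy) of the diffusion process just described: a Metropolis-style random walk whose acceptance rule uses the schedule $A(t) = t/(t_F - t)$ as an effective inverse temperature, which is equivalent to folding $A(t)$ into $\tilde H_{\mathrm{classical}}$ with $\gamma = 1$. The random couplings, schedule length, and single-flip proposal rule are illustrative assumptions.

```python
# Minimal sketch of classical simulated annealing for H = sum_{ij} J_ij s_i s_j.
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = np.triu(rng.normal(size=(n, n)), k=1)    # random couplings, i < j only
s = rng.integers(0, 2, size=n)               # initial assignment s_i in {0, 1}

def energy(s):
    return s @ J @ s

t_F = 20000
for t in range(t_F):
    beta = t / (t_F - t)                     # A(t): 0 at start, -> infinity at t_F
    k = rng.integers(n)                      # propose flipping one variable
    s_new = s.copy()
    s_new[k] = 1 - s_new[k]
    dE = energy(s_new) - energy(s)
    # Metropolis rule: accept with probability min{1, exp(-beta * dE)}.
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s = s_new

print("final energy:", energy(s))
```

As the text notes, replacing this diffusive update with coherent Schrödinger dynamics is the essential step from classical to quantum annealing.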
The stable distribution of this diffusion for the Hamiltonian at $t=0$ is the uniform distribution, and the stable distribution for the Hamiltonian as $t \to t_F$ is any local minimum; and as $t$ increases, the probability with which a transition occurs which increases the energy becomes smaller, until as $t \to t_F$ the probability of any increase in energy vanishes (because any possible increase becomes infinitely costly).

There are still disanalogies to quantum annealing in this — for instance, we achieve the strong suppression of increases in energy as $t \to t_F$ essentially by making the potential wells infinitely deep (which is not a very physical thing to do) — but this does illustrate something of a commonality between the two models, with the main distinction being not so much the evolution of the Hamiltonian as it is the difference between diffusion and Schrödinger dynamics. This suggests that there may be a sharper way to compare the two models theoretically: by describing the difference between classical and quantum annealing as being analogous to the difference between random walks and quantum walks. A common idiom in describing quantum annealing is to speak of 'tunnelling' through energy barriers — this is certainly pertinent to how people consider quantum walks: consider for instance the work by Farhi et al. on continuous-time quantum speed-ups for evaluating NAND circuits, and more directly foundational work by Wong on quantum walks on the line tunnelling through potential barriers. Some work has been done by Chancellor [arXiv:1606.06800] on considering quantum annealing in terms of quantum walks, though it appears that there is room for a more formal and complete account. On a purely operational level, it appears that quantum annealing gives a performance advantage over classical annealing (see for example these slides on the difference in performance between quantum and classical annealing, from Troyer's group at ETH, ca. 2014).

Quantum annealing as a phenomenon, as opposed to a computational model

Because quantum annealing is more studied by technologists, they focus on the concept of realising quantum annealing as an effect, rather than defining the model in terms of general principles. (A rough analogy would be studying the unitary circuit model only inasmuch as it represents a means of achieving the 'effects' of eigenvalue estimation or amplitude amplification.) Therefore, whether something counts as "quantum annealing" is described by at least some people as being hardware-dependent, and even input-dependent: for instance, depending on the layout of the qubits and the noise levels of the machine. It seems that even trying to approach the adiabatic regime will prevent you from achieving quantum annealing, because the idea of what quantum annealing even consists of includes the idea that noise (such as decoherence) will prevent annealing from being realised: as a computational effect, as opposed to a computational model, quantum annealing essentially requires that the annealing schedule is shorter than the decoherence time of the quantum system.

Some people occasionally describe noise as being somehow essential to the process of quantum annealing. For instance, Boixo et al. [arXiv:1304.4595] write: Unlike adiabatic quantum computing[, quantum annealing] is a positive temperature method involving an open quantum system coupled to a thermal bath.
It might perhaps be more accurate to describe noise as being an inevitable feature of systems in which one will perform annealing (just because noise is an inevitable feature of any system in which you will do quantum information processing of any kind): as Andrew O writes, "in reality no baths really help quantum annealing". It is possible that a dissipative process can help quantum annealing by helping the system build population on lower-energy states (as suggested by work by Amin et al. [arXiv:cond-mat/0609332]), but this seems essentially to be a classical effect, and would inherently require a quiet low-temperature environment rather than 'the presence of noise'.

The bottom line

It might be said — in particular by those who study it — that quantum annealing is an effect, rather than a model of computation. A "quantum annealer" would then be best understood as "a machine which realises the effect of quantum annealing", rather than a machine which attempts to embody a model of computation known as 'quantum annealing'. However, the same might be said of adiabatic quantum computation, which is — in my opinion correctly — described as a model of computation in its own right.

Perhaps it would be fair to describe quantum annealing as an approach to realising a very general heuristic, and to say that there is an implicit model of computation which could be characterised as the conditions under which we could expect this heuristic to be successful. If we consider quantum annealing this way, it would be a model which includes the adiabatic regime (with zero noise) as a special case, but it may in principle be more general.
Science X Newsletter
Wednesday, Mar 31

Here is your customized Science X Newsletter for March 31, 2021:

Spotlight Stories Headlines

A new strategy to enhance the performance of silicon heterojunction solar cells
Neuroscientists have identified a brain circuit that stops mice from mating with others that appear to be sick
Snakes, rats and cats: the trillion dollar invasive species problem
Researchers achieve world's first manipulation of antimatter by laser
Deep diamonds contain evidence of deep-Earth recycling processes
450-million-year-old sea creatures had a leg up on breathing
New study discovers ancient meteoritic impact over Antarctica 430,000 years ago
Scientists create the next generation of living robots
'Sweat sticker' diagnoses cystic fibrosis on the skin in real time
Indian astronomers probe X-ray pulsar 2S 1417–624
Small-molecule therapeutics: Big data dreams for tiny technologies
Quantum material's subtle spin behavior proves theoretical predictions
Decades of hunting detects footprint of cosmic ray superaccelerators in our galaxy
Greenland caves: Time travel to a warm Arctic
Scientists discover unique Cornish 'falgae'

Physics news

Researchers achieve world's first manipulation of antimatter by laser
Researchers with the CERN-based ALPHA collaboration have announced the world's first laser-based manipulation of antimatter, leveraging a made-in-Canada laser system to cool a sample of antimatter down to near absolute zero. The achievement, detailed in an article published today and featured on the cover of the journal Nature, will significantly alter the landscape of antimatter research and advance the next generation of experiments.

Quantum material's subtle spin behavior proves theoretical predictions
Using complementary computing calculations and neutron scattering techniques, researchers from the Department of Energy's Oak Ridge and Lawrence Berkeley national laboratories and the University of California, Berkeley, discovered the existence of an elusive type of spin dynamics in a quantum mechanical system.

Lab-made hexagonal diamonds stiffer than natural diamonds

'Agricomb' measures multiple gas emissions from... cows
After the optical frequency comb made its debut as a ruler for light, spinoffs followed, including the astrocomb to measure starlight and a radar-like comb system to detect natural gas leaks. And now, researchers have unveiled the "agricomb" to measure, ahem, cow burps.

Super-precise Fermilab experiment carefully analyzing the muon's magnetic moment
Modern physics is full of the sort of twisty, puzzle-within-a-puzzle plots you'd find in a classic detective story: Both physicists and detectives must carefully separate important clues from unrelated information. Both physicists and detectives must sometimes push beyond the obvious explanation to fully reveal what's going on.

New theory suggests uranium 'snowflakes' in white dwarfs could set off star-destroying explosion
A pair of researchers with Indiana University and Illinois University, respectively, has developed a theory that suggests crystallizing uranium "snowflakes" deep inside white dwarfs could instigate an explosion large enough to destroy the star. In their paper published in the journal Physical Review Letters, C. J. Horowitz and M. E. Caplan describe their theory and what it could mean to astrophysical theories about white dwarfs and supernovas.
Heat conduction record with tantalum nitride
A thermos bottle has the task of preserving the temperature—but sometimes you want to achieve the opposite: computer chips generate heat that must be dissipated as quickly as possible so that the chip is not destroyed. This requires special materials with particularly good heat conduction properties.

A successful phonon calculation within the quantum Monte Carlo framework
The focus and ultimate goal of computational research in materials science and condensed matter physics is to solve the Schrödinger equation—the fundamental equation describing how electrons behave inside matter—exactly (without resorting to simplifying approximations). While experiments can certainly provide interesting insights into a material's properties, it is often computations that reveal the underlying physical mechanism. However, computations need not rely on experimental data and can, in fact, be performed independently, an approach known as "ab initio calculations." Density functional theory (DFT) is a popular example of such an approach.

Study shows promise of quantum computing using factory-made silicon chips
The qubit is the building block of quantum computing, analogous to the bit in classical computers. To perform error-free calculations, quantum computers of the future are likely to need at least millions of qubits. The latest study, published in the journal PRX Quantum, suggests that these computers could be made with industrial-grade silicon chips using existing manufacturing processes, instead of adopting new manufacturing processes or even newly discovered particles.

Development of a broadband mid-infrared source for remote sensing
A research team of the National Institutes of Natural Sciences, National Institute for Fusion Science and Akita Prefectural University has successfully demonstrated a broadband mid-infrared (MIR) source with a simple configuration. This light source generates a highly stable broadband MIR beam in the 2.5-3.7 μm wavelength range while maintaining brightness owing to its high beam quality. Such a broadband MIR source facilitates a simplified environmental monitoring system by constructing a MIR fiber-optic sensor, which has potential for industrial and medical applications.

Astronomy and Space news

Indian astronomers probe X-ray pulsar 2S 1417–624
Using the Neutron Star Interior Composition Explorer (NICER) instrument aboard the International Space Station (ISS) and NASA's Swift spacecraft, astronomers from India have investigated an X-ray pulsar known as 2S 1417–624. Results of the study, published March 24, provide important information about the evolution of different timing and spectral properties of this source during its recent outburst.

Decades of hunting detects footprint of cosmic ray superaccelerators in our galaxy
An enormous telescope complex in Tibet has captured the first evidence of ultrahigh-energy gamma rays spread across the Milky Way. The findings offer proof that undetected starry accelerators churn out cosmic rays, which have floated around our galaxy for millions of years. The research is to be published in the journal Physical Review Letters on Monday, April 5.

NASA tests mixed reality, scientific know-how and mission operations for exploration
Mixed reality technologies, like virtual reality headsets or augmented reality apps, aren't just for entertainment—they can also help make discoveries on other worlds like the Moon and Mars.
By traveling on Earth to extreme environments—from Mars-like lava fields in Hawaii to underwater hydrothermal vents—similar to destinations on other worlds, NASA scientists have tested out technologies and tools to gain insight into how they can be used to make valuable contributions to science.

Two strange planets: Neptune and Uranus remain mysterious after new findings
Uranus and Neptune both have a completely skewed magnetic field, perhaps due to the planets' special inner structures. But new experiments by ETH Zurich researchers now show that the mystery remains unsolved.

Until now, researchers have believed that dark energy accounted for nearly 70 percent of the ever-accelerating, expanding universe.

First X-rays from Uranus discovered
Astronomers have detected X-rays from Uranus for the first time, using NASA's Chandra X-ray Observatory. This result may help scientists learn more about this enigmatic ice giant planet in our solar system.

US, China consulted on safety as their crafts headed to Mars
As their respective spacecraft headed to Mars, China and the U.S. held consultations earlier this year in a somewhat unusual series of exchanges between the rivals.

NASA's Webb Telescope General Observer scientific programs selected
Mission officials for NASA's James Webb Space Telescope have announced the selection of the General Observer programs for the telescope's first year of science, known as Cycle 1. These specific programs will provide the worldwide astronomical community with one of the first extensive opportunities to investigate scientific targets with Webb.

Venus plots a comeback
In terms of space exploration, Mars is all the rage these days. This has left our closest neighbor, Venus—previously the most attractive planet to study because of its proximity and similar atmosphere to Earth—in the lurch. A new article in Chemical & Engineering News, the weekly newsmagazine of the American Chemical Society, highlights how scientists and space agencies are turning their eyes back toward Venus to learn more about its atmosphere and geology.

Technology news

A new strategy to enhance the performance of silicon heterojunction solar cells
Crystalline silicon (c-Si) solar cells are among the most promising solar technologies on the market. These solar cells have numerous advantageous properties, including a nearly optimum bandgap, high efficiency and stability. Notably, they can also be fabricated using raw materials that are widely available and easy to attain.

Scientists create the next generation of living robots
Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called "Xenobots" that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

The global race to develop 'green' hydrogen

Roboreptile climbs like a real lizard

Scientists design 'smart' device to harvest daylight
A team of Nanyang Technological University, Singapore (NTU Singapore) researchers has designed a 'smart' device to harvest daylight and relay it to underground spaces, reducing the need to draw on traditional energy sources for lighting.
Thermal power nanogenerator created without solid moving parts
As environmental and energy crises become increasingly common occurrences around the world, a thermal energy harvester capable of converting abundant thermal energy—such as solar radiation, waste heat, combustion of biomass, or geothermal energy—into mechanical energy appears to be a promising energy strategy to mitigate many crises.

Assessing how much data iOS and Android share with Apple and Google
The School of Computer Science and Statistics in Dublin, Ireland, has begun investigating how much user data iOS and Android send to Apple and Google, respectively. Overall, they discovered that, even when the devices are idle or minimally configured, each tends to share data on average every 4.5 minutes.

Even without a brain, these metal-eating robots can search for food
When it comes to powering mobile robots, batteries present a problematic paradox: the more energy they contain, the more they weigh, and thus the more energy the robot needs to move. Energy harvesters, like solar panels, might work for some applications, but they don't deliver power quickly or consistently enough for sustained travel.

Volkswagen hoaxes media with fake news release as a joke
Volkswagen of America issued false statements this week saying it would change its brand name to "Voltswagen," as a way to stress its commitment to electric vehicles, only to reverse course Tuesday and admit that the supposed name change was just a joke.

A hydrogen future for planes, trains and factories
Hydrogen could potentially power trains, planes, trucks and factories in the future, helping the world rid itself of harmful emissions.

A physical party to prove you're a real virtual person
The ease of creating fake virtual identities plays an important role in shaping the way information—and misinformation—circulates online. Could 'pseudonym' parties, which would verify proof of personhood rather than proof of identity, resolve this tension?

New AI tool 85% accurate for recognizing and classifying wind turbine blade defects
Demand for wind power has grown, and with it the need to inspect turbine blades and identify defects that may impact operating efficiency.

ESAIL captures 2 million messages from ships at sea
The ESAIL microsatellite for making the seas safer has picked up more than two million messages from 70 000 ships in a single day.

Facebook's new tool lets users control what they see, share on their News Feeds
Facebook is launching new updates that allow users to control their News Feed algorithm, according to a statement by the tech giant.

Tesla's range put to the test
Edmunds' test team recently published the results of its real-world range testing for electric vehicles. Notably, every Tesla the team tested in 2020 came up short of matching the EPA's range estimate. Almost all other EVs Edmunds tested met or exceeded those estimates.

High production rates for fuel cells
To create a sustainable road traffic system, hundreds of thousands of fuel cells will be needed for hydrogen-powered cars in the future. Until now, though, fuel cell production has been complex and too slow. The Fraunhofer team is therefore developing a continuous production line that will be able to process fuel cell components in cycles lasting just seconds. The pilot line is set to be presented at the Hannover Messe Digital Edition from April 12 to April 16, 2021.
Smart algorithms make packaging of meat products more efficient
In supermarkets you can find a large variety of poultry products, all conveniently packaged in fixed-weight quantities. However, poultry processing plants face numerous challenges due to these fixed-weight batches, growing throughput requirements and small profit margins. To assist the poultry processing plant industry, TU/e researcher Kay Peeters has developed new production control and planning strategies that reduce operational costs.

Will you be paying with a Visa, Mastercard or Bitcoin?

Spotify acquires Clubhouse competitor Betty Labs as live audio popularity grows
Spotify is entering the live audio market after it announced Tuesday its acquisition of Betty Labs, the creators of the live audio app Locker Room.

Advocacy groups urge FTC to be tougher on Google with protecting kids' privacy on apps
Two advocacy groups want the Federal Trade Commission to take a tougher stance against Google, accusing its app store of recommending apps that transmit kids' personal information such as location without their parents' consent, in violation of a 1998 law that protects children online.

How many countries are ready for nuclear-powered electricity?
As demand for low-carbon electricity rises around the world, nuclear power offers a promising solution. But how many countries are good candidates for nuclear energy development?

New OnePlus models take the flagship phone game up a notch
There isn't much in the tech world that makes me happier than a day when we get new flagship phones.

New Hampshire coastal recreationists support offshore wind
As the Biden administration announces a plan to expand offshore wind energy development (OWD) along the East Coast, research from the University of New Hampshire shows significant support from an unlikely group: coastal recreation visitors. From boat enthusiasts to anglers, researchers found surprisingly widespread support, with close to 77% of coastal recreation visitors supporting potential OWD along the N.H. Seacoast.

Microsoft wins $22 billion deal making headsets for US Army
Microsoft won a nearly $22 billion contract to supply U.S. Army combat troops with its augmented reality headsets.

Japan's Hitachi acquires GlobalLogic for $9.6 billion
Hitachi Ltd. is buying U.S. digital engineering services company GlobalLogic Inc. for $9.6 billion, the Japanese industrial, electronic and construction conglomerate said Wednesday.

Deliveroo skids on stock market debut
Deliveroo skidded on its stock market launch Wednesday, with its share price slumping by almost a third in value after the app-driven meals delivery company faced criticism from institutional investors over its treatment of self-employed riders.

Counting begins in vote on first Amazon labor union
Counting of votes cast by Amazon employees at an Alabama warehouse began Tuesday to determine whether it would become the first union shop at the e-commerce colossus.

Huawei posts record profit but US pressure, pandemic hit revenue
Chinese telecom giant Huawei said Wednesday it achieved the latest in a string of record profits last year, but revenue growth slowed sharply because of the pandemic and tightening US pressure that has pushed it into new business lines to survive.

Sports cards have gone virtual, and in a big way
Maybe the Luka Doncic rookie basketball card that recently sold at auction for a record $4.6 million was a bit rich for your blood.
Perhaps you'd be interested in a more affordable alternative—say, a virtual card of the Dallas Mavericks forward currently listed for a mere $150,000? Delta joins other US airlines in ending empty middle seats Delta Air Lines, the last U.S. airline still blocking middle seats, will end that policy in May as air travel recovers and more people become vaccinated against COVID-19. This email is a free service of Science X Network You received this email because you subscribed to our list.
36662b7f173c7402
Quantum Physics in Consciousness Studies

Dirk K. F. Meijer and Simon Raggett

Review/Literature compilation: The Quantum Mind Extended*

Contents:
Introduction to quantum aspects of brain function
Quantum approaches to neurobiology: state of the art
David Bohm: Wholeness and the implicate order
Henry Stapp: Attention, intention and quantum coherence
Roger Penrose: Consciousness and the geometry of the universe
Stuart Hameroff: Objective reduction in brain tubules
Hiroomi Umezawa and Herbert Frohlich: Quantum brain dynamics
Mari Jibu & Kunio Yasue: Quantum field concepts
Johnjoe McFadden: Electromagnetic fields
Gustav Bernroider: Ion channel coherence
Chris King: Cosmology, consciousness and chaos theory
Piero Scaruffi: Consciousness as a feature of matter
Danko Georgiev: The quantum neuron
Andrei Khrennikov: Quantum-like brain and other metaphoric QM models
Hu and Wu / Persinger: Spin-mediated consciousness
Chris Clarke: Qualia and free will
Herms Romijn: Photon-mediated consciousness and recent models
Stuart Kauffman: Consciousness & the poised state
Post-Bohmian concepts of a universal quantum field
Dirk Meijer: Cyclic operating mental workspace
Amit Goswami: The vacuum as a universal information field
Simon Raggett: A final attempt at a theory of consciousness
Note on cited sources
References
Internet sites

Introduction to quantum aspects of brain function

Since the development of QM and relativistic theories in the first part of the 20th century, attempts have been made to understand and describe the mind or mental states on the basis of QM concepts (see Meijer, 2014; Meijer and Korf, 2013). Quantum physics, currently seen as a further refinement in the description of nature, does not only describe elementary microphysics but applies to classical or macro-physical (Newtonian) phenomena as well. Hence the human brain and its mental aspects, while associated with classical brain physiology, are also part of a quantum physical universe. Most neurobiologists have considered QM mind theories irrelevant for understanding brain/mind processes (e.g. Edelman and Tononi, 2000; Koch and Hepp, 2006). However, there is no single QM brain/mind theory. In fact a spectrum of more or less independent models has been proposed, each with its own intrinsic potential and problems. The elements of quantum physics discussed here are summarized in Tables 1 and 2; details of the various QM theories have been described elsewhere (Meijer, 2012; Meijer and Korf, 2013). Some QM mind options assume some sort of space-time multidimensionality, i.e. more than the four conventional space-time dimensions. Other options assume that one or more extra dimensions are associated with a mental attribute, or that the individual mind is (partly) an expression of a universal mind through holonomic communication with quantum fields (Fig. 1). The latter idea has led to holographic (holonomic) theories (Pribram 1986, 2011). The human brain is then conceived as an interfacing organ that not only produces mind and consciousness but also receives information. The brain, or parts of the brain, is conceived as an interference hologram of incoming data and already existing data (a "personal universe"). If properly exposed ("analyzed"), information about the outer world can be distilled (a schematic expression of this holographic analogy is sketched below).
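As a hedged aside (an illustrative textbook expression, not a formula from Pribram or the other authors cited), the holographic analogy can be made concrete. A hologram records the interference of a reference wave $R$ (already existing data) with an object wave $O$ (incoming data); the recorded intensity retains their relative phase:

$$ I \;=\; |R+O|^{2} \;=\; |R|^{2} + |O|^{2} + R^{*}O + RO^{*}, $$

so that re-illuminating the record with $R$ yields a term $|R|^{2}\,O$ proportional to the original object wave. In the brain analogy, "exposing" the stored interference pattern with the right reference activity would correspondingly distill information about the outer world.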
In neurobiological terms, the existing data are equivalent to the subject's memory, whereas the "analyzer" is cerebral electrophysiology. Bohm hypothesized that additional dimensions are necessary to describe QM interference processes, thereby circumventing probabilistic theories and consciousness-induced collapse of the wave function. In this theory, the universe is a giant superposition of waves, representing an unbroken wholeness of which the human brain is a part (Bohm, 1990). Accordingly, the individual mind or consciousness is an inherent property of all matter (and energy), and as such is part, or rather an expression, of this universal quantum field. The apparently diffuse time/space localization of mental functions argues in favor of an underlying multidimensional space/time reality. Bohm and Hiley (1987) also proposed a two-arrow (bidirectional) time dimension. In this concept the stochastic (or double stochastic) character of quanta is explained by an underlying quantum field: the implicate order. This concept implies entanglement (non-locality) as well. Another hypothesis, having the potential to couple wave information to mental processes, proposes that wave information is transmitted from and into the brain by wave resonance; through conscious observation these waves collapse locally to material entities (Stapp 2009; Pessa and Vitiello, 2003; Schwartz et al., 2004). Stapp (2012) argued that this does not represent an interference effect between superposed states (as assumed by Hameroff and Penrose, 1996), but that through environmental de-coherence, superpositions become informative to the brain/organism. A complementary implication of these theories is that mental processes are not necessarily embedded in entropic physical time. In line with this QM idea is the notion that memories are not stored as a temporal sequence, but rather a-temporally.

Fig. 1: The universe as part of a larger mind: the hypothesis that the universe and our minds are integral parts of a universal consciousness

Some QM mind theories suppose the possible involvement of specific molecules. A spectrum of ions and molecules has been suggested to operate in a quantum manner (Tuszynski and Woolf 2010). For instance, QM theories have been based on micro-tubular proteins (Penrose 1989; Hameroff 2007), proteins involved in synaptic transmission (Beck and Eccles 1992; Beck 2001), including Ca ion-channels (Stapp 2009), and channel proteins instrumental in the initiation and propagation of action potentials (potassium-ion channels; Bernroider and Roy 2004). There is also the hypothesis that synaptic transmission represents a typical (quantum) probability state that becomes critical for an all-or-none neuronal response (Beck and Eccles 1992; Beck 2001). Attributing non-linear and non-computable characteristics to consciousness, Hameroff and Penrose (2011, 2013) argue against mechanisms of all-or-none firing of axonal potentials (Beck and Eccles, 2003). They prefer instead the model of Davia (2010), proposing that consciousness is related to waves traveling in the brain as a uniting life principle on multiple scales. In some QM mind theories (Woolf and Hameroff, 2001), tunneling was proposed to facilitate membrane/vesicle fusion in neural information processing at the synapse. Kauffman relates quantum processes in the biological matrix of the brain to the emergence of mental processing (Kauffman 2010; Vattay et al. 2012).
This theory, mainly based on chromophores detecting photons, assumes that the coherence of some quantum configurations adhering to proteins is stabilized or maintained by re-coherence. This principle may have guided the evolutionary selection of proteins. Accordingly, mind and consciousness are both quantum mechanical and an expression of classical neural mechanisms. The underlying coherent quantum states provide the potentiality for the collapse to the de-coherent material state, resulting in classical events such as the firing of neurons that are, at least to some extent, a-causal, i.e. beyond classical determinacy. When the quantum system (of the brain) interacts with a quantum environment, the phase information is lost and cannot be reassembled. By entanglement, the quantum coherence in a small region, e.g. the cell or the brain, might have spatial long-range effects (Vattay et al. 2012; Hagan et al. 2002). Kauffman accepts long-lived coherence states in biological molecules at body temperature (currently 750 femtoseconds in chlorophyll at 77 K) as potentially enabling parallel problem solving, a major challenge for further investigation. A related question is which neurons or neuronal structures are particularly associated with the coherence/de-coherence brain model of consciousness.

The question is often put as to why quantum theory should be involved in discussions of consciousness at all, and also why it should be treated as something special. In thinking about quantum theory, it is important not to be bullied into viewing it as something weird and peripheral that can be ignored (Atmanspacher, 2011). Unfortunately, such a view allows the more superficial thinkers to dismiss all theories of quantum consciousness. This sort of practice has recently been criticized as 'pseudoscepticism', a parallel form to pseudoscience. Pseudo-scepticism (see Wikipedia) similarly uses denunciation in the name of science or scientific affiliation without citing any evidence or possible experimentation to establish the criticism (see Utts and Josephson, 1996). The features of quantum theory that make it special, and also possibly relevant to consciousness, can be summarized as follows:

1.) Quantum theory describes the fundamental level of energy and matter. In contrast to higher levels, the quantum level has aspects, such as mass, charge and spin, that are given properties of the universe, not capable of further reduction or explanation. In quantum theories of consciousness, it is suggested that consciousness is such a fundamental property existing at this level. Some theories are additionally linked to the structure of spacetime, which is nowadays seen as interconnected with the nature of the quanta (see Chalmers, 2000; Nagel, 2012).

2.) The other fundamental aspect of the universe is spacetime, as described by the special and general theories of relativity. Although relativity and quantum theory have both been tested to very high degrees of accuracy, they are nevertheless incompatible with one another. The gravitational force is the main problem, since the smooth continuous curvature of space that describes gravity in general relativity is incompatible with the discreteness of particles/waves that is fundamental to quantum theory (the relevant scale is sketched after this list). String theory and loop quantum gravity have attempted to bridge this gap, but neither is yet regarded as giving a complete picture (see Smolin, 2004; Penrose, 2004).
3.) In traditional versions of quantum theory, the wave form of the quanta is conceived as a superposition of the many possible positions of a quantum particle. When the wave function collapses, the choice of a particular position for the particle is random. This choice of position is an effect without a cause. The property of randomness is not in itself particularly useful in theories of consciousness, but it does open a chink in the deterministic structure of the universe, which is exploited in particular by the Penrose/Hameroff model (2013; see also Stapp, 2009, 2012).

4.) Non-locality is the remaining special feature of quantum theory. Classical physics comprises only so-called billiard-ball relationships, with bits of matter and energy bumping into one another. These relationships are local, in that they involve immediate contact. Such relationships are also normal in quantum physics. However, quantum physics additionally possesses non-local relationships. This applies where two particles have been in some close relationship, such as two electrons in the same orbital. In this case they can become correlated. For instance the spins of two particles may always be opposite: if one spins up, the other spins down. This is not a problem while the particles are in a wave form, as both will be in a superposition of up and down. However, if the wave function of one particle collapses, that particle settles randomly into one of the superposed spin states. When that happens, the other particle will take the opposite state. In experiments, this is shown to happen even when the two particles are out of range of a signal travelling at the speed of light. No matter, energy or conventional information is transferred, and the experiment is not regarded as a violation of relativity, but it demonstrates that quantum properties can correlate instantaneously over any distance (a standard formal statement of these correlations is sketched below; see Thomas A., internet link, for a basic introduction to QM).
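As flagged in point 2, a minimal sketch of the scale at which the quantum/gravity incompatibility becomes acute (standard textbook material, not specific to any author cited here): combining the constants of quantum theory ($\hbar$), relativity ($c$) and gravitation ($G$) gives the Planck length and time,

$$ l_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \qquad t_{P} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \mathrm{s}. $$

It is at this scale that string theory and loop quantum gravity attempt a reconciliation, and it is also the scale invoked by the Penrose/Hameroff objective-reduction model discussed later in this review.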
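For point 4, a hedged formal illustration (again the standard textbook expressions, not tied to any particular quantum-mind model): two spin-1/2 particles prepared in the singlet state

$$ |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\big(|{\uparrow}\rangle_{1}|{\downarrow}\rangle_{2} - |{\downarrow}\rangle_{1}|{\uparrow}\rangle_{2}\big) $$

yield, for detector orientations $\mathbf{a}$ and $\mathbf{b}$, the measurement correlation $E(\mathbf{a},\mathbf{b}) = -\,\mathbf{a}\cdot\mathbf{b}$, i.e. perfect anti-correlation whenever the detectors are aligned. Bell-type (CHSH) reasoning shows that any local hidden-variable account must satisfy $|E(\mathbf{a},\mathbf{b}) + E(\mathbf{a},\mathbf{b}') + E(\mathbf{a}',\mathbf{b}) - E(\mathbf{a}',\mathbf{b}')| \le 2$, whereas the singlet correlations reach $2\sqrt{2}$ for suitable settings; the Aspect experiments discussed below confirmed the quantum value.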
The Failure of Modern Consciousness Studies

The study of consciousness was taboo in academic circles through much of the 20th century, at least in part due to the long reign of behaviorism, with even the study of emotion largely proscribed and brains conceived as reasoning machines and nothing else. This started to lift in the late 1980s, and at first this seemed a marvelous opportunity for the advances made in other areas of science to be applied to the neglected area of consciousness. What followed, however, can be seen as an overall negative, establishing orthodoxies which appear to have negligible chance of success in explaining consciousness, while discouraging explanations that relate to new areas of physics or neuroscience.

The traditional explanation for consciousness, or the soul in more traditional language, is known as dualism. This posits a separate spirit stuff and physical stuff, with the spirit stuff capable of acting on the physical stuff, as when the soul commands the body. The core argument against dualism was that for the spirit stuff to act on the physical stuff it would need to have some physically relevant quality, and would therefore not be pure spirit stuff; vice versa, the same applies to the physical stuff. The failure of dualism is one of the few points of agreement between mainstream consciousness studies and those who identify consciousness with a fundamental of the universe (Thompson, 2000).

Functionalism was, at least in the 1990s, the dominant explanation for consciousness, driven by the success of computers as problem-solving and memory-storage machines. The main proposal is that any system or machine that processes information in the same way as the brain will be conscious, regardless of what it is made of. The biological matter and structure of the human brain was deemed irrelevant. In reality, and despite its popularity, this appears to be a pseudo-theory, kicking the problem of consciousness further down the road. It does not explain how consciousness arises in the brain, nor does it explain how consciousness might at some point arise in silicon or other matter. It seems, however, that functionalism has had a malign effect in making mainstream consciousness-studies practitioners think it unnecessary to take any notice of modern developments in neuroscience or biology.

Identity theory may have been the next most popular theory after functionalism in the 1990s. It declared that consciousness was identical to the brain or identical to its processing. However, it made little attempt to explain why it was identical to the brain, but not to any of the other physical structures in the universe. Nor did it attempt to define what it meant by the brain, despite the fact that our understanding of the physical processing of the brain was changing dramatically. It was further undermined by the discovery that much neural processing, such as the dorsal stream governing spontaneous movement, could be brought to completion outside of consciousness, consciousness being seen as more closely related to longer-term evaluations and planning.

Epiphenomenalism was and remains another popular idea. The theory proposes that consciousness is a by-product of neural processing that has, however, no function. Despite its popularity this concept is beset by at least three major problems. It conflicts with evolutionary theory, in that it is hard to see why evolution should select for something that had no function, particularly as neural processing is exceptionally energy-hungry. The theory also conflicts head-on with physics, in which there is no acausality, every object or process having influences elsewhere. Finally, even granting the idea of a functionless by-product, there is still no physical evidence for what produces such a thing in the brain. Like functionalism it appears to be a pseudo-theory.

In the present century, there seems to have been a tacit recognition that functionalism and identity theory would have difficulty in becoming the consensus of a wider public. This appears to have given rise to two more theories that avoid treating consciousness as a fundamental. Consciousness resulting from embodiment has been possibly the most fashionable of these ideas. Initially, embodiment ideas did represent a genuine step forward in both consciousness studies and psychology, as a move away from the brain as a computer in a vat. It is now accepted that mental events can influence the body, that visceral events can feed back on the brain, and that emotion is a relevant aspect of mental life. However, there was an over-reach in suggesting that the body somehow drove a consciousness that the brain could not produce. This seemed to assume some kind of undefined special property in the body that was not present in the brain. More specifically, it ignored the fact that, apart from the sense of touch, signals enter the brain directly from the environment and are consciously processed in the higher sensory and frontal cortex before being signaled to the viscera.
The attempt to classify consciousness as a form of information or information processing has also become fashionable in this century. Interestingly, there are innumerable examples of non-conscious information, especially in modern technology, with no apparent specification as to how conscious information would differ from non-conscious information (Meijer, 2012, 2013a, 2014). At a more philosophical level, there is a core difference between information and reality: information embraces only what we happen to know, whereas reality comprises nature's actual behavior and microscopic make-up. Thus the hunter-gatherer in ancient Africa, glancing up at the sun, is only aware of its glare, heat and position in the sky; a fuller understanding of its reality has to wait for modern science.

A popular but poorly based concept is to call consciousness an emergent property. The idea of consciousness as an emergent property of classically described matter is superficially plausible, and as such can sometimes look like the best shot of modern consciousness studies. Emergence is a familiar process in physics. Thus liquidity is an emergent property of water: the individual component hydrogen and oxygen atoms do not have the property of liquidity, but when they are bound together in a sufficiently large number of water molecules, the property of liquidity emerges. The problem for this as an explanation of consciousness is that when emergent properties such as liquidity arise in nature, the emergence can be traced to the component particles and forces, such as the electromagnetic interactions between the water molecules. The macroscopic liquidity is an effect of the microscopic electrical charges and the resulting charge relationships. The problem for consciousness as an emergent property is that no arrangement of such particles and forces has been identified that could produce consciousness. Many continue to assert furiously that this is possible, but the claim being made is in fact the same as dualism, where two things that have no common property are required to act on one another. Anybody who thinks this is possible in physics could simplify their search for consciousness by accepting the idea of dualistic spirits (Murphy, 2007, 2011; Auletta et al., 2008; Clayton and Davies, 2006).

In the last two decades, consciousness studies has gone off in a different direction from physics and neuroscience. Much of the field is dominated by philosophers and psychologists who have only scant interest in what has been happening in brain science, let alone physics. In many cases, they see it as their duty to prop up a nineteenth-century Newtonian world view, while dealing in abstractions that take limited account of neuroscience or physics. Neuroscientists have meanwhile been pressured into treating consciousness as not part of their remit, deferring to philosophers when it was necessary to discuss consciousness, even when the philosophy was contradicted by the neuroscientists' own discoveries. More fundamental approaches have fallen victim to black propaganda against them.
It seems likely that mainstream consciousness studies, if it survives at all, will reach the end of the 21st century without having achieved consensus on a theory that has explanatory value.

The Descent into the Quantum World

Fig. 2: Some central elements of quantum physics: uncertainty of the position of particles and wave/particle duality as demonstrated in the double-slit experiment (upper part), as well as entanglement (non-locality) of particles at great distances, the phenomenon of coherence/decoherence and superposition of waves (lower part).

The Quantum Wave

The Two-Slit Experiment in Quantum Mechanics

Fig. 3: The famous double-slit experiment: single particles behave like a wave front that shows an interference pattern on the screen (a); even after the particles have passed the two slits, decisions to open or close a slit influence the final pattern (b).

The EPR Experiment and the Copenhagen Interpretation

Fig. 4: Quantum entanglement in a pair of distant elementary particles with regard to spin

The Aspect Experiment and Non-locality

Alain Aspect

The question returned to the fore in the 1980s as technology overtook the original EPR thought experiment. In 1964 John Bell's theorem had shown mathematically how EPR could be tested, and in 1982 Aspect's experiment demonstrated the physical reality of EPR (Aspect, 1982). The Aspect experiment did not invalidate Copenhagen, but it transferred the whole debate from the hypothetical to the scientifically tested level. It presented physics with a stark choice. Either one could accept the Copenhagen interpretation, in which the locality of interactions was preserved but the components of matter and energy were unreal, or one could have a world that was real, but in part governed by non-local influences, Einstein's dreaded 'spooky action at a distance'.

Quantum Gravity and the Search for Reality

The successes of quantum theory (see examples in Fig. 5), which describes matter and energy, and of relativity, which describes space and time, have both been marred by the incompatibility of these two key theories. Relativity describes gravity as the smooth, continuous curvature of space under the influence of massive objects, while quantum theory is based on the idea of energy and matter coming in discrete, discontinuous units. Mathematically these contrasting features lead to infinities, indicating that something is wrong. The attempt to overcome these problems has led to new theories, such as string theory and loop quantum gravity (Smolin, 2005).

Fig. 5: Wave/particle duality in quantum physics should rather be seen as a state in which particle and wave forms are complementary features of a hidden reality (upper left); Pusey et al. showed that the wave form is a physical reality (middle left). Principles of quantum physics are presently used in a large variety of technologies (upper right). A conscious observer, or a detector that observes the double-slit system and provides interpretable data, collapses the wave function to a single-slit pattern.

Loop quantum gravity (LQG) proposes that spacetime is quantized, i.e. comes in discrete units: spacetime is suggested to be created out of a network, or a lattice, or a series of loops. This theory has drawn on the earlier spin-network theory developed by Roger Penrose (1994, 2004), and moves towards viewing particles and spacetime as dual aspects of the same thing.
Problems and Opportunities in Quantum Theory

We have emphasized three problematic aspects of the theory: acausality in the randomness of the wave-function collapse, acausality in the non-local influences demonstrated by EPR-type experiments, and the resulting lack of agreement as to the underlying reality of the physical universe. At the quantum level, we find properties of mass, charge and spin that are given properties of the universe, lacking cause or explanation. If we ask what the charge on the electron is (what it is, not what it does), the answer will be a resounding silence. The quanta and related spacetime appear to be the only level of the physical universe where it might be possible for science to insert consciousness as an additional fundamental property (see for reviews Vannini and Di Corpo, 2008; Hu and Wu, 2010; Tarlaci, 2010; Meijer and Korf, 2013; Pereira, 2003; Atmanspacher, 2011).

Timescales for Neural Processes and Consciousness

In looking at the possible physical underpinnings of neuroscience, Georgiev contrasts what is for consciousness studies the still dominant Newtonian orthodoxy of deterministic causes and effects with quantum physics, in which there is a multitude of potential outcomes rather than a single determined outcome. Georgiev (2003) discusses epiphenomenalism, the theory that consciousness is a by-product of brain processing having only an illusion of causal influence. He points out the evolutionary argument against this view, to the effect that evolution would not select for something that conveyed no selective advantage. In general, he sees the idea that we have no freedom or moral responsibility as counterintuitive. Such a counterintuitive result is seen as the inevitable consequence of explanations based on deterministic classical physics. Quantum mechanics does, however, provide a non-deterministic alternative, in which consciousness underlies the neural processes of making choices and thus effecting future possibilities. The author goes on to discuss the vexed question of the possibility of quantum coherence in the brain. Mainstream consciousness studies has managed to fabricate an orthodoxy to the effect that quantum coherence cannot occur in organic matter. A paper by the physicist Max Tegmark is often quoted in this respect. Tegmark asserted what was already an established position, to the effect that quantum coherence in the brain would be too short-lived to have a functional role in neural processing (Tegmark, 2000).

Max Tegmark

Table 1: The History of Quantum Physics and Quantum Brain Theory

1805: Young: Double-slit experiment
1860: Maxwell: Laws of electromagnetism
1870: Boltzmann: Gas laws / movement of particles
1900: Planck: Quantum aspects of energy
1905: Einstein: Special relativity theory
1908: Minkowski: 4-dimensional spacetime
1913: Bohr: Structure of the atom
1915: Einstein: General relativity theory
1919: Kaluza: Fifth dimension uniting general relativity and electromagnetism
1923: De Broglie: Wave/particle duality, hidden variables
1924: Alfred Lotka: Quantum brain in mind/brain relations
1925: Pauli: Bosons and fermions and elementary particles
1925: Schrödinger: Wave equation for electromagnetic particles
1925: Heisenberg: Uncertainty principle in quantum physics
1925: Uhlenbeck/Goudsmit: Electron spin phenomenon
1926: Born: Statistical description of wave/particle duality
1927: Bohr: Measurement in QM, Copenhagen interpretation
1927: Planck/Heisenberg: Zero-point energy field
1928: Dirac: Quantum electrodynamics / quantum field theory
1928: Arthur Eddington: QM determinism of brain function
1930: Fritz London / Edmond Bauer: Consciousness creates reality
1932: John von Neumann: Relation between QM and consciousness
1934: J. B. S. Haldane: Quantum wave character and life
1934: Niels Bohr: The mind and QM are connected
1940: Wheeler/Feynman: Absorber theory
1942: Casimir: Experimental proof of zero-point energy
1948: Gabor: Holography
1951: Bohm: Hidden variables, pilot waves and implicate order
1955: Pauli/Jung: Synchronicity
1957: Everett: Many-worlds hypothesis
1961: Wigner: Consciousness collapses the quantum state
1964: Bell: Quantum entanglement is non-local
1966: John Eccles / F. Beck: Quantum effects in synaptic transmission
1967: Wheeler: Quantum flavour dynamics of elementary particles
1967: L. M. Ricciardi / H. Umezawa: Quantum neurophysics
1970: Prigogine: Nonequilibrium dynamics, unilateral time
1971: Pribram: The holographic brain
1972: Clauser: Experimental proof of quantum entanglement
1974: Schwarz: Superstring theory
1974: Evan Walker: Quantum tunnelling in brain processes
1976: Sperry: The self in mind/brain concepts
1978: Stuart/Takahashi/Umezawa: Quantum brain dynamics
1980: John Cramer: Transactional interpretation of quantum physics
1982: Aspect: Experimental proof of quantum-correlated particles
1986: Barrow/Tipler: Anthropic cosmological principle
1986: Herbert Fröhlich: Bose-Einstein condensates in biology
1986: Penrose: Quantum-gravity-induced reduction of the wave function
1988: Stephen Hawking: Multiverse concepts
1989: Ian Marshall/Zohar: Consciousness and Bose-Einstein condensates
1989: Puthoff: Particle inertia and zero-point energy
1989: Michael Lockwood: Mind, brain and quantum
1991: Zurek: Decoherence of the quantum function by the environment
1992: Schempp: The quantum principle of MRI in brain scanning
1992: Smolin: Loop quantum gravity and black holes / multiverse
1992: Hameroff/Penrose: Microtubuli/consciousness theory
1992: Pylkkänen: Mind/matter interaction and active information
1993: Goswami: The self-aware universe
1993: Herbert: Elemental mind
1994: Henry Stapp: Ca ions, neuron coherence and free will
1995: Edward Witten: M (string) theory
1995: Mari Jibu / K. Yasue: Ordered water and superradiance
1995: Gordon Globus: Quantum cognition
1996: Price: Backward causation
1996: Chalmers: The hard problem / panpsychism
1998: Scott Hagan: Microtubuli biophoton emission
2000: Wheeler: The participatory universe
2000: Vitiello: Dissipative quantum model of the brain
2001: Wolf: Mind into matter and the soul
2002: Huping Hu / M. Wu: Spin-mediated consciousness
2003: Zeilinger: Information and quantum teleportation
2003: Primas: Tensed time in matter and mind
2004: Laszlo: The informed universe, non-local Akashic field
2005: Yasue/Umezawa: Bioplasm in quantum brain dynamics
2006: Scaruffi: Consciousness in elementary particles
2006: Deutsch: Fabric of reality, quantum computing and multiverse
2012: King: Cosmology of consciousness
2013: Hameroff and Penrose: Modified Orch OR brain model
QM approaches in neurobiology: the state of the art

The following section, taken from Meijer and Korf (2013), discusses the idea that physical quantum concepts literally apply to the mind: the mental domain is considered as an aspect of wave information. The feature of superposition takes a special position here: quantum particles can be present in multiple spatial locations or states and be described by one or more pure-state wave functions simultaneously, of which a single state can finally be selected. Penrose (1989) suggested that the underlying space/time geometry in fact bifurcates during the superposition process and that wave collapse occurs in a non-computable manner. It was suggested that the conditions found in the microtubule could allow coherent quantum particles to form a unity that can be described by a single wave function. These concepts are considered the "hard quantum theories", as opposed to the "soft" or formal theories of the previous section. QM adherents often refer to Wolfgang Pauli (Pauli, 1994), the eminent quantum scientist who suggested that the mental and the material domain are governed by common ordering principles and should be understood as "complementary aspects of the same reality" (see Atmanspacher and Primas, 1977; Primas, 2003). The "hard" mental QM theories apply either to specific brain structures/molecules (this section) or to quantum fields and dimensions, or both.

Vannini and Di Corpo (2008), Hu and Wu (2010) and Tarlaci (2010) listed and attempted to categorize the various published quantum brain models, without a detailed treatment of the individual models. Vannini and Di Corpo distinguish models based on consciousness creating reality, models based on probability aspects of QM, and models based on already established QM ordering principles. Hu and Wu differentiate between models based on the QM elements of entanglement and coherence and models on the relation of QM with consciousness; the latter can include materialistic modes (consciousness emerges from the material brain), dualistic mind/matter models and panpsychistic modes. The first two papers emphasize the potential testability of the various models. More detailed reviews can be found in Pereira (2003) and Tuszynski and Woolf (2010), the latter as an introductory chapter of the instructive book "Emerging Physics of Consciousness", while an excellent and critical overview of the field is provided by Atmanspacher (2011).

A number of more or less specialized scientific journals are, or were, devoted to this subject: NeuroQuantology, Quantum Biosystems, Mind and Matter, and AntiMatters. The Stanford Encyclopedia of Philosophy (see the entries on quantum mechanics and quantum theory) also provides an excellent reference. Additional publications on the topic can be found in: J. Consciousness Exploration & Research, J. New Dualism, Dualism Review, Journal of Cosmology, J. Scientific Exploration, Biosystems, Cognitive Neurodynamics, Science and Consciousness Review, Journal of Mathematical Psychology, Chaos, Solitons and Fractals, Open Systems and Information Dynamics, International Journal of Quantum Chemistry, The Noetic Journal, Neuroscience and Biobehavioral Reviews, Experimental Neurology, J. of Mind and Behavior, Physics of Life Reviews, Syntropy Journal, Biological Cybernetics and Kybernetik. The Journal of Consciousness Studies is an excellent source for various models of consciousness (see for example Fig. 6).
Fig. 6: Some models that have been proposed for human consciousness

Before we delve into the physical aspects of the quantum brain, a number of common misunderstandings about QM modeling should be dealt with:

• There is no single theory on quantum mechanical aspects of brain function. In fact a spectrum of more or less independent models has been proposed, each with its own intrinsic potential and problems (see Table 2; for references see the above-mentioned reviews and Meijer, 2012; Meijer and Korf, 2013).

• In spite of its introduction already in the first part of the 20th century, and the spectacular successes of the theory ever since, some still see quantum physics as a sort of esoteric part of science. It rather represents a revolutionary refinement of classical physics: the theory was required to build an adequate atomic model and, more recently, to explain the experimentally demonstrated teleportation of particles (see Zeilinger, 2000) as well as principles of downward causation (Wheeler, 2002) and time symmetry (see Aharonov et al., 2010). It is also the basis for lasers, semi- and superconductors, and microchip technology, as well as MRI brain scanning (Marcer and Schempp, 1997). It should also be kept in mind that classical physics can be fully derived from quantum physics, not the other way around.

• Quantum physics is rejected by some because so many interpretations of the theory are at stake (Copenhagen, many worlds, implicate order, interactional theory, micro/macro-scale definition, environmental de-coherence, relational quantum mechanics, etc.). Yet a number of common elements, such as the true particle/wave aspect instead of only a probability function (Pusey et al., 2012), superposition, entanglement/non-locality and coherence/de-coherence phenomena, are experimentally established and remain very usable in practice, although the related semantics should be carefully defined.

• It is often stated that quantum wave-information coherence cannot be maintained long enough in the brain due to interaction with the macro-environment of the brain components. Yet on this point major differences in decoherence-time calculations exist, based on various models and their intrinsic assumptions (see Hagan et al., 2002; Tegmark, 2000; Lloyd, 2011; a schematic form of decoherence is sketched after this list). A central point here is that sub-compartments could be present at the molecular or sub-molecular level that by their special arrangements are quantum-noise protected or coherence stabilized. Examples are internal parts of channel proteins (Bernroider, 2004) and stabilization by clustered (gel/sol) arrangements of cytoplasmic water clusters (see Hameroff and Penrose, 1996; Penrose and Hameroff, 2011). The latter authors proposed a hierarchic model encompassing nerve-cell depolarization, gel/sol transitions resulting in disconnection of microtubuli, shape/volume pulsation of dendrites including reorganization of synaptic contacts, and finally a sol/gel transition stabilizing a new state. Through coherence and macroscopic entanglement, the lifetime of wave information can be much longer than in the classical phase, as a consequence of coherence/decoherence dynamic equilibria, allowing non-local remote interaction in large numbers of entangled neurons. Such gel/sol oscillations could even be primary to the excitation/depolarization triggered by normal sensory stimuli, and are supposed to interact with zero-point vacuum dipole vibrations (the bi-vacuum matrix model of Kaivarainen, 2006).
• It should be realized that decoherence does not by definition imply destruction of information since, firstly, that would not be compatible with the quantum principles of non-cloning and non-deletion; secondly, a cyclic process of decoherence and re-coherence cannot be excluded (see Hartmann et al., 2006; Li and Paraoanu, 2009; Atmanspacher, 2011); while thirdly, even if such decoherence does occur, it may result in a mixture of possibilities that may be accommodated by the collection of perceivable worlds in the brain (Stapp, 2012). It has been proposed by Vattay and Kauffman (2012) that a decoherent state can be converted back to a coherent state by the input of adequate phase and amplitude information. The resulting coherent states can last long enough in warm biological systems to enable, for example, coherent search processes for antenna-mediated transport of photon energy in photosynthesis. The authors postulate that similar "poised realm" micro-domains, on the edge of chaos, could also be instrumental in the human brain as sites where a dynamic interplay of decoherence and re-coherence takes place.

• It is often assumed that QM is only valid for a description of nature on the micro-scale (elementary particles etc.). Yet convincing evidence has more recently been presented that quantum physics can be applied to macromolecules (Zeilinger, 2000) and, to the surprise of many, can even occur in warm and wet biological systems (photosynthesis: Engel et al., 2010) and in the brains of birds in relation to magnetic sensing and navigation (for references see Arndt et al., 2009; Lloyd, 2011), see Fig. 7.

Fig. 7: Quantum phenomena that have been detected at the macro-scale of life

• Lloyd concluded: "Quantum coherence plays a strong role in photosynthetic energy transport, and may also play a role in the avian compass and sense of smell. In retrospect, it should come as no surprise that quantum coherence enters into biology. Biological processes are based on chemistry, and chemistry is based on quantum mechanics. If an organism can attain an advantage in reproduction, however slight, by putting quantum coherence to use, then over time natural selection has a chance to engineer the necessary biochemical mechanisms to exploit that coherence. Different types of quantum processes that operate at the same time scale can interact strongly either to assist or to impede one another. In photosynthetic energy transfer, the convergence of quantum time scales gives rise to more efficient and robust transport. Evolved biological systems exhibit the quantum Goldilocks effect: natural selection pushes together time scales to allow quantum processes to help each other out".
• A spectrum of atoms and molecules has been suggested to operate in a quantum manner: Ca2+ and K+ ions, H2O, enzymes, membrane receptor and channel proteins, membrane lipids and neurotransmitter molecules, in addition to macromolecular structures such as DNA/RNA, gap junctions, pre-synaptic vesicles, microtubules and micro-filaments (Tuszynski and Woolf, 2010; Meijer, 2014).

• Since our integral universe can be described by the current laws of QM and relativity, it does not seem warranted to place the human brain outside nature: some even see cosmic architecture mirrored in our complex brain (Kak, 2009; Amoroso, 2003).

• The discussion around higher brain functions is frequently obscured by modalities of promissory materialism: "at present we do not understand consciousness, but within 20 years the problem will be resolved!" Not only is such an extrapolation scientifically unwarranted, it also cannot be falsified. Even more damaging is the assumption that one will find the solution by further use of current technology, instead of postulating new (for example quantum) models and innovative experimental approaches.

• Some QM models are based on the interaction of brain components with experimentally detected quantum fields (Yasue and Jibu, 1995; Vitiello, 1995; Pessa and Vitiello, 2003). The central aspects of realistic quantum field theory hold that the essence of material reality is a set of fields. These fields obey the principles of special relativity and quantum theory, and the intensity of a field at a point gives the probability for finding its associated quanta, the fundamental particles observed by experimentalists. These fields may holographically project into each other, implying interactions/interpenetrations of their associated quantum waves. Vitiello proposes a virtual shadow brain working in a time-reversed mode that stabilizes coherence and neural memory structures.

• It could be worthwhile to project neo-Darwinism and its biological evolution theories against the canvas of potential QM mechanisms, in the sense that parallel quantum superpositions and backward-causation mechanisms can provide explanations and/or alternatives for evolutionary jumps and so-called emergent phenomena (see Davies, 2004; Murphy, 2011; Auletta et al., 2008; Davies and Gregersen, 2010; Ellis, 2005; Vattay et al., 2012). Recently, models were proposed for the transfer of information in biological evolution on the basis of quantum formalisms (Bianconi and Rahmede, 2012; Djordjevic, 2012).

• On the basis of QM concepts one should be prepared to envision uncommon and even utterly strange manifestations of quantum entanglement: certain transpersonal human experiences (Kak, 2009; Radin and Nelson, 2006; Di Biase, 2009a, b; Jahn and Dunne, 2007) should be seen not merely as potentially explainable by QM, but rather as required (Radin and Nelson, 2006) by the concept that our world is part of a quantum universe (Vedral, 2010; Lloyd, 2006; Barrow and Tipler, 1986).
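Since coherence/de-coherence is the recurring technical issue in the bullets above, a minimal schematic (the generic textbook form, with an assumed exponential decay rate, not a calculation from the papers cited) of what decoherence does to a two-state superposition $a|0\rangle + b|1\rangle$: interaction with the environment suppresses the off-diagonal "coherence" terms of the density matrix,

$$ \rho(t) \;=\; \begin{pmatrix} |a|^{2} & a b^{*}\, e^{-t/\tau_{D}} \\ a^{*} b\, e^{-t/\tau_{D}} & |b|^{2} \end{pmatrix}, $$

so that the Tegmark-versus-Hagan debate mentioned above is, in essence, a debate about the size of the decoherence time $\tau_{D}$ in neural matter, and about whether shielded sub-compartments or re-coherence can keep it functionally long.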
QM and Higher Brain Functions

Here we discuss current QM theories as possible bridges between the classical neuronal and the mental concepts. QM theories do indeed apply to the same brain-physiological phenomena, but also introduce typical features such as particle/wave duality, entanglement and non-locality, as well as wave interference and superposition. In addition, processes such as quantum coherence and resonance of wave interactions are at stake.

Table 2: Quantum brain models proposed from 1960 onward (see for references Meijer, 2012; Meijer and Korf, 2013; Meijer, 2014; Vannini and Di Corpo, 2008; Hu and Wu, 2010; Tarlaci, 2010):

Amoroso (2009), Baaquie and Martine (2005), Beck and Eccles (2000), Bernroider (2000), Bohm (1980), Culbertson (1963), Di Biase (2009), Flanagan (2003), Fröhlich (1968), Georgiev (2003), Goswami (1993), Hameroff and Penrose (2012), Herbert (1987), Hu and Wu (2005), Järvilehto (2004), Josephson (1991), Kaivarainen (2006), King (1989), Lockwood (1989), Marshall (1989), Mender (2007), Pereira (2003), Pitkänen (1990), Pribram (1971), Romijn (2000), Sarfatti (2011), Satinover (2002), Stapp (1993), Talbot (1991), Umezawa and Ricciardi (1967), Vannini and Di Corpo (2009), Vitiello (1995), Walker (1970), Wolf (1995), Yasue (1995).

It is not our purpose to assess the various QM theories in detail; rather we intend to discuss some of their major implications regarding the concept of a "quantum brain". The key position of proteins in the quantum-mediated initiation and execution of mental activities was already emphasized. Several QM theories are based on specific properties of proteins, for instance micro-tubular proteins (Penrose, 1989; Hameroff, 2007), proteins involved in facilitating synaptic transmission (e.g. Beck and Eccles, 1992; Beck, 2001), including Ca2+ channels (see Stapp, 2009), as well as specific channel proteins instrumental in the initiation and propagation of action potentials (K+ channels; Bernroider and Roy, 2004), see Fig. 8. QM theories also extend the mind to different space and time dimensions, and some consider the individual mind (partly) as an expression of a universal mind through holonomic communication with quantum fields. In the latter approach, the human brain is conceived as an interfacing organ that not only produces mind and consciousness but also receives information necessary for the full deployment of these mental phenomena (see next section). The central question here is whether neuronal cells are the sole units of information processing in the brain, rather than sub-cellular organelles or molecules (Schwarz et al., 2004).

Fig. 8: Some aspects of quantum brain models: synaptic transmission by vesicular exocytosis of neurotransmitter molecules. Ca2+ influx via a Ca2+-channel protein in the neuronal membrane facilitates fusion of synaptic vesicles in the presynaptic terminal; the fusion of sufficient vesicles leads to transmitter release and depolarization of the postsynaptic membrane, and this fusion process bears a quantum probability character.

A major debate about these theories concerns the possibility of coherent quantum states in the "warm" and wet internal milieu of the brain (see e.g. Atmanspacher, 2011). The defenders of the quantum brain models have argued that in vivo molecular configurations exist that enable the modulation of quantum states through efficient protection and shielding of the wave-interaction compartments in the cells (Hagan et al., 2002). The particular local collapse of the wave function, in this manner, produces new information. As originally proposed by Eccles, this is realized by membrane-protein-induced fluxes of Ca2+ or K+ ions, which then increase the probability of fusion of neurotransmitter-filled vesicles in the synapses, leading to the firing of the particular neuron or even groups of neurons (a toy expression of this probabilistic mechanism is sketched below).
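A hedged toy version of this probabilistic reading of synaptic transmission (an illustrative model with assumed parameters, not the actual Beck-Eccles calculation): if each of $N$ docked vesicles independently fuses with some quantum-trigger probability $q$, and the neuron fires only when at least $k_{0}$ vesicles release transmitter, then

$$ P(\text{fire}) \;=\; \sum_{k=k_{0}}^{N} \binom{N}{k}\, q^{k}\,(1-q)^{\,N-k}. $$

On this reading, a small quantum-level modulation of $q$ shifts the firing probability of the whole neuron, which is how such models let micro-scale indeterminacy reach macro-scale brain events.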
The central hypothesis here is that synaptic transmission represents a typical (quantum) probability state in which the total number of vesicles available for exocytosis is critical for the all-or-none response of neuronal firing (Beck and Eccles, 1992; Beck, 2001). Coherent neuronal perturbations, and especially their entangled state, are supposed to provide non-local "binding" of sensory and cognitive brain centers, and may also enable the perception of qualia and the unitary sense of a "conscious self" (Hameroff, 2007). As the "mesoscopic" scale of brain activity where the "binding" process is expected to occur is in the vicinity of the quantum domain, the binding principle is likely to be a quantum non-local effect, probably the only known physical mechanism capable of performing such a task. One possibility is the formation of a quantum photonic field (Flanagan, 2006); another is the formation of coherent states at the level of trans-membrane ion fluxes such as that of Ca2+, as suggested by Pereira (2003, 2007; see section 8 and Fig. 8).

Hameroff and Penrose (2011, see later) argue against mechanisms of all-or-none firing of axonal potentials as suggested by Beck and Eccles, since such binary states do not include the non-linear and non-computable characteristics of consciousness. They prefer instead the model of Davia (2010; see the chapter in the book edited by Tuszynski and Woolf, 2010), proposing that consciousness is related to traveling waves in the brain as a uniting life principle on multiple scales. The latter is based on energy dissipation, enzyme catalysis and protein folding that maintain the energy balance in an excitable system such as the brain, conditions that are also compatible with the isoenergetic brain model treated in Meijer and Korf (2013). Non-linearity in brain processes is modeled using the well-known Schrödinger equation, adjusted with a non-linear term, as proposed earlier by Walker (see Behera, 2010, in the same book), by which the robustness of a classical approach is combined with the more flexible elements of quantum theory (a generic form of such an equation is sketched below).
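A hedged sketch of what such a non-linear adjustment can look like (the generic cubic form of a non-linear Schrödinger equation; the specific term used by Walker/Behera may differ):

$$ i\hbar\,\frac{\partial \psi}{\partial t} \;=\; -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi \;+\; V\psi \;+\; g\,|\psi|^{2}\psi, $$

where the term $g|\psi|^{2}\psi$ makes the evolution depend on the wave's own intensity, supporting robust, soliton-like traveling waves of the kind the Davia model appeals to, while retaining the flexible superposition elements of quantum theory.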
The originators of this hypothesis (Penrose, 1989; Hameroff, 2007) have argued that microtubules can, in principle, maintain quantum states (i.e. superposition) lasting at least 10^-6 seconds, long enough to be instrumental in the transfer of quantum wave information. Such lasting quantum states are possible because of the shielding of hydrophobic pockets in the particular proteins, as well as the formation of coherent clusters of these molecules that thereby share a common quantum wave function (so-called Bose-Einstein condensates). There is indirect evidence that microtubules may be relevant for neurocognition: their synthesis increases during postnatal development in relation to synaptogenesis and visual learning, deficits in memory accompany their decline in aging, and they interact with general anesthetics (Penrose and Hameroff, 2011, 2012; Tuszynski and Woolf, 2010; Kaivarainen, 2005). Yet such correlative studies should not only be further substantiated with experiments that show quantum states in isolated tubules, as reported by Bandyopadhyay (2011), but should, more importantly, directly demonstrate tubular involvement in higher brain functions in situ. More recently, in 2013, Bandyopadhyay's group demonstrated that in microtubules the energy level of up to 40,000 individual tubulin proteins and the energy level of the microtubule are the same. The water core and the individual tubulin proteins are suggested to control the properties of the microtubule by means of delocalised electromagnetic oscillations. The properties of the microtubule might be taken to suggest that the system can support a macroscopic quantum state. The authors note that prior to this 2013 paper the properties of tubulin and microtubules had not been extensively studied using the up-to-date technologies mentioned here, and that theories that apply to metals, insulators and semi-conductors are not relevant to microtubules.

In conclusion: tubular and synaptic channel proteins exhibit conformational transitions within 10^-9 seconds that may last for 10^-6 seconds or even longer (Beck and Eccles, 1992; Beck, 2001; Bernroider and Roy, 2004; Kaivarainen, 2005). These perturbations may last long enough to be finally detected as miniature neuronal potentials (Hamill et al., 1981; Hagan et al., 2002). The particular mechanisms also imply a manifestation of non-local quantum effects due to distant coherence, a phenomenon that was even recorded in laser-stimulated neuronal cell cultures under conditions where classical physical explanations were excluded (Pizzi et al., 2004). The coherence of such quantum states among brain proteins has been suggested to lead to material changes in brain physiology through orchestrated collapse of quantum-coherent clusters of tubulin proteins, triggered by quantum gravity expressed at the spin (Planck-scale) level. On the basis of a recent theory on the nature of gravity (Verlinde, 2011), postulating that gravity is not a force but rather an entropic compensation for the movement of mass/information, it has been speculated that consciousness may arise from a gravity-mediated reaction to the entropic displacement of information as it occurs at high density in the human brain (Meijer, 2012). In any case, there should be a mechanism to integrate signal processing within a single neuron with that of other, even distant, neurons, and consequently non-local effects due to quantum entanglement should play a role here as well. These quantum processes may explain phenomena such as qualia, meaning, the sensation of unity, intentionality, conflict solving, reliability in the sense of correspondence with the outer world, and the sense of self. The latter is related to the feeling of causal power that could result from a quantum/classical interface in which classical synaptic processes create a quantum-coherent state that enables quantum computation exerting a back-influence on the original synaptic process (Pereira, 2003, 2007). The existence of non-locality in brain function, being a basic property of the universe, strongly argues for an underlying deep reality outside space/time, as originally proposed by Bohm (1990) in the form of an implicate order. Bohm claimed that these mechanisms also play a role in different forms of transpersonal and extrasensory perception by wave resonance with a universal quantum field (Kak, 2009; Jahn and Dunne, 2007; Kafatos and Draganescu, 2000; Kafatos, 2009). The main issue of the present essay is that wave information provides a potential coupling to mental processes. For instance, wave information could be transmitted from and into the brain by wave resonance and may locally collapse to matter entities through conscious observation, including sufficient individual attention and intention (Stapp, 2009).
Stapp (2012) argued recently that this does not represent an interference effect between superposed states, as assumed by Hameroff and Penrose (1996), but that through environmental de-coherence, superpositions will be converted to multiple mixtures of information. Since our brain contains a large collection of perceivable worlds, it is able, by supercausal free choice and subsequent common random choice, to make a fit with one or more of the abovementioned mixed information modalities. The particular waves then spread out, and rapid sequential repetitions (the so-called Zeno effect) may sufficiently maintain coherence in parts of the brain. Of note, Stapp does not see free will as based on quantum probability aspects. He states: "In the original Copenhagen formulation this extra process is initiated by what is called 'a free choice on the part of the experimenter'. The phrase 'free choice' emphasizes the fact that, while a definite particular choice is needed, this choice is not determined by any known law or rule: the purely physical aspects of the theory have, therefore, a significant causal gap, which opens the door to a possible causal input from the mental side of reality."

Quantum information may exert physical effects via a bottom-up flow of information starting at spin networks (Penrose, 1994; Hu and Wu, 2010), which can be passed on in the wave forms of elementary particles and atoms, to be ultimately expressed at the level of neuronal molecules. Meijer and Korf (2013) consider the latter flow of information more feasible than direct transfer through vibratory interference at the molecular level. According to this integral quantum model, perturbations at the various spatiotemporal domains allow both time-symmetric forward and backward causation, and therefore a top-down influence of quantum fields. The basic question is: how are quantum waves or quantum fields finally perceived by the human brain, and how do they influence or even induce phenomena such as (self-)consciousness? Organisms do indeed visually perceive photons that exhibit wave/particle duality; humans can sense fewer than ten photons, whereas insects may even detect a single photon (Baylor et al., 1979; Menini et al., 1995). Sensitive detection is possible with dedicated cellular structures, as for instance in the mammalian retina, which amplify the energy of a single photon by a cascade of processes, based on changes of protein conformations and cellular potential energy, leading to the electrochemical stimulation of neurons projecting to the brain. Recently, photosensitive proteins have been coupled to ion-channel proteins with biotechnological techniques, so that neural activity can be modified or inhibited in vivo by light introduced via optic microfibers (Lima and Miesenböck, 2005; Boyden et al., 2005; Tsai et al., 2009). These experimental approaches demonstrate that quantum effects may directly affect neural function, but it remains to be shown more definitively that this also occurs directly inside the human brain, as has been demonstrated in the brains of birds (see the reviews of Arndt et al., 2009; Lloyd, 2011). Quantum information mechanisms were recently used to model human consciousness, as well as the unconscious in relation to conscious perception (Martin et al., 2013), in which various modalities of non-locality were discussed. Of note, entanglement and non-locality may apply not only to spatial separation but also to temporal separation (Megidish et al., 2012).
It was proposed by Martin that archetypes can be stored as quantum systems and that consciousness may be controlled by quantum entanglement from outside space-time. Although this cannot be easily envisioned, Nicolescu (1992, 2011) made clear that the relations between different levels of reality have to be interpreted in the framework of Gödel's incompleteness theorems, and that it may be intrinsically impossible to construct a complete theory for describing the unity of all levels of reality. Interestingly, a 5-dimensional space-time brane model was recently proposed in order to adequately position consciousness and universal consciousness in the cosmos (Carter, 2014a, 2014b), an item that was discussed earlier by Smythies (2003), suggesting that consciousness may be in a brane rather than in the brain. Atmanspacher (2003) explained that mind/matter correlations may require new science, in the sense that the use of emergence and reductionistic schemes may not be adequate and should be replaced by possible symmetry breaking within a domain in which matter and mind are unseparated. He cites d'Espagnat, postulating an independent "Ultimate Reality" that is neither mental nor material.

Another issue is whether more or less random quantum events can be orchestrated in such a way that the information becomes meaningful for the brain. Thus the major challenge is to directly demonstrate that proteins such as those in microtubules, K+-channels or synaptic vesicles and associated proteins become informative to the organism. It has been put forward that a combination of quantum mechanisms and non-linear (chaos) theory has to be considered in the amplification of subtle external information necessary for immediate action (King, 2003, 2011). Future information (feeling of future events) may be realized by time-reversed sensing of such an event on the basis of an attractor state. According to the "supercausal" model of consciousness of Chris King, the constant interaction between information coming from the past and information coming from the future means that quantum entities are constantly confronted with bifurcations between past and future causes. This involves fractal structures and chaotic dynamics that enable free choices to be performed. Consequently, consciousness should be a property of all living structures, in which each biological process is forced to choose between information coming from the past and information coming from the future (King, 2003). Such models (including that of Vannini and Di Corpo, 2008) attribute consciousness to principles of relativity, quantum physics and fractal geometry, and on the basis of established physical applications of these theories would, in principle, allow experimental testing to falsify them. It is of interest that top-down recurrent connections in the higher-order associative cortex were shown to be indispensable for conscious perception (Boly et al., 2011). In more general terms: processing and amplification of quanta/wave information in the brain may underlie the presumed higher brain or mental functions. If one assumes that such detection mechanisms do indeed operate in the brain, then the next question is whether the information to be processed is exclusively associated with quantum waves or quantum states or, alternatively, with the specific proteins that carry them.
Apart from discussing the inherent mechanisms such as forward and backward causation, superposition and entanglement in the mental space, we briefly treat the idea that the individual mind may, at least partly, be an expression of universal consciousness, as opposed to the concept that the mind is merely an attribute of matter.

David Bohm: Wholeness and the Implicate Order

David Bohm and Louis de Broglie

David Bohm (1980, 1990) took the view that quantum theory and relativity contradicted one another, and that this contradiction implied that there existed a more fundamental level in the physical universe. He claimed that both quantum theory and relativity pointed towards this deeper theory. This more fundamental level was supposed to represent an undivided wholeness and an implicate order, from which arose the explicate order of the universe as we actually experience it. The explicate order is seen as a particular case of the implicate order (Fig. 9).

Fig. 9: The Implicate Order concept of David Bohm, in which particles and their more complex forms in our classical world are steered by so-called pilot waves that operate from a 4-dimensional hidden domain, in a mode of active information.

The implicate order applies both to matter and consciousness, and it can therefore explain the relationship between these two apparently different things. Mind and matter are here seen as related projections into our explicate order from the underlying reality of the implicate order. Bohm claims that when we look at the extension of matter and separation of its parts in space, we can see nothing in these concepts that helps us with understanding consciousness. Bohm compares this problem to Descartes' discussion of the difference between mind and matter. Descartes to some extent relied on God to resolve the gap. Bohm says that since Descartes' time the idea of introducing God into the equation has been dropped, but he argues that as a result conventional modern thinking has no way left to it for bridging the gap between matter and consciousness. In Bohm's scheme there is an unbroken wholeness at the fundamental level of the universe, in which consciousness is not separated from matter.

Bohm's view of consciousness is closely connected to Karl Pribram's (1991) holographic conception of the brain. Pribram sees sight and the other senses as lenses, without which the universe would appear as a hologram. Pribram thinks that information is recorded all over the brain, and that this information is enfolded into a whole, also in the manner of a hologram, although it is suggested that the physical function involved is more complicated than a hologram. In Pribram's scheme, it is suggested that the different memories are connected by association and manipulated by logical thought. If the brain is also attending to sensory data, all of these facets are proposed to fuse together in an overall experience or unanalysable whole. This is suggested to be closer to the essence of consciousness than the mere excitation of neurons.

In trying to arrive at a description of consciousness, Bohm discusses the experience of listening to music. He thinks that the sense of movement and change that constitutes the experience of the music relies on notes both from the immediate past and the present being held in the brain at the same time. Bohm does not view the notes from the immediate past as memories but as active transformations of what came earlier.
He proposes that a given moment can cover an extended duration, as opposed to the more conventional 'now' concept of something instantaneous. The moment is proposed to have extension in time and space, but the amount of this extension is not precisely defined. One moment gives rise to the next, with content that was implicate in the immediate past becoming explicate in the present. The sense of movement in music is the result of the intermingling of transformations. Bohm likens these transformations to the emergence of consciousness from the implicate order. He thinks that in listening to music people are directly perceiving the implicate order. The order is thought to be active and to flow into emotional and physical responses.

Bohm also discusses the problem of time, the concept of 'now' and the difficulty of distinguishing 'now' from the immediate past, which no longer exists. In classical physics this problem is overcome via the calculus, with its concept of 'the limit', which is effectively a zero change in time or space. This is successful for calculating the movement of material objects in classical physics, which comprises the explicate order. However, it is not applicable to quantum theory, in which movement is not seen as continuous. In the implicate order intermingled elements are present together, and processes are the outcome of what is enfolded in the implicate order. In this structure, there is a flow between experience and logical thought that is considered by Bohm to hold out the possibility of a bridge between matter and consciousness. Bohm also advances the idea of overall necessity driving short-term brain processes. Thus it is proposed that an ensemble of elements enfolded in the brain will constitute the next development of thought, and that these elements are bound by an overall necessity that brings them together, and also determines the next moment in consciousness.

Bohm relates movement to the implicate order; for movement, we can also read change or flow or the coherence of our perception of a piece of music over a short period of time. Evidence for this is claimed to derive from studies of infants (Piaget, 1956), who have to learn about space and time, which are seen as part of the explicate order, but appear to have a hard-wired understanding of movement that is implicate. Bohm's view is that the movement and flow of the implicate order are hard-wired into human brains, in the same way that Chomsky asserts that grammar is hard-wired into the human brain, but that by way of contrast, the classical space and time of the explicate order are something that has to be learnt by experience.

Basil Hiley was the long-term associate of David Bohm, and is a continuing exponent of many of his ideas (Bohm and Hiley, 1987, 1993). Hiley argues that the Bohmian notion of active information introduced in relation to quantum phenomena can also be applied to classical signalling. This is suggested to have relevance to the concept of meaning as opposed to mere information. Hiley queries whether the word 'information' that is widely used in science, including neuroscience, always carries the same meaning. Bohm and Hiley were interested in so-called active information that drives physical processes and leaves no choice as to whether they are implemented or not. This is distinct from a mere list of data or instructions or a way of viewing entropy.
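The notion of active information has a concrete mathematical counterpart in Bohm's quantum potential, discussed further below. As a minimal textbook sketch (standard polar-decomposition notation, not a reconstruction of any particular Bohm–Hiley paper): writing the wave function in polar form and inserting it into the Schrödinger equation gives

$$ \Psi = R\,e^{iS/\hbar}, \qquad \frac{\partial S}{\partial t} + \frac{|\nabla S|^2}{2m} + V + Q = 0, \qquad Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}, $$

together with the continuity equation $\partial_t(R^2) + \nabla\cdot(R^2\,\nabla S/m) = 0$. The result is a classical Hamilton–Jacobi equation supplemented by the quantum potential $Q$. Since $Q$ depends only on the form of the amplitude $R$ and not on its overall magnitude, Bohm and Hiley read it as encoding information about the whole experimental context rather than as a mechanical force, which is what motivates the term "active information".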
Active information has been used in a number of papers relative to the mind/matter relationship (Hiley, 2001; Hiley & Pylkkänen, 2005). The colloquial understanding of information is that it is data from which meaning can be extracted by an intelligent entity. Hiley regards it as a fundamental question whether information has objective significance devoid of subjective involvement. Verbal communication is seen as a particular problem, where meaning is translated into sound waves and then back into meaning. Hiley relates this meaning to the agency of the speaker and the agency of the listener. He relates this inseparable link to Bohr's notion of the indivisibility of the quantum action, which cannot distinguish between the system under observation and the means of observation. Bohm believed that a quantum potential could be extracted from Schrödinger's equation and that this quantum potential could act as an information potential. In transmitting a signal there is a trade-off between the duration of the pulse and the frequency. There is an ambiguity in the signal that is similar to the uncertainty in quantum mechanics. The two concepts are said to employ different aspects of the same mathematical structure. Hiley refers to the two-slit experiment, where the potential is claimed to cover the whole experimental arrangement. The quantum information changes in relation to any change in the experimental arrangement, and this is related to information entering the brain and changing the arrangement of its parts. Within the brain Bohm thought that meaning was in the process itself.

Bohm proposed that there were two sides or two poles to the brain, the manifest and relatively stable material side and the subtle mind-like side. The manifest side is classical physics, while the subtle side is the quantum level that produces the classical level. Thus the mind cannot be separated from matter. The ambiguity or uncertainty of the quantum comes through in the ambiguity attached to meaning. The quantum is seen as a pool of information shared by entangled particles. When the potential or pool vanishes, the classical world emerges. Hiley also agrees that this system could operate in terms of quantum fields. The main weakness of this description seems to be the lack of detail as to how the quantum mechanism would operate in the brain, and the lack of distinction between information, which does not by itself imply consciousness, and consciousness itself. The emergence of meaning could be thought to imply consciousness, but this important point is not at all developed.

Fig. 10: The reversible ink-drop/cylinder experiment, as an allegory for the unfolding of "implicate order" with hidden variables. In this experiment a droplet of ink is placed in glycerin between two cylinders; when the cylinder is turned, the droplet is drawn out into a fine thread that seems to diffuse into disorder. But if the cylinder is then turned in the opposite direction, the thread-form reappears and re-becomes a droplet; the droplet is unfolded again. Bohm realized that when the ink was diffused through the glycerin it was not in a state of 'disorder' but possessed a hidden, or non-manifest, order.

In Bohm's view, all the separate objects, entities, structures, and events in the visible or explicate world around us are relatively autonomous, stable, and temporary 'subtotalities' derived from a deeper, implicate order of unbroken wholeness. Bohm gives the analogy of a flowing stream: on this stream, one may see an ever-changing pattern of vortices, ripples, waves, splashes, etc., which evidently have no independent existence as such.
Rather, they are abstracted from the flowing movement, arising and vanishing in the total process of the flow. Such transitory subsistence as may be possessed by these abstracted forms implies only a relative independence or autonomy of behavior, rather than absolutely independent existence as ultimate substances.

Anthony Valentini

Valentini (2002) consistently defends the pilot-wave mechanism of David Bohm. Bohm, he says, had an interesting trajectory. There are really three Bohms. There's the very early Bohm who was interested in Niels Bohr's ideas about complementarity. Then there's the Bohm of the 1950s who worked on the pilot-wave theory of hidden variables. Then in the 1960s he changed again. He met Krishnamurti and got very interested in Indian philosophy and started trying to tag some mystical ideas onto the pilot-wave theory. If you look at the yoga sutras of Patanjali you can see this idea that material objects are somehow illusions and projections from something deeper, that things emerge from this deeper level and disappear into this deeper level again. So, indeed, Bohm tried to adopt an interpretation of the wave as a manifestation of a deeper level, perhaps associated with consciousness.

Why does Valentini like the pilot-wave theory?

• It preserves a realist ontology wherein particles possess determinate values of space-time location and momentum.
• They continue to possess such values between various acts of observation/measurement, rather than acquiring them only in consequence of being measured with respect to this or that parameter.
• This allows for greater continuity with certain components of classical (prequantum) physics, such as the conservation laws respecting matter-energy and angular momentum.
• The pilot-wave hypothesis produces results in perfect accordance with those obtained in standard QM by means of the Schrödinger-derived wave probability function.
• It avoids any recourse to mysterious ideas of the wave packet collapse as somehow brought about by observer intervention, or only at the instant – in Schrödinger's parable – when the box is opened up for inspection and the cat is thus released from its supposed 'superposed' (dead-and-alive) state.
• Pilot-wave theory also seeks to explain quantum effects such as photon deflection or multipath interference without proposing a massively expanded ontology of parallel worlds, shadow universes, multiple intersecting realities, etc.

Pilot-wave theory has three axioms. The first is de Broglie's law of motion, which specifies exactly how particles are guided by the wave. The second is Schrödinger's wave equation, telling us how the wave itself changes over time. The third is that particles have to start off with a certain probability distribution. "In any given experiment, each particle is accompanied by a wave." The particle starts off somewhere inside the wave. In order to give results that can be verified with an experiment, all three axioms have to be used (a compact sketch is given below). In classical physics there is an interplay between particle and field: each generates the dynamics of the other. In the original pilot-wave theory the steering wave acts on positions of particles, but it is not acted upon by the particles.
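The three axioms just listed can be written down compactly. A minimal sketch in standard de Broglie–Bohm notation for a single spinless particle (textbook material, not a reconstruction of Valentini's own papers):

$$ \frac{dX}{dt} = \frac{\nabla S}{m} = \frac{\hbar}{m}\,\mathrm{Im}\,\frac{\nabla\Psi}{\Psi}, \qquad i\hbar\,\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi, \qquad \rho(x,t_0) = |\Psi(x,t_0)|^2. $$

Note the asymmetry just described in the text: the particle position $X$ is guided by $\Psi$ through the first equation, but $X$ does not appear in the Schrödinger equation, so the wave is not acted upon by the particle. The third axiom (quantum equilibrium) is preserved in time by the first two, which is why the predictions agree with standard quantum mechanics; Valentini's own work explores what would happen out of this equilibrium.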
However, Holland (2001) has explored some deeper ideas related to this question in his work on a possible Hamiltonian formulation of pilot-wave theory and proposed a particle-to-wave back-reaction. This implies that, through the pilot-wave mechanism, particles, just like waves, carry information regarding their future states. It also means that particle trajectories may exert a back-reaction on the wave function, implying symmetric interaction between implicate and explicate orders.

What is so unusual about Anthony Valentini? He, in fact, resurrected a theory that undoes the central tenet of quantum mechanics, and has consequences for relativity theory as well. The theory follows quantum math, but at the same time allows for new possibilities beyond conventional quantum mechanics. It's a theory that says there is indeed an objective reality behind the things we observe, that quantum uncertainty is not fundamental, and that somewhere, somehow, time is universal—not relative. This means goodbye to ghostly probabilities, with their strange propensity for collapsing into real things, and hello to hidden variables that are objective. This seems related to the American physicist John Archibald Wheeler (1990, 2002), who suspected that reality exists not because of physical particles, but rather because of the act of observing the universe. "Information may not be just what we learn about the world. It may be what makes the world." In other words: when humans ask questions about nature, there is an active transfer of information in the domain of quantum waves where, in principle, backward causation from the future is possible. The second arrow (from future to past) remains hidden (unnoticed) for us, because life is trapped in the momentum of time. Entanglement means that particles separated at any distance can, under certain conditions, have mutually determined properties (are correlated). In this block universe, multiple paths or life lines are laid out, of which the individual chooses a single one. Consequently this concept allows free choice and therefore is not deterministic. Such non-locality becomes manifest by observation (or collapse of the wave aspect), as has been shown by electron spin orientation or polarized light. This might also be viewed as backward causation.

In his "Transactional Interpretation of Quantum Mechanics", John Cramer (1988) stated that "Nature, in a very subtle way, may be engaging in backwards-in-time handshaking: The transaction between retarded waves, coming from the past, and advanced waves, coming from the future, gives birth to a quantum entity with dual properties of the wave/particle. Thus the wave property is a consequence of the interference between retarded and advanced waves, and the particle property is a consequence of the point in space where the transaction takes place." The transactional interpretation requires that waves can really travel backwards in time. This assertion seems counterintuitive, as we are accustomed to the fact that causes precede effects. It is important to underline, however, that, unlike other interpretations of QM, the transactional interpretation takes into account special relativity theory, which describes time as a dimension of space, as mentioned earlier. Of note, the completed transaction erases all advanced effects, so that no direct advanced-wave signaling is possible: "The future can affect the past only very indirectly, by offering possibilities for transactions" (Cramer, 1988; see Fig. 11).
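Cramer's handshake picture can be stated in one line. A simplified sketch of the standard presentation of the transactional interpretation, using his terminology of offer and confirmation waves:

$$ \underbrace{\Psi}_{\text{retarded offer wave}} \;\times\; \underbrace{\Psi^{*}}_{\text{advanced confirmation wave}} \;=\; |\Psi|^{2}, $$

so the completed transaction occurs with probability $\Psi\Psi^{*} = |\Psi|^2$, and the Born rule appears as the "echo" that the absorber returns to the emitter rather than as a separate postulate.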
King (2003, see later) stated: "the hand-shaking space-time relation implied by the transactional interpretation makes it possible that the apparent randomness of quantum events masks a vast interconnectivity at the sub-quantum level, reflecting Bohm's implicate order, although in a different manner from Bohm's pilot wave theory. Because transactions connect past and future in a time-symmetric way, they cannot be reduced to predictive determinism, because the initial conditions are insufficient to describe the transaction, which also includes quantum boundary conditions coming from the future absorbers. However this future is also unformed in real terms at the early point in time emission takes place."

The principle of backward causation has been experimentally demonstrated recently. Aharonov's team and various collaborating groups (see Aharonov, 2010) studied whether the future may influence the past by sophisticated quantum physics technology. Aharonov concluded that a particle's past does not contain enough information to fully predict its fate, but he wondered: if the information is not in its past, where could it be? Clearly, something else must also regulate the particle's behavior. Aharonov and coworkers proposed a new framework called time-symmetric quantum mechanics.

Fig. 11: The transactional interpretation of QM of Cramer, with retarded and advanced waves from past and future that produce the present (upper left), and the time-symmetric concept of the prize-winning Aharonov (upper right inset), arising from post-selection (soft) measurement of a quantum state, which prevents wave collapse and also shows that the future may affect the past. This shows that through the wave aspect the "wavicle" (lower right inset) intrinsically contains an aspect of the future.

Recent series of quantum experiments in about 15 different laboratories around the world seem to actually confirm the notion that the future can influence results that happened before those measurements were even made (see Fig. 11). Generally the protocol included three steps: a "pre-selection" measurement carried out on a group of particles; an intermediate measurement; and a final, "post-selection" step in which researchers picked out a subset of those particles on which to perform a third, related measurement. To find evidence of backward causality, meaning information flowing from the future to the past, the effects of so-called weak measurements were studied. Weak measurements involve the same equipment and techniques as traditional ones but do not disturb the quantum properties in play. Usual (strong) measurements would immediately collapse the wave functions in superposition to a definite state. The results in the various groups were amazing: repeated post-selection measurement of the weak type changed the pre-selection state, revealing an aspect of non-locality. Thus it appears that the universe might have a destiny that reaches back and "collaborates" with the past to bring the present into view.
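The pre-/post-selection experiments summarized above are usually analyzed with the weak-value formalism of Aharonov, Albert and Vaidman. A minimal sketch in standard notation (the specific experiments cited may differ in detail):

$$ A_w \;=\; \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}, $$

where $|\psi\rangle$ is the pre-selected state, $|\phi\rangle$ the post-selected state, and $\hat{A}$ the weakly measured observable. Because the coupling is weak, the intermediate measurement barely disturbs the state, yet the recorded pointer shift depends on the later post-selection through $\langle\phi|$; when $\langle\phi|\psi\rangle$ is small, $A_w$ can even lie far outside the eigenvalue range of $\hat{A}$. This is the formal sense in which the "future" post-selection enters the statistics of the earlier weak measurement.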
On a cosmic scale, this idea could also help explain how life arose in the universe against tremendous odds, and confirms the idea that knowledge was inherited from a common information pool (Meijer, 2012; Kak, 2009; Jahn and Dunne, 2007).

Henry Stapp: attention, intention and quantum coherence

Henry Stapp

Stapp starts by asking what sort of brain action corresponds to a conscious thought. He criticizes the mainstream for assuming that Newtonian physics can be applied directly to the brain, and claims that a quantum framework is needed to understand the brain. The Copenhagen interpretation of quantum theory was the first mainstream version, and was pragmatic in recommending the theory as a system of rules that allowed the calculation of empirically verifiable relationships between observations. Stapp (2009, 2012) favors Heisenberg's refinement of the original Copenhagen position. Heisenberg thought that the probability distribution of quantum theory really existed in nature, and that the evolution of this probability was punctuated by uncontrolled events, which are the events that actually occur in nature, and which at the same time eliminate the other probabilities.

The development of computing during the second half of the 20th century demonstrated that thought-like or cognitive processes required internal representations not allowed for in the then prevailing behaviourist concept. However, this still did not account for conscious experience, and in this period thinking or cognition came to be seen as something separate from consciousness. Both Bohr and Heisenberg viewed quantum theory as a set of rules for making predictions about observations under experimental conditions. These predictions are incompatible with classical physics in respect of the prediction of non-locality. Heisenberg did not view the quanta as actual things, but as tendencies for certain types of events to occur. The orderly evolution of the system is deterministic, but this controls only the tendencies for things or propensities for events, and not the actual things or events themselves. The things are controlled by quantum jumps that do not individually conform to any natural law, but collectively conform to statistical rules.

Heisenberg and Schrödinger

Stapp (2009, 2012) bases his proposal for quantum consciousness on three observations. 1) The brain's representation of the body, or body schema, must be represented by some form of physical structure in the brain. 2) Some brain processes, such as the behavior of calcium ions involved in synaptic transmission, need to be treated quantum mechanically. Stapp also thinks that the sensitivity and non-linearity of the synaptic system, the involvement of calcium ions and the large number of meta-stable states into which the brain could evolve all point to a quantum mechanical system. 3) Stapp suggests that the brain could evolve into a state analogous to the deterministic evolution of the quantum state, from which an actual state must be selected. Although Stapp pays a lot of attention to the synapses, his is not actually a neuron-based theory. Rather, the event could be selected from the large-scale excitation of the brain. The selection of events from a wide range of probabilities is seen as being particularly adaptive where an organism needs to select from a range of future probabilities.
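The "deterministic evolution plus selection" structure that Stapp has in mind is usually written in von Neumann's terms. A minimal sketch in standard density-matrix notation (Stapp's books present essentially this scheme, though its application to brain states is of course his interpretation):

$$ \text{Process 2 (deterministic):}\quad i\hbar\,\frac{d\rho}{dt} = [H,\rho]; \qquad \text{Process 1 (selection):}\quad \rho \;\to\; P\rho P + (I-P)\,\rho\,(I-P), $$

where $P$ is the projector corresponding to the question put to nature (Stapp's "Heisenberg choice"), and nature's answer (the "Dirac choice") then picks the outcome $P\rho P$ with probability $\mathrm{Tr}(P\rho)$ or its complement with probability $\mathrm{Tr}((I-P)\rho)$. Rapidly repeating Process 1 with the same $P$ tends to hold the state in the chosen subspace, which is the quantum Zeno mechanism Stapp invokes for the stability of attention.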
Stapp wishes to establish the relationship between mind and matter, the relationship between reality and quantum theory, and also how relativity is reconciled with both experience and non-locality. The solution is suggested to be a series of creative events bringing into being one of a range of possibilities created by prior events. He suggests that consciousness exercises top-level control over neural excitations in the brain. The neural excitations are regarded as a code, and each human experience is regarded as a selection from this code. He sees the physical world as a structure of tendencies in the world of the mind. He finds it unacceptable that there is an irreducible element of chance in nature as described by quantum theory, which is the most usual conclusion to be drawn from the randomness of the wave function collapse. The element of conscious choice is seen as removing chance from nature. He distinguishes between systems where an external representation and knowledge of the laws of physics can accurately predict how the system develops, and his own idea of a system that is internally determined in a way that cannot be represented outside the system.

The brain is viewed as a self-programming computer with self-sustaining neural patterns as the codes. It is necessary to integrate the code from sensory input with the code from previous experience. This creates a number of probabilities, from which consciousness has to select. The conscious act is the selection of a piece of top-level code, which then exercises control over the flow of neural excitation. The unity of conscious thought comes from a unifying force in the conscious act itself. It selects a single code from amongst a multitude on offer in the brain. Raising an arm involves a conscious act selecting the top-level code that raises the arm. This is suggested to close the traditional explanatory gap between thought and classical physics, because here the conscious thought is the selection of the code that allows the physical act.

Stapp goes on to discuss the conscious process of looking at pictures. According to him, top-level codes instruct lower-level codes to produce new top-level codes and to initiate their storage in memory. The experience of noticing something is deemed to be the process of initiation into memory. There are close connections between the top-level code and the memory structure. The lower-level codes have to be functioning correctly, i.e. not damaged, and to be focused on the incoming stimuli in order for these to be put into higher-level code and registered in memory. Stapp discusses what neural research would need to reveal if it were to support his theory. It would need to reveal the neural connections needed to support self-sustaining patterns of neural excitation. It is necessary to find the neurons providing the top-level coding, then the mechanism for storing memory traces of this, and finally the mechanism by which these memories are involved in the production of new top-level codes.

Each conscious experience is seen as a creative act represented in the physical world by the selection of a top-level code from among the many generated by the laws of quantum theory. The conscious experiences are the initiation of processes that produce changes in the body schema and the external and internal reality schema. The conscious act is functionally equivalent to changes in the physical world as represented in quantum theory.
In the Heisenberg version of quantum theory, physical things are events, and quantum theory gives the propensity for particular events to occur. This is seen as providing a link between conscious processes and brain processes. In the Heisenberg version it is the act of observation which leads to the selection of a particular propensity. Stapp attaches great importance to the idea of the formation of a record. This is seen as analogous to the Geiger counter that registers a record of a quantum event. Every conscious experience is seen as recordable, because it is evidence of some form of brain process. The later retrievability of the experience is evidence of a record in the brain. A key process in brain dynamics is seen as persisting patterns of neural excitation producing physical changes in neurons that enable a particular pattern to be re-excited, and allow the re-excited pattern to connect with new stimuli. This is seen as the basis of the brain's associative memory.

The top-level brain process is viewed as a process of actualizing symbols, composed of earlier symbols connected into a whole by neural links. The top-level process is seen as directing information gathering, planning and the choice of particular plans, and monitoring the execution of plans. This can be understood in terms of top-level direction of multiple neural processes. Because of the top-level directive role, its connection to associative memory and the multiple structure of the symbols involved, it is suggested that each top-level event corresponds to a psychological event, and this in turn connects psychological events to the quantum level. Both the top-level brain event and the psychological event act as choosers of a possibility, or converters of potentialities into actualities. Each human conscious experience is seen as the feel of an event in the top level of processing in the human brain, a sequence of Heisenberg actual events actualizing a quasi-stable pattern of neural activity. Activation of particular symbols creates a tendency for the activation of other related symbols. The body schema is the product of actualized events accumulated over the life of the body. The top-level symbols have a compositional structure formed from other symbols. The Heisenberg events are seen as being capable of grasping a whole pattern of activity, and this is seen as accounting for the unity of consciousness. The continuity or flow of time is explained by an overlap of symbols with the preceding mental event.

Stapp, drawing on studies of infants, assumes that humans have a hard-wired body-world schema. Consciously directed action is seen as a projection of this body-world schema into the future, with a corresponding representation in the brain. This body-world schema is seen as directing the unconscious brain, issuing commands for motor action and instructions for mental processing. Ongoing questions to nature continue to be posed by the observer. This equates to the 'Heisenberg choice', where the human observer has to decide what question to put to nature. In this case it is the conscious processing in the brain that does this. Each experience leads to further updating of the system. When an action is initiated by a thought, this usually includes some monitoring of the subsequent action, to check it against the intended action. So something experienced as an intention becomes an action, the attention to which is also experienced.
Stapp views the deterministic unfolding of matter according to the Schrödinger equation as running parallel to the movement from intention to attention, as two poles of the same quantum event that prolong the coherent state and thereby protect against potential decoherence. He also sees a tripartite structure consisting of the Schrödinger equation, the Heisenberg choice of which question to ask, and the (Dirac) choice of answer from nature. Stapp's point is that only a conscious observer within the brain can ask the question, and drive the quantum process. This also allows the experiential process to enter into the causal structure of the body/brain. Stapp feels that some additional process is needed, and the conscious observer is a perfect candidate. He sees quantum theory as informational in nature and thus linked to increments in knowledge occurring in the brain. The increment in knowledge is seen as linked to a reduction of the quantum state, thus linking mind to the physical world. Mind is thus seen as entering into the physical world through the Heisenberg choice. When the quantum state is reduced, a wave that extends over an indefinite amount of space is instantaneously reduced to a tiny local region. Stapp feels that this constitutes a representation of knowledge rather than a representation of matter. The wave before collapse is seen as a matter of potentiality or probabilities, which are themselves often conceived as ideas rather than realities. However, the quantum state pre-collapse evolves in line with the deterministic Schrödinger equation, giving the state some of the properties of the physical, thus in fact creating a sort of hybrid.

Stapp does not suggest that our conscious thoughts are completely unconstrained, but he does see our thoughts as a part of the causal structure of the mind-brain that is not dominated by the actions of the smallest components of the brain, but is also not a random effect. Our thoughts are seen not as linked to external objects, but instead linked to patterns of brain activity. Stapp points out that his theory has a place for an efficacious conscious mind linked to the physical processes of the brain. He suggests that the dynamic of the Schrödinger evolution, which is to produce an event that replicates the event that produced it, could somehow stand in for the later action of conscious minds. The identity theory of mind claims that each mental state is identical to some process in the brain. However, classical physics says that the entire causal structure of a physical system is determined by the microscopic level of the physical structure, so that larger-scale effects such as consciousness cannot have any influence.

A potential problem with the whole Copenhagen-influenced interpretation of quantum theory is its possible dualism. Mathematics can be seen as a mental process instantiated in protein, which, in principle, cannot directly influence the external world. Somehow the mathematical description of the quantum waves is sitting out there in space, and then as a result of a measurement becomes a physical particle. In Copenhagen, a mental concept external to the body seems to become physical with no explanation as to how the two could interact. The Copenhagen system has the additional problem of what was happening before human minds emerged to perform measurements, for which Stapp's explanation appears rather sketchy.
Consequently, a more detailed model is required to picture the inherent interaction between a more general form of consciousness as a measuring device in evolution (see later).

Roger Penrose: Consciousness and the Spacetime Geometry of the Universe

Roger Penrose

Fig. 12: The twistor theory as proposed by Penrose to find a basic structure for spacetime geometry, as is also attempted by string theories. Twistor theory was later applied by Witten in the universal string M-theory, to diminish the total number of required extra dimensions.

Penrose's own take on the wave function collapse suggests that it is a real event. He sees superposition as a separation in the underlying space-time geometry. Each quantum is embedded in a bit of space, and as the superpositions grow further apart, a blister or separation appears in space-time. This can be viewed as the same thing as the beginning of the multiple-world view, but instead of going on to generate separate universes, if the separation between superpositions grows to more than the Planck length, the wave collapses and chooses one of the superposed alternatives.

Twistor theory (Fig. 12) in the context of space-time has been pioneered by Roger Penrose and others since the 1960s and is based on the association of a complex twistor space CP3 to the space of light rays in space-time. The name derives from the Robinson congruence, which is the natural realization of a (non-null) twistor. Penrose thereby attempted to encode spacetime points, affording a quantized spacetime. Some appealing aspects of the theory are:
– twistor space becomes the basic space, so that light rays are the fundamental objects from which space-time is derived;
– discrete quantities such as spin are represented in the discrete values obtained by contour integration;
– its evident elegance and simplicity.

The normal quantum wave collapse is seen as an entirely random choice of the state of a quantum particle, from amongst the various superpositions of states. However, these collapses involve interaction with the environment. Penrose suggests that a quantum which does not interact with the environment will undergo objective reduction (OR) when the separation between superpositions begins to exceed the Planck length. He also suggests that while the normal collapse is totally random, OR is not totally random but involves a non-computable process. This is suggested because Penrose thinks that the brain manifests a non-computational aspect, and that the wave function collapse is the only place in the universe where such a thing can exist. Penrose also proposes that OR-based quantum computation occurs in the brain.

Important distinction between Penrose and Wigner

Objective reduction, consciousness, spacetime and the Second Law & gravity

Fig. 13: Neuronal tubules as the potential site for quantum-mediated effects in the brain. Each tubulin is shown to have 9 rings representing 32 actual phenyl or indole rings per tubulin, with coupled, oscillating London-force dipole orientations among rings traversing 'quantum channels', aligning with rings in adjacent tubulins in helical pathways through microtubule lattices. On the right, superposition of alternative tubulin and helical pathway dipole states.

Evidence for non-computational spacetime

In support of this, he points out that when the physicists Geroch and Hartle (1986) studied quantum gravity, they ran up against a problem in deciding whether two spacetimes were the same.
The problem was solvable in two dimensions, but intractable in the four dimensions that accord with the four-dimensional spacetime in which the superposition of quantum particles needs to be modeled. It has been shown that there is no algorithm for solving this problem in four dimensions.

Significance for consciousness

The road from physics to mental phenomena has already been frequented, notably by Pauli and Jung and, under the influence of Pauli, by Heisenberg. The interaction is not limited to the three-decades-long Jung–Pauli correspondence, and the reciprocal influences have been profound. The founding role of Pauli's work in quantum physics does not need to be recalled (Pauli, 1994), and the effects of his quantum vision on the development of Jung's vision of the human mind (archetypes included) have been well explored. The title of the essay by Jung in their co-authored volume, "Synchronizität als ein Prinzip akausaler Zusammenhänge" (Synchronicity as a Principle of Acausal Connections), could not indicate more clearly the influence of Pauli's quantism on Jung's perception of reality (Jung and Pauli, 1955), and the interplay of the two great minds. Before them, the self-referentiality of the Euclidean approach to human consciousness was narrated by Lewis Carroll in his "Through the Looking-Glass".

Philosophically, Orch OR perhaps aligns most closely with Alfred North Whitehead (see Wikipedia), who viewed mental activity as a process of 'occasions', spatio-temporal quanta, each endowed—usually on a very low level—with mentalistic characteristics which were 'dull, monotonous, and repetitious'. These seem analogous, in the Orch OR context, to 'proto-conscious' non-orchestrated OR events. Whitehead viewed high-level mentality, consciousness, as being extrapolated from temporal chains of such occasions. In his view, highly organized societies of occasions permit primitive mentality to become intense, coherent and fully conscious. These seem analogous to Orch OR conscious events. Abner Shimony (2005), Henry Stapp (2007) and Hameroff (1998) recognized that Whitehead's approach was potentially compatible with modern physics, specifically quantum theory, with quantum state reductions—actual events—appearing to represent 'occasions', namely Whitehead's high-level mentality, composed of 'temporal chains … of intense, coherent and fully conscious occasions' (Fig. 14), these being tantamount to sequences of Orch OR events. These might possibly coincide with gamma synchrony, but with our current 'beat frequency' ideas gamma synchrony might more likely be a beat effect than directly related to the OR reduction time τ. As Orch OR events are indeed quantum state reductions, Orch OR and Whitehead's process philosophy appear to be quite closely compatible. Whitehead's low-level 'dull' occasions of experience would seem to correspond to our non-orchestrated 'proto-conscious' OR events. According to this scheme, OR processes would be taking place all the time everywhere and, normally involving the random environment, would be providing the effective randomness that is characteristic of quantum measurement. Quantum superpositions will continually be reaching a threshold for OR in non-biological settings as well as in biological ones, and OR would usually take place in the purely random environment, such as in a quantum system under measurement.
Nonetheless, in the Orch OR scheme, these events are taken to have a rudimentary subjective experience, which is undifferentiated and lacking in cognition, perhaps providing the constitutive ingredients of what philosophers call qualia. We term such un-orchestrated, ubiquitous OR events, lacking information and cognition, 'proto-conscious' (see Fig. 14).

Fig. 14: Quantizing the spacetime of Whitehead by postulating units of experience and actual occasions that have a mental and a physical pole and carry elements of goals, satisfaction and beauty. They are built up from previous occasions from societies of occasions (right insets).

In this regard, Orch OR has some points in common with the viewpoint which incorporates spiritualist, idealist and panpsychist elements, these being argued to be essential precursors of consciousness that are intrinsic to the universe. It should be stressed, however, that Orch OR is strongly supportive of the scientific attitude, and it incorporates this viewpoint's picture of neural electrochemical activity, accepting that non-quantum neural network membrane-level functions might provide an adequate explanation of much of the brain's unconscious activity. Orch OR in microtubules inside neuronal dendrites and soma adds a deeper level for conscious processes.

Stuart Hameroff: Quantum coherence in brain tubules

Stuart Hameroff and Roger Penrose

Hameroff and Penrose (2011, 2013) classify all the mainstream approaches to consciousness as 'classical functionalism'. Functionalism takes no account of what the brain is made of, or of anything finer grained than the level of neuron-to-neuron connections. It believes that these connections could be copied in another material such as silicon, and that the resulting construct would be conscious. However, Hameroff argues that although axonal spikes and synaptic connections clearly play a key role in information processing in the brain, they may not be the main currency of consciousness. Hameroff argues that quantum processing in microtubules within the dendrites, and gap junctions between dendrites, are the main currency of consciousness. The main case against quantum processing in the brain has always been that any quantum coherence in the brain would decohere faster than the time taken for any useful biological process. Hameroff accepts that this is in principle a valid argument. However, Hameroff claims that the microtubules may be screened from their environment by a gelatinous non-liquid ordered state that arises in the neuronal interior. A further objection to quantum processing is that even if it arose in one neuron, it would be difficult for it to communicate across the brain. This is countered by the suggestion that there could be quantum tunneling at gap junctions between neurons. In recent years, gap junctions have been discovered to be more widespread in the brain than was previously thought. They are also correlated with the 40 Hz gamma synchrony. This oscillation was at one time promoted by Crick and Koch as the most promising correlate of consciousness. However, the idea fell from favour with mainstream neuroscience when it was discovered that the gamma synchrony correlated with dendritic activity rather than axonal spiking. In general, Hameroff argues that the emerging evidence of neurobiology has moved in favour of the Orch OR model over the last decade, notwithstanding the continued unpopularity of the theory. Hameroff summarizes his proposals in the early part of the chapter.
He thinks that consciousness arises in the dendrites of neurons that are connected by gap junctions to form 'hyperneurons', and that these are related to the gamma synchrony. Axonal spikes and synapses are seen as making inputs to and receiving outputs from the microtubular process, as part of an interactive system. Hameroff touches on the famous Libet (2006) experiments that demonstrated a 500 ms time lag between a stimulus and the perception of it entering consciousness, although the subject is not aware of this time lag, as a result of a so-called backward referral in time. The mainstream has tended to favour an interpretation resembling the Dennett 'multiple drafts' concept, which would involve an after-the-event reconstruction of what had happened. Hameroff, however, thinks that the backward referral in time should be taken seriously. This was also the view of Roger Penrose, who suggested that backward referral (Fig. 15) might be indicative of quantum activity.

Fig. 15: Backward referral in time as proposed by Libet et al.

Hameroff points out that changes in dendrites can lead to increased synaptic activity. This is basic to ideas about learning, memory and neural correlates of consciousness. The changes in dendrites involve the number and arrangement of receptors and the arrangement of dendritic spines and dendrite-to-dendrite connections. Axon potentials or spikes have been assumed to be the main basis of consciousness, but Hameroff suggests that there could be other candidates. Electrodes implanted into the brain detect mainly the activity of dendritic gap junctions plus inhibitory chemical synapses. Thus the detected synchrony derives from dendrites rather than axonal spikes. The main function of dendrites is seen to be the handling of input signals into the neuron, which may eventually result in an axon spike. However, this is not the whole story, since many cortical neurons have dendrites but no axons. Here dendrites interact with other dendrites. Also, there can be extensive dendritic activity with no spikes. The evidence suggests that there are complex logic functions in the dendrites, and these may oscillate over a wide area while remaining below the axon spiking threshold. Many post-synaptic receptors send signals into the dendrite cytoskeleton.

Gamma synchronies, in the 30–70 Hz range, have aroused interest as possible correlates of consciousness. Gray and Singer (1989) found coherent gamma oscillations in the brain that were dependent on visual stimulation. It was suggested that this synchrony could solve the binding problem, which is the problem of how the different inputs into the brain are bound together into a single conscious experience. It was suggested that the synchrony reflected the activity of a relevant assembly of neurons. Varela (1995) noted that synchrony operated whenever the processing of spatially separated parts of the brain was brought together in consciousness. Gamma synchrony has been demonstrated across cortical areas, hemispheres and the sensory/motor modalities. The synchrony is involved in a range of brain activities including perception of sound, REM dream sleep, attention, working memory, face recognition and somatic perception. Also, gamma decreases during general anesthesia and returns on waking. Hameroff regards gamma synchrony as the best overall correlate of consciousness. He further addresses the question of how the gamma synchrony is mediated.
There is coherence over large areas of the brain, sometimes including multiple cortical areas and both hemispheres of the brain, with zero or near-zero phase lag. If the synchrony were based on the axon/synapse system, a considerable lag would be expected. In fact, the lack of coherence between the synchrony and axonal spike activity has led to a reduction in the amount of mainstream attention paid to the gamma synchrony. Hameroff points to gap junctions as an alternative to synapses for connections between neurons. Neurons that are connected by gap junctions depolarize synchronously. Gap junctions play a more important role in the adult brain than was previously supposed. Numerous studies show that gap junctions mediate the gamma synchrony. A neuron may have many gap junction connections, but not all of them are necessarily open at the same time. The opening and closing of the junctions may be regulated by the microtubules. Hameroff suggests that cells connected by gap junctions may in fact constitute a cell assembly, with the added advantage of synchronous excitation. Cortical inhibitory neurons are heavily studded with gap junctions, possibly connecting each cell to 20 to 50 others. The axons of these neurons tend to form inhibitory GABA chemical synapses on the dendrites of other interneurons.

Fig. 16: Schematic representation of a brain microtubule, built up from tubulin proteins that can undergo rapid fluctuations in three-dimensional configuration, enabling the sensing and transmission of quantum information (qubits); see also Fig. 13.

Hameroff moves on to discuss the role of the cytoskeleton, which is seen to determine the structure, growth and function of neurons. Actin is the main constituent of dendritic spines and is present throughout the neuronal interior. Actin can polymerize into a dense meshwork, and when this happens the interior of the cell is converted from an aqueous solution into a gelatinous state. Furthermore, when this happens the whole of the cytoskeleton forms a negatively charged matrix around which water molecules are bound into an ordered state. It is noted that the neurotransmitter glutamate binding to NMDA and AMPA receptors causes gel states in actin spines. The cytoskeleton of the dendrites is distinct both from that found in cells outside the brain and from the cytoskeleton found in the axons of neurons. The microtubules in dendrites are shorter than those in axons and have mixed as opposed to uniform polarity. This appears a sub-optimal arrangement from a normal structural point of view, and it is suggested that, in conjunction with microtubule-associated proteins (MAPs), this arrangement may be optimal for information processing rather than supportive structural functions. These microtubule/MAP arrangements are connected to synaptic receptors on the dendrite membrane by a variety of calcium and sodium influxes, actin and other inputs. Alterations in the microtubule/MAPs network in the dendrites correlate with the arrangement of dendrite synaptic receptors. Studies demonstrate that the cytoskeleton is also involved in signal transmission. It is suggested that the microtubule lattice is well designed to represent and process information (Fig. 17). Tubulin was supposed to switch between two conformations (see Fig. 18). It is suggested that tubulin conformational states could interact with neighboring tubulins by means of dipole interactions. The dipole-coupled conformation for each tubulin could be determined by the six surrounding tubulins.
Hameroff describes protein conformation as a delicate balance between countervailing forces. Proteins are chains of amino acids that fold into three-dimensional conformations. Folding is driven by van der Waals forces between hydrophobic amino-acid groups. These groups can form hydrophobic pockets in some proteins. These pockets are critical to the folding and regulation of the protein. Amino-acid side groups in these pockets interact by van der Waals forces. Non-polar atoms and molecules can have instantaneous dipoles.

Fig. 17: An 'integrate-and-fire' brain neuron, and portions of other such neurons, are shown schematically with internal microtubules. In dendrites and cell body/soma (left), involved in integration, microtubules are interrupted and of mixed polarity, interconnected by microtubule-associated proteins (MAPs) in recursive networks (upper circle, right). Dendritic–somatic integration (with contribution from microtubule processes) can trigger axonal firings to the next synapse. Microtubules in axons are unipolar and continuous. Gap junctions synchronize dendritic membranes, and may enable entanglement and collective integration among microtubules in adjacent neurons (lower circle, right). In Orch OR, microtubule quantum computations occur during dendritic/somatic integration, and the selected results regulate axonal firings, which control behavior.

Hameroff discusses the process of anesthesia, which erases consciousness but leaves many non-conscious functions intact. Anesthetic gas molecules are soluble in a lipid-like hydrophobic environment. Such areas are present in the brain in the lipid regions of cell membranes and in hydrophobic pockets within proteins. It is suggested that anesthetic gas molecules interact with amino-acid groups via London forces, altering the normal action of London forces on the conformation of the protein.

Hameroff discusses quantum information processing. Quantum superpositions, where the quantum waves represent multiple possibilities for the state of a particle, are known to persist until quanta are either measured or interact naturally with the rest of the environment. Hameroff takes the view that the original mainstream interpretation, the Copenhagen Interpretation, puts not only consciousness but the concept of reality itself outside physics. Alternative interpretations include the 'many worlds' view, in which there is no collapse but the superpositions continue in multiple worlds, and David Bohm's idea in which the quanta are guided by active information. It is important to stress that quantum computing as such is not expected to generate consciousness. In quantum computers, which many researchers are now trying to develop, quantum collapse will occur as a result of measurement or interaction with the environment. It is only in the event of OR that non-computability and consciousness could be brought into play.

Hameroff goes on to look at some of the detail of the theory that he and Penrose developed as to how consciousness could be based in microtubules in the brain. It is suggested that quantum computations take place in microtubules, orchestrated by synaptic inputs via MAPs. Hence the theory is often known as Orch OR, for orchestrated objective reduction. The computations are suggested to persist for 25 ms, which would link them to the 40 Hz gamma synchrony, viewed as a correlate of consciousness even in more mainstream theories. The computations are terminated by objective reduction.
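The termination time of these computations is set by the Diósi–Penrose criterion. A minimal sketch of the standard form of the DP formula (the numerical example is purely illustrative and not taken from the cited papers):

$$ \tau \;\approx\; \frac{\hbar}{E_G}, $$

where $E_G$ is the gravitational self-energy of the difference between the two superposed mass distributions: the larger the superposed mass displacement, the larger $E_G$ and the faster the objective reduction. For the 25 ms computations mentioned above (matching 40 Hz gamma), the formula would require $E_G = \hbar/\tau \approx 1.05\times10^{-34}\,\text{J·s} \,/\, 2.5\times10^{-2}\,\text{s} \approx 4\times10^{-33}\,\text{J}$, which in the Orch OR scheme sets how many tubulins must be recruited into the coherent superposition.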
It is proposed that in dendrites, the tubulin sub-units of the microtubules interact by dipole coupling so as to process information. The tubulin conformation is governed by quantum London forces, so that the tubulins can exist as quantum superpositions of different conformations. In superposition the tubulins would be qubits in a quantum computer, computing by means of non-local entanglement with other tubulin qubits. This entanglement would not just be with tubulins in the same microtubule, but with tubulins in other microtubules in the same dendrite, and in other dendrites connected by gap junctions. Neurons connected by gap junctions can be viewed as a single hyperneuron, and the hyperneuron can be seen as a conventional neuron assembly. The dendritic interiors alternate between two states as a result of the polymerisation of actin protein. In the depolymerised form the interior of the neuron is aqueous, and microtubules signal and process information classically. There are synaptic inputs to the microtubules during this phase. When actin polymerises, the interior of the dendrite becomes a quasi-solid, gelatinous state, and water near to the proteins becomes ordered as a result of the actin gelation. Debye layers of counterions may also shield the microtubules, due to the charged C-termini tails on the tubulins. This is suggested to make the microtubules sufficiently isolated from the environment for quantum superposition to occur in the tubulins. The geometry of a quantum computer lattice could be formed so as to be resistant to decoherence. Microtubules are suggested to have a structure which is particularly suitable for error correction. Coherent pumping of energy and quantum error correction may thus help to prevent decoherence. Quantum error correction involves a code that can detect and correct decoherence in a quantum system.

Hameroff claims to refute Tegmark's attempt to disprove the Penrose/Hameroff model (Hagan et al., 2002). This is significant, as Tegmark's criticism of Orch OR has been widely accepted as a completely satisfactory dismissal of the theory, and responses to Tegmark are habitually ignored. Tegmark calculated the microtubule decoherence time as being 10^-13 seconds, which would certainly be much too short for any neural activity. However, he worked on the basis of his own model for quantum activity in microtubules, which was never proposed by Hameroff or anyone else, basing his calculation on a 24 nm separation of solitons from themselves along the microtubules, whereas Orch OR proposes a superposition separation distance six orders of magnitude smaller. Since the decoherence rate in Tegmark's own formula grows with the superposition separation, the far smaller separation lengthens the estimated decoherence time by many orders of magnitude; Hagan et al. arrive at roughly 10^-5 to 10^-4 seconds. For whatever reason, Tegmark did not choose to address the Penrose/Hameroff model itself. This invalidates his particular approach, whatever the truth is about decoherence, but somehow it has not prevented his work from being quoted as an absolutely reliable refutation of Orch OR (Hagan et al., 2002).

A recent update of the Orch OR model

A recent review and update of this 20-year-old theory of consciousness, published in Physics of Life Reviews, 2013, maintains the claim that consciousness derives from deeper-level, finer-scale activities inside brain neurons. The recent discovery of quantum vibrations in "microtubules" inside brain neurons corroborates this theory, according to review authors Stuart Hameroff and Sir Roger Penrose.
This groundbreaking article, and some of the accompanying comments, are partly cited and summarized in the following:

"Hameroff and Penrose suggest that EEG rhythms (brain waves) also derive from deeper-level microtubule vibrations, and that from a practical standpoint, treating brain microtubule vibrations could benefit a host of mental, neurological, and cognitive conditions. Orch OR was harshly criticized from its inception, as the brain was considered too 'warm, wet, and noisy' for seemingly delicate quantum processes. However, evidence has now shown warm quantum coherence in plant photosynthesis, bird-brain navigation, our sense of smell, and brain microtubules. The recent discovery of warm-temperature quantum vibrations in microtubules inside brain neurons by the research group led by Anirban Bandyopadhyay, 2011, at the National Institute of Material Sciences in Tsukuba, Japan (and now at MIT), corroborates the pair's theory and suggests that EEG rhythms also derive from deeper-level microtubule vibrations. In addition, work from the laboratory of Emerson, Eckenhoff et al., 2013, at the University of Pennsylvania suggests that anesthesia, which selectively erases consciousness while sparing non-conscious brain activities, acts via microtubules in brain neurons. After 20 years of skeptical criticism, 'the evidence now clearly supports Orch OR,' continue Hameroff and Penrose. 'Our new paper updates the evidence, clarifies Orch OR quantum bits, or qubits, as helical pathways in microtubule lattices, rebuts critics, and reviews 20 testable predictions of Orch OR published in 1998 – of these, six are confirmed and none refuted.'

An important new facet of the theory is introduced. Microtubule quantum vibrations (e.g. in megahertz) appear to interfere and produce much slower EEG 'beat frequencies' (the simple arithmetic of such beats is sketched after this passage). Despite a century of clinical use, the underlying origins of EEG rhythms have remained a mystery. Clinical trials of brief brain stimulation aimed at microtubule resonances with megahertz mechanical vibrations using transcranial ultrasound have reported improvements in mood, and may prove useful against Alzheimer's disease and brain injury in the future. The review is accompanied by eight commentaries from outside authorities, including an Australian group of Orch OR arch-skeptics. To all, Hameroff and Penrose respond robustly. They will engage skeptics in a debate on the nature of consciousness, and Bandyopadhyay and his team will couple microtubule vibrations from active neurons to play Indian musical instruments. 'Consciousness depends on anharmonic vibrations of microtubules inside neurons, similar to certain kinds of Indian music, but unlike Western music, which is harmonic.'"

Hameroff explained that consciousness depends on biologically 'orchestrated' coherent quantum processes in collections of microtubules within brain neurons, and that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity. The continuous Schrödinger evolution of each such process is supposed to terminate in accordance with the specific Diósi–Penrose (DP) scheme of 'objective reduction' ('OR') of the quantum state. This orchestrated OR activity ('Orch OR') is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space–time geometry, so Orch OR suggests that there is a connection between the brain's biomolecular processes and the basic structure of the universe.
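The beat-frequency suggestion referenced above is ordinary wave arithmetic: two fast oscillations at slightly different frequencies superpose into a slow envelope at their difference frequency. A minimal numerical sketch, with frequencies invented purely for illustration:

```python
import numpy as np

# Ordinary beat arithmetic, with hypothetical frequencies chosen only to
# illustrate how megahertz vibrations could yield an EEG-scale envelope.
f1 = 10_000_000.0        # 10 MHz (hypothetical microtubule resonance)
f2 = 10_000_040.0        # 10 MHz + 40 Hz (a second, slightly detuned resonance)

t = np.linspace(0.0, 0.05, 2_000_000)          # 50 ms sampled at 40 MHz
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

print(f"beat frequency = |f2 - f1| = {abs(f2 - f1)} Hz")  # -> 40.0 Hz
# The slow 40 Hz amplitude envelope of `signal` is what a bandwidth-limited,
# EEG-like measurement would pick up from the fast megahertz carriers.
```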
The authors recently reviewed Orch OR in the light of criticisms and developments in quantum biology, neuroscience, physics and cosmology (Hameroff and Penrose, 2012). The authors introduce a novel suggestion of 'beat frequencies' of faster microtubule vibrations as a possible source of the observed electro-encephalographic ('EEG') correlates of consciousness. They conclude that consciousness plays an intrinsic role in the universe. The group of Bandyopadhyay, 2011, has indeed discovered conductive resonances in single microtubules that are observed when an alternating current is applied at specific frequencies in the gigahertz, megahertz and kilohertz ranges. Electron dipole shifts do have some tiny effect on nuclear positions via charge movements and Mössbauer recoil. A shift of one nanometer in electron position might move a nearby carbon nucleus a few femtometers ('Fermi lengths', i.e. 10^-15 m), roughly its diameter. The effect of electron spin/magnetic dipoles on nuclear location is less clear (Fig. 18). Recent Orch OR publications have cast tubulin bits (and quantum bits, or qubits) as coherent entangled dipole states acting collectively among electron clouds of aromatic amino-acid rings, with only femtometer conformational change due to nuclear displacement (Fig. 13). As it turns out, femtometer displacement might be sufficient for Orch OR.

Diósi–Penrose objective reduction (DP) is a particular proposal for an extension of current quantum mechanics, taking the bridge between quantum- and classical-level physics as a 'quantum-gravitational' phenomenon. This is in contrast with the various conventional viewpoints, whereby this bridge is claimed to result, somehow, from 'environmental decoherence', or from 'observation by a conscious observer', or from a 'choice between alternative worlds', or some other interpretation of how the classical world of one actual alternative may be taken to arise out of fundamentally quantum-superposed ingredients.

Fig. 18: Early and current versions of the Orch OR qubit. (a) Schematic cartoon version of the Orch OR tubulin protein qubit used in Orch OR publications mainly from 1996 to 1998. On the left, tubulin oscillates between two states with 1 nanometer conformational flexing (10% of the tubulin diameter). On the right, both states exist in quantum superposition. (Irrespective of the schematic cartoon, the 1 nanometer displacement has never been implemented in Orch OR calculations.) The states are shown to correlate with electron locations (dipole orientations) in two adjacent phenyl (or indole) resonance rings in a non-polar 'hydrophobic pocket'. (b) Schematic cartoon version of the Orch OR qubit developed since 2002 (following identification of the tubulin structure by electron crystallography). Each tubulin is shown with 9 rings representing the 32 actual phenyl or indole rings per tubulin, with coupled, oscillating London-force dipole orientations among rings traversing 'quantum channels', aligning with rings in adjacent tubulins in helical pathways through microtubule lattices. On the right, superposition of alternative tubulin and helical-pathway dipole states. There is no conformational flexing; mechanical displacement occurs at the femtometer level of tubulin atomic nuclei (not shown). Reimers et al. continually, and exclusively, criticize the obsolete, non-implemented version on the left (a), and ignore the actual Orch OR dipole-pathway qubit version on the right (b).

The DP version of OR involves a different interpretation of the term 'quantum gravity' from what is usual.
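Stated compactly (this is the standard DP relation the review uses, not an addition to the theory), the objective-reduction lifetime of a superposition is set by the gravitational self-energy of the difference between its superposed mass distributions:

```latex
% Diosi-Penrose objective-reduction criterion, as used in Orch OR:
% a superposition of two mass distributions decays on a timescale
\tau \;\approx\; \frac{\hbar}{E_G},
% where E_G is the gravitational self-energy of the *difference*
% between the two superposed mass distributions. Large mass
% displacement gives large E_G and near-instant reduction
% (Schrodinger's cat); a single superposed electron gives tiny E_G
% and an astronomically long lifetime, exactly the ordering
% described in the text below.
```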
Current ideas of quantum gravity (see, for example, Smolin, 2004) normally refer, instead, to some sort of physical scheme that is to be formulated within the bounds of standard quantum field theory – although no particular such theory, among the multitude that have so far been put forward, has gained anything approaching universal acceptance, nor has any of them found a fully consistent, satisfactory formulation. 'OR' here refers to the alternative viewpoint that standard quantum (field) theory is not the final answer, and that the reduction R of the quantum state ('collapse of the wave function') that is adopted in standard quantum mechanics is an actual physical process which is not part of the conventional unitary formalism U of quantum theory (or quantum field theory). In the DP version of OR, the reduction R of the quantum state does not arise as some kind of convenience or effective consequence of environmental decoherence, etc., as the conventional U formalism would seem to demand, but is instead taken to be one of the consequences of melding together the principles of Einstein's general relativity with those of the conventional unitary quantum formalism U, and this demands a departure from the strict rules of U. According to this OR viewpoint, any quantum measurement – whereby the quantum-superposed alternatives produced in accordance with the U formalism become reduced to a single actual occurrence – is a real objective physical process, and it is taken to result from the mass displacement between the alternatives being sufficient, in gravitational terms, for the superposition to become unstable.

It is helpful to have a conceptual picture of quantum superposition in a gravitational context. According to modern accepted physical theories, reality is rooted in 3-dimensional space and a 1-dimensional time, combined together into a 4-dimensional space–time. This space–time is slightly curved, in accordance with Einstein's general theory of relativity, in a way which encodes the gravitational fields of all distributions of mass density. Each different choice of mass density effects a space–time curvature in a different, albeit very tiny, way. This is the standard picture according to classical physics. On the other hand, when quantum systems have been considered by physicists, this mass-induced tiny curvature in the structure of space–time has been almost invariably ignored, gravitational effects having been assumed to be totally insignificant for normal problems in which quantum theory is important. Surprising as it may seem, however, such tiny differences in space–time structure can have large effects, for they entail subtle but fundamental influences on the very rules of quantum mechanics. In the schematic space–time diagrams accompanying the original review, the initial part of each space–time is at the upper left, and the bifurcating diagram on the right, moving downward and rightward, illustrates two alternative mass distributions evolving in time, their space–time curvature separation increasing.
While the system continues to evolve quantum mechanically (so long as OR has not taken place), the 'physical reality' of this situation, as provided by the evolving wavefunction, is illustrated as an actual superposition of these two slightly differing space–time manifolds. The OR process is considered to occur when quantum superpositions between such slightly differing space–times take place, the two differing from one another by an integrated space–time measure which compares with the fundamental and extremely tiny Planck (4-volume) scale of space–time geometry. As remarked above, this is a 4-volume Planck measure, involving both time and space, so we find that the time measure would be particularly tiny when the space-difference measure is relatively large (as with Schrödinger's hypothetical cat), but for extremely tiny space-difference measures, the time measure might be fairly long. For example, an isolated single electron in a superposed state (very low E_G) might reach the OR threshold only after thousands of years or more, whereas if Schrödinger's (~10 kg) cat were to be put into a superposition of life and death, this threshold could be reached in far less than even the Planck time of 10^-43 s (Fig. 19).

Fig. 19: As the superposition curvature E_G reaches threshold, OR occurs and one particle location/curvature is selected and becomes classical. The other ceases to exist.

In the situations under consideration here, where we expect a conscious brain to be far from zero temperature, while technological quantum computers typically require temperatures near absolute zero, it is very reasonable to question quantum brain activities. Nevertheless, it is now well known that superconductivity and other large-scale quantum effects can actually occur at temperatures very far from absolute zero. Indeed, biology appears to have evolved thermal mechanisms to promote quantum coherence. Ouyang and Awschalom, 2003, showed that quantum spin transfer through phenyl-ring π-orbital resonance clouds (the same as those in protein hydrophobic regions, as illustrated in Fig. 14) is enhanced at increasingly warm temperatures; spin-flip currents through microtubule pathways may be directly analogous. In the past 6 years, evidence has accumulated that plants routinely use quantum coherent electron transport at ambient temperatures in photosynthesis (Engel et al., 2007; Hildner, 2013). Photons are absorbed in one region of a photosynthetic protein complex, and their energy is conveyed by electronic excitations through the protein to another region to be converted to chemical energy to make food. In this transfer, electrons utilize multiple pathways simultaneously, through π electron clouds in a series of chromophores (analogous to hydrophobic regions) spaced nanometers apart, maximizing efficiency (e.g. via so-called 'exciton hopping'). Chromophores in photosynthesis proteins appear to enable electron quantum conductance precisely as aromatic rings are proposed to function in Orch OR in tubulin and microtubules. Quantum conductance through photosynthesis proteins is enhanced by mechanical vibration, and microtubules appear to have their own set of mechanical vibrations (e.g. in megahertz, as suggested by Sahu et al., 2013). Megahertz mechanical vibration is ultrasound, and brief, low-intensity (sub-thermal) ultrasound administered through the skull to the brain modulates electrophysiology, behavior and affect, e.g.
improved mood in patients suffering from chronic pain, perhaps by direct excitation of brain microtubules. Further research has shown warm quantum effects in bird-brain navigation (Gauger et al., 2011), ion channels (Bernroider and Roy, 2005), the sense of smell (Turin, 1996), DNA (Rieper, 2011), protein folding (Luo and Lu, 2011), and biological water (Reiter, 2013); see also the reviews of Arndt, 2009, and Lloyd, 2011, on these aspects.

What about quantum effects in microtubules? In the 1980s and 1990s, theoretical models predicted 'Fröhlich' gigahertz coherence and ferroelectric effects in microtubules. In 2001 and 2004, coherent megahertz emissions were detected from living cells and ascribed to microtubule dynamics (powered by mitochondrial electromagnetic fields) by the group of Jiri Pokorný in Prague. Beginning in 2009, Anirban Bandyopadhyay and colleagues at the National Institute of Material Sciences in Tsukuba, Japan, were able to use nanotechnology to address electronic and optical properties of individual microtubules (Sahu et al., 2013a, b). The group has made a series of remarkable discoveries suggesting that quantum effects do occur in microtubules at biological temperatures. First, they found that electronic conductance along microtubules, normally extremely good insulators, becomes exceedingly high, approaching quantum conductance, at certain specific resonance frequencies of applied alternating current (AC) stimulation. These resonances occur in gigahertz, megahertz and kilohertz ranges, and are particularly prominent in the low megahertz range (e.g. 8.9 MHz). Conductances induced by specific (e.g. megahertz) AC frequencies appear to follow several types of pathways through the microtubule – helical, linear along the microtubule axis, and 'blanket-like' along/around the entire microtubule surface. Second, using various techniques, the Bandyopadhyay group also determined that AC conductance through 25-nm-wide microtubules is greater than through single 4-nm-wide tubulins, indicating cooperative, possibly quantum coherent effects throughout the microtubule, and that the electronic properties of microtubules are programmed within each tubulin. Their results also showed that conductance increased with microtubule length, indicative of quantum mechanisms (Fig. 20); the note after this passage sketches why length-enhanced conductance points away from classical conduction.

Fig. 20: Top: Tentatively proposed picture of a conscious event by quantum computing in one of a vast number of microtubules, all acting coherently so that there is sufficient mass displacement for Orch OR to take place. Tubulins are in classical dipole states (yellow or blue), or in quantum superposition of both dipole states (gray). Quantum superposition/computation evolves during integration phases (1–3) in integrate-and-fire brain neurons, increasing the quantum superposition E_G (gray tubulins) until the threshold is met, at which time a conscious moment occurs, and tubulin states are selected which regulate firing and control conscious behavior. Middle: Corresponding alternative superposed space–time curvatures reaching threshold at the moment of OR and selecting one space–time curvature. Bottom: Schematic of a conscious Orch OR event showing U-like evolution of quantum superposition and increasing E_G until the OR threshold is met, and a conscious moment occurs.

The resonance conductance ('Bandyopadhyay coherence' – 'BC') through tubulins and microtubules is consistent with the intra-tubulin aromatic ring pathways (Fig. 13), which can support Orch OR quantum dipoles, and in which anesthetics bind, apparently to selectively erase consciousness.
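A gloss on why length-enhanced conductance is read as non-classical (my framing, not the authors'): diffusive conduction is Ohmic and falls with length, while coherent ballistic transport follows the Landauer picture and, to first order, does not:

```latex
% Classical (diffusive/Ohmic) conductance falls with conductor length L:
G_{\text{Ohmic}} = \frac{\sigma A}{L},
% whereas coherent ballistic transport obeys the Landauer formula,
G_{\text{ballistic}} = \frac{2e^2}{h}\, M\, T,
% with M conducting channels and transmission T, which to first order
% is independent of L. Conductance that *grows* with microtubule length
% is therefore incompatible with simple Ohmic behavior, which is why the
% Bandyopadhyay results are described as indicative of quantum transport.
```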
Bandyopadhyay's experiments do seem to provide clear evidence for coherent microtubule quantum states at brain temperature. This said, the solid scientific evidence (microtubules and the rest) is not yet completely convincing, and one is left with the desire to contribute to the whole intellectual construction in order not to leave it in its present state. Anyhow, certain parts of the mosaic are particularly appealing: the fact, for instance, that anesthetic gases exert their effects on consciousness, and that actual evidence from genomics and proteomics points to anesthetic action in microtubules. As Faraday said, it is always better to have a partial vision of the facts than to have none. Some could, on the contrary, be fully convinced of the existence and function of objective reductions of the quantum states occurring in, and orchestrated by, biological structures. Of these, microtubules would represent the most efficient and evolutionarily winning example, consciousness being the most visible of its non-epiphenomenal phenotypes. As a novel suggestion relative to their previous studies, "beat frequencies" are introduced by Hameroff and Penrose as a possible source of the observed electro-encephalographic (EEG) correlates of consciousness. Introducing quantum physics into the realm of biology entails another major positive aspect: room is made for Darwinism and chance-and-necessity reasoning. Biological structures such as microtubules evolved (well within Darwinian logic) which happened to cause objective reduction of the quantum state. Once Darwin enters the scene, everything becomes possible. Our mind provides the a posteriori verification. For a deeper look at this concept, the reader is referred to the elaboration of the terms "Ereignis" and "Ereignen" by Martin Heidegger. The basic Hameroff and Penrose assumption would in this case objectively become of paramount importance. The Hameroff–Penrose form of orchestrated objective reduction (Hameroff and Penrose, 2011, 2013) is related to the fundamentals of quantum mechanics and space–time geometry; hence the connection between the basic structure of the Universe and biomolecular processes. Relating these effects to neurons might appear an unjustified, self-inflicted limitation and, in this perspective, the general conclusion should not be avoided: consciousness is a property and a manifestation of life, and life is universal in principle. Thus, consciousness is in principle universal.

A note of caution: Roger Penrose himself recently said: "I don't see why we should take quantum mechanics as sacrosanct. I think there's going to be something else which replaces it." These words, if they can be considered as not being out of context, find their explanation in the incompleteness of quantum theory. The awareness of this incompleteness is at the very basis of the Orch OR theory and reappears throughout this important essay.

Hiroomi Umezawa and Herbert Fröhlich: Quantum Brain Dynamics

Hiroomi Umezawa       Herbert Fröhlich

The basic concept in quantum brain dynamics (QBD) is that the electrical dipoles of the water molecules in the brain constitute a cortical field. The quanta of this field are described as corticons. The field interacts with quantum coherent waves propagating along the neuronal network. There is more than one view within QBD as to how this system supports or instantiates consciousness.
The ideas behind quantum brain dynamics (QBD) derive originally from the physicists Hiroomi Umezawa and Herbert Fröhlich in the 1960s. In the last 20 years, these ideas have been elaborated and given greater prominence by the combined efforts of the Japanese physicists Mari Jibu and Kunio Yasue (1992, 1993) and the Italian physicist Giuseppe Vitiello (1995, 2001). Stuart, Umezawa and Takahashi (1978) proposed the idea of a cortical field in the brain. Water comprises 70% of the brain, and QBD proposes that rather than providing a passive background, water could be an active player in brain processes. Water molecules have a constant electric dipole, and are considered in QBD to be capable of interacting with waves generated by biomolecules that are also electrical dipoles. In QBD, the totality of the water molecules in the brain is viewed as the best candidate for a cortical field, with the water's electrical dipoles binding both to one another and to the biomolecules of the neuronal network. There are also suggested to be long-range waves within the cortical field. The quanta of the cortical field are given the name of corticons, and in Jibu and Yasue's version of the theory, the interaction between the cortical field and the neuronal network, particularly the dendritic part of that network, is the basis of consciousness.

The other half of the theory refers to biomolecular waves propagating through the neuronal network, an idea deriving from the work of Fröhlich, 1968. Fröhlich argued that it was not clear how order was sustained in living systems, given the likely disrupting effect of the fluctuations in biochemical processes (Fröhlich, 1985). His ideas relate mainly to the ordering of the neuronal network, on which Umezawa's proposed cortical field is taken to act. Fröhlich saw the electric potential across the cell membrane as the macroscopic observable of an underlying quantum order. Fröhlich's studies claim to show that with oscillating electrical charges in a thermal bath, a large number of quanta may become condensed into a single state, known as a Bose condensate, allowing long-range correlations amongst the dipoles involved (a toy numerical caricature of this condensation threshold is sketched after this passage). He also proposed that biomolecules with a high electric dipole moment line up along the actin filaments, and that electric dipole oscillations propagate along these filaments in the form of quantum coherent waves. There is some support for these ideas, in the form of experimental confirmation that biomolecules with high electric dipole moment have a periodic oscillation (Gray and Singer, 1989).

Fig. 21: The hypothesis of an individual double as created by our mind.

Vitiello agrees with Fröhlich in arguing that living systems constitute ordered chains of chemical reactions, which could normally be expected to collapse in the random chemical environment of biological tissue. In Vitiello's view, stable ordering comes from the quantum level, but this is described by quantum field theory rather than quantum mechanics. He also claims that the folding of proteins, which is fundamental to the activity of cells, cannot be described by classical physics, but could be quantum ordered.
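As referenced above, Fröhlich condensation can be caricatured numerically. The following toy model (my construction, deliberately simplified and not Fröhlich's full rate equations) pumps two dipole modes and lets them exchange quanta with Bose-stimulated factors; above a pump threshold the lowest mode captures nearly all the quanta:

```python
import numpy as np

# Toy caricature of Frohlich condensation (NOT Frohlich's actual equations):
# two dipole modes are pumped at rate s, lose quanta at rate phi, and
# exchange quanta with Bose "stimulated" (1 + n) factors that favor the
# lower mode by a Boltzmann factor exp(-delta). Above a pump threshold,
# quanta pile up in the lowest mode: a Bose-condensate-like effect.

def steady_state(s, chi=1.0, phi=1.0, delta=0.5, dt=1e-3, steps=200_000):
    n1 = n2 = 0.0                      # n1: low mode, n2: high mode
    up_penalty = np.exp(-delta)        # uphill transfers are suppressed
    for _ in range(steps):             # crude Euler integration
        transfer = chi * (n2 * (1 + n1) - n1 * (1 + n2) * up_penalty)
        n1 += dt * (s - phi * n1 + transfer)
        n2 += dt * (s - phi * n2 - transfer)
    return n1, n2

for s in (0.1, 50.0):                  # weak versus strong pumping
    n1, n2 = steady_state(s)
    print(f"pump s = {s:5.1f}:  n_low = {n1:8.2f}, n_high = {n2:6.2f}")
# Weak pumping gives comparable occupations; strong pumping drives nearly
# all quanta into the lowest mode, i.e. condensation above threshold.
```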
Vitiello (1995, 2001) provides citations which he feels support a quantum dynamical view of biological tissue, notably studies of radiation effects on cell growth, on electromagnetic fields and stress, on dynamical response to external stimuli, on non-linear tunnelling, on coherent nuclear motion in membrane proteins, on optical coherence in biological systems, on weak radiation fields and biological systems (Popp, 1986), and on energy transfer via solitons and coherent excitations. QBD proposes that the cortical field not only interacts with, but also to a good extent controls, the neuronal network. It suggests that biomolecular waves propagate along the actin filaments, an important part of the cytoskeleton, particularly in the vicinity of the cell membrane and dendritic spines. The waves derive energy from ATP molecules stored in the membrane, and these in turn are controlled by calcium ions. These waves are also suggested to control the action of ion channels, which are crucial in the transmission of signals to the synapses. The neuron's membrane is further suggested to act as a Josephson junction, providing insulation between two layers of superconductivity. The superconducting current across the membrane can be controlled by the electrical potentials across the same membrane.

Vitiello also discusses the question of quantum decoherence. He claims that QBD only requires quantum oscillations to last 10–14 picoseconds, well within the time before decoherence would set in (Del Giudice, 1988, 2002). In common with Stuart Hameroff, he additionally argues that ordered water around protein molecules may shield them from the surrounding thermal bath. A decisive further step in developing the approach has been achieved by taking dissipation into account. Dissipation is possible when the interaction of a system with its environment is considered. Vitiello (1995) describes how the system–environment interaction causes a doubling of the collective modes of the system in its environment (Fig. 21). This yields infinitely many differently coded vacuum states, offering the possibility of many memory contents without overprinting. Finally, dissipation generates a genuine arrow of time for the system, and its interaction with the environment induces entanglement. In a recent contribution, Pessa and Vitiello (2003) have addressed additional effects of chaos and quantum noise.

Mari Jibu & Kunio Yasue: Quantum field concepts

Mari Jibu,       Kunio Yasue,       Giuseppe Vitiello

Jibu and Yasue (1992, 1993) appear to see consciousness as simply a function of the interaction of the corticons, the energy quanta which are proposed to arise in the cortical field, with the biomolecular waves of the neuronal network. Vitiello, while thinking in terms of much the same quantum systems as Jibu and Yasue, proposes that these quantum states produce two poles: first a subjective representation of the external world, and secondly a self which opens itself to this representation of the external world. According to Vitiello's version of the theory, consciousness is not strictly speaking in either the self or the external representation but between the two, in the opening of one to the other. The concepts derive from the Japanese physicist Hiroomi Umezawa, 1993, who speculated that understanding the processes of memory in the brain would involve quantum field theory.
This led on to the idea that understanding consciousness would also involve quantum field theory. The first four chapters of their 1993 book provide a standard background to quantum theory, followed by some descriptive passages on the brain; both topics are better described elsewhere, and those without some grounding would be better advised to look at more standard textbooks or popularizations, as the style of the book is generally difficult and unnecessarily repetitive. Getting beyond these introductory stages, the authors make the same point as others in stressing the estrangement between physics, where fundamental new views of nature emerged during the last hundred years, and neuroscience, which has remained largely wedded to 19th-century physics. In particular, physics has tended to think dynamically, in terms of controlled changes. Physics deals primarily with the inanimate, but the concepts of dynamics can be applied to living organisms, as they also undergo controlled changes. The authors suggest that the functions of the cortex might be better understood through the dendritic network, by which information enters cells. They stress that many neurons in the cortex do not have axons but only dendrites. They think that the conventional processing system described by the axon–neurotransmitter–dendrite system may overlook other networks in the brain. Neurons without axons are the majority in the cortex, and the authors see these as the likely basis of consciousness.

Fig. 22: Schematic representation of the synapse and synaptic cleft with the element of quantum tunneling of electrons (a) and the dendritic network (b).

The authors discuss the dendritic network at length. They point out that it is much more sophisticated than the axonal network (Fig. 22). The dendritic membrane comprises biomolecules with electric dipoles; the positive poles of the membrane are aligned on the inner surface and the negative poles on the outer surface. The negative poles on the outer surface attract positive ions, while the positive poles on the inner surface attract negative ions. The regions where these interactions occur are called Debye layers. The dendrites of several neurons are often entangled in a network. Chemical synapses are located on the tips of dendritic spines, and the dendritic membranes are given particular emphasis in this account. In such processes even quantum tunneling may play a significant role (Fig. 22). Since the 1970s, Evan Harris Walker has proposed that quantum tunneling of electrons would take place across junctions between neurons. Stuart Hameroff says that "… gap junctions enable quantum tunneling among dendrites …". According to principles of modern physics, if a particle such as an electron encounters a barrier such as the synaptic junction, there is a finite probability that the particle will … be found on the other side … From the point of view of Bohm's pilot-wave quantum theory, Peter R. Holland says that quantum tunneling is explained because the effective barrier potential is not the classical barrier potential alone, but the classical potential as modified by the quantum potential. From the many-worlds point of view, quantum tunneling means that the electron is in a superposition of position states, some of which are on one side of the junction and some of which are on the other side. (A rough order-of-magnitude sketch of such tunneling probabilities is given below.)
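For the order-of-magnitude sketch promised above: the standard WKB estimate gives a tunneling probability T ≈ exp(−2d√(2mV)/ħ) for an electron crossing a barrier of height V and width d. The barrier height of 0.1 eV below is my number, chosen only for scale:

```python
import math

# Standard WKB order-of-magnitude estimate for electron tunneling,
# T ~ exp(-2*kappa*d) with kappa = sqrt(2*m*V)/hbar.
# The barrier height V = 0.1 eV is an illustrative assumption.

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electronvolt

def tunneling_probability(d_m, V_eV=0.1):
    kappa = math.sqrt(2 * m_e * V_eV * eV) / hbar   # decay constant, 1/m
    return math.exp(-2 * kappa * d_m)

print(f"gap junction, d = 3.5 nm: T ~ {tunneling_probability(3.5e-9):.1e}")
print(f"synaptic cleft, d = 20 nm: T ~ {tunneling_probability(20e-9):.1e}")
# -> roughly 1e-5 for a gap-junction-scale gap, but ~1e-28 for a full
# chemical synaptic cleft, illustrating why gap junctions, rather than
# chemical synapses, are the favored locus for tunneling claims.
```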
Quantum tunneling can therefore also allow quantum superposition states to extend from neuron to neuron across gap junctions. There is experimental confirmation that biomolecules of high electric dipole moment have a periodic oscillation (Fröhlich, 1968). The authors suggest that these oscillations are crucial to the functioning of the brain. This can be called wave cybernetics, because the wave, or biomolecule oscillation, is seen as the controlling factor in the brain. Fröhlich proposed a theory in which biomolecules with high electric dipole moment line up along the actin filaments immediately below the cell membrane, while electric dipole oscillations propagate along each filament as coherent waves. These are maintained by electrons trapped in and moving along the protein molecules. Such a wave is now known as a Fröhlich wave. These waves exchange energy with the electromagnetic field. Stuart, Umezawa, and Takahashi, 1978, proposed the idea of a cortical field. This interacts with the macroscopic dynamics of the main neural network, which in turn transmits signals to the body tissues. The filamentous strings found in the cells also extend outside the cells, forming an extracellular matrix that is also linked to the cell membrane. So the membrane proteins are linked both to the cytoskeleton and to the extracellular matrix.

Fig. 23: Cartoons of so-called Bose–Einstein condensates.

The authors propose that Fröhlich waves propagate along the filamentous strings. The waves are produced by energy stored in ATP molecules at membrane protein sites, which are in turn controlled by calcium ions. The waves also affect the operation of ion channels, which control neural impulses. The authors suggest that this structure can give rise to macroscopic quantum phenomena, similar to superconductivity. They also regard the cell membrane as an insulating layer between two areas of superconductivity, otherwise known as a Josephson junction. This means that the superconducting current across the Josephson junction can be controlled by electric potential differences in the insulating layer. The authors suggest that this quantum activity may facilitate the functioning of the brain, and in particular an interface between the proposed cortical field and the neural network. The cortical field is proposed to contain energy quanta behaving as particles, which the authors call corticons. Corticons are suggested to exist everywhere in the cerebral cortex. The interface between the cortical field and the neural network takes place in the waves propagating along the filamentous strings in the cytoskeleton and the extracellular matrix.

The authors emphasize the nature and importance of water within the brain. They suggest that water is not just a background substance, but is an active component in cell assemblies. This idea lies behind the original concept of the cortical field and corticons. The water molecule has a constant electrical dipole. It also has a symmetrical form that is invariant under reflection. The molecule rotates around its symmetry axis, which is the electrical dipole. Thus the molecule is a quantum mechanical spinning top, which interacts with the fields generated by biomolecules. The totality of water molecules in the brain is seen as the best candidate for the sought-for cortical field. In water, one side of the molecule becomes negatively charged and the other positively charged, creating an electric dipole. This gives rise to an attraction between molecules known as hydrogen bonding.
The attraction is both between water molecules and between water molecules and other molecules with electrical dipoles. Biomolecules such as proteins have constant electric dipoles and connect to water molecules. The cortical field is identified with the water rotational field, created by the spinning dipoles of the water molecules. The field on the cytoskeleton and extracellular matrix is proposed to be a Bose field (Fig. 23), and the interaction between this Bose field and the corticons of the cortical field is seen as the basis of consciousness. Corticons are identified with the energy quanta of the water rotational field of the brain, and interact with each other by emitting and absorbing the exchange bosons of the Bose field. The water rotational field is a dipole field and therefore interacts with an electromagnetic field. There are also suggested to be long-range correlation waves in the water rotational field of the brain. The brain structures described here are thought to be sensitive to, and to modify themselves in response to, information coming into the brain. The combined dynamics of the cortical field and the electromagnetic field comprise what the authors describe as quantum brain dynamics (QBD). The dynamics of the corticons is thought to be capable of controlling the dendritic and neural networks. The authors think that the creation and annihilation of corticons in QBD is what is called consciousness. Unfortunately the authors do not explain why they think this, and therefore, as with more mainstream theories of consciousness, the actual consciousness seems to be created by fiat. There is no more apparent reason why consciousness should arise from this physical interaction than from the physical interaction of electrical potentials and chemicals in the synapses. The authors could have suggested that consciousness was a fundamental property of photons, or of the proposed corticons, or of particular fields, but they do not do this.

Johnjoe McFadden: Electromagnetic fields in the brain

Johnjoe McFadden

McFadden starts by stating that synchronous firing in the brain correlates with awareness and perception, indicating that disturbances in the brain's electromagnetic field also correlate with these. This field is a representation of neuronal information, and its dynamics could be seen as a correlate of consciousness. McFadden, 2001, views this field as the physical substrate of consciousness. Popper, 1997, and Libet, 2006, have both suggested that consciousness might derive from an overarching field that could integrate the processing of neurons, but they did not think that this could be any known physical field. At the same time, there has been considerable interest in synchronous firing of neurons. Awareness has been shown to correlate with the synchrony of firing in the 40–80 Hz range, and this may bind together neurons involved in different aspects of the same visual perception, thus creating the unity of consciousness (Fig. 24). The brain's electromagnetic field is induced by neuron firing, and also by the movement of ions involved in the fluctuation of electrical potential along the cell membrane. The structure of the cortex tends to amplify the induced field. Experiments in the olfactory bulb have demonstrated EEG activity in response to sensory stimuli, with information about the stimuli related to the spatial pattern of the EEG amplitude.
The author concludes that the brain contains a highly structured extracellular electromagnetic field. The field is weak, the transmembrane fields being about 3,000 times stronger. It is suggested that neurotransmission through gap junctions may be voltage-dependent and therefore sensitive to local fields. However, McFadden prefers to concentrate on the voltage-gated ion channels in the cell membranes, because their role is better understood. Synchronous firing is thought to be due to a large number of spatially distributed neurons, and it is thought that many millions of neurons could be influenced by such firing. McFadden claims evidence for neuron communication via the electromagnetic field.

Fig. 24: The electromagnetic field theory of consciousness as part of an integral electromagnetic spectrum.

The medical use of transcranial magnetic stimulation (TMS) is taken to indicate the sensitivity of the brain to weak electromagnetic fields, and as this has impacts on behavior, it is argued to impact neuronal computation and neuronal function. Even when fields are weaker than the surrounding noise, they can modulate neurons. The brain's electromagnetic field is argued to hold the same information as the neuron firing patterns. The wide spatial reach of the electromagnetic field would help to explain the unity of consciousness. Clusters of neurons in the visual cortex have been shown to fire in synchrony in response to particular stimuli. In insects, destruction of synchronous firing has been shown to reduce the ability to discriminate between stimuli. There is indirect evidence for the correlation between synchronous firing and attention and awareness in humans. The olfactory system of rabbits shows that sensory information is encoded in the spatial pattern of the EEG, and therefore of the electromagnetic field. This correlation also reflected what a particular smell meant to the rabbit, when it had been trained to associate particular things with a smell. This suggested that the shape of the electromagnetic field could be related to perception and meaning, which is taken to suggest that consciousness is related to the electromagnetic field. Where there is habituation to a process, and therefore less conscious activity, there is a reduction in synchronous firing, so loss of awareness correlates with reduced disturbance in the brain's electromagnetic field. The theory predicts that only activity that acts on the motor neurons is conscious. This is testable, although there is no direct evidence. The EEG shows that activity increases during creative thinking, and declines with sleep but revives with REM dreaming, so the amount of conscious activity correlates with the amount of electromagnetic activity. The high conductivity of the cerebral fluid in the brain ventricles makes the brain into a kind of Faraday cage, insulating it from external electrical fields. However, it is much easier for magnetic fields to penetrate the brain and other tissues. Moving magnetic fields, such as those used in TMS, do produce effects in the brain.

McFadden and the function of consciousness

McFadden sides with those who argue that consciousness must have a function, or evolution would not have selected for it. Field effects that had an advantageous effect on the performance of ion channels would have been selected for. McFadden thinks that there is information transfer between neurons during synchronous firing.
He proposes that the neural circuits involved in conscious and unconscious activity differ in their sensitivity to the electromagnetic field. The conscious will is claimed to be our experience of the electromagnetic field. He thinks that consciousness is not actually the electromagnetic field, but its ability to transmit information to neurons. He also points out the difficulty of trying to perform two conscious tasks, or a conscious and an unconscious task, at the same time. The two interfere with each other, while unconscious multi-tasking is possible. Consciousness is required for the laying down of long-term memories and for most learning. The cemi field theory conceives that the electromagnetic field in the brain fine-tunes the probabilities of neuron firings. The affected neurons may be part of large connected assemblies, and this leads to memory and learning. In simulated networks, non-synaptic neuronal interactions via the electromagnetic field, and also gap junctions, enhance learning. Modulation of long-term potentiation by electromagnetic fields has also been demonstrated in vitro in rat hippocampal slices.

McFadden and free will

The author claims that free will is the subjective experience of the influence of the cemi field on neurons. However, the influence of the cemi field is seen as entirely deterministic. The fluctuations in the field that are capable of modulating the firing of neurons would all be generated by changing patterns of electrical activity, while the neurons themselves induce the field. The author admits that there might be some element of random quantum fluctuations in the field, but this randomness is unsuitable for producing free will. The author, in common with others in consciousness studies, tries to have it both ways at this point. The functioning of the brain is claimed to be entirely deterministic, but something called 'will' is active in driving our conscious actions. This appears to be a clear contradiction, since the whole idea of will is of an agent which initiates something of its own accord. The cemi theory is trying to provide a plausible explanation of consciousness. The author could have said that consciousness was a fundamental property of electrical charge, or of individual charged particles, or of the photons that intermediate it, thus making it a primitive or a brute fact of the universe. But he does not do this. He says that our conscious will is our experience of the influence of the cemi field. This seems to raise a host of questions and contradictions. If the cemi field isn't conscious itself, who or what is experiencing its influence? This suggests a dualistic non-physical entity that experiences the action of the field. Even if we are happy with this concept, it is not clear why this particular set of electromagnetic fields should produce this experience for this entity. Like many before him, McFadden suddenly declares by fiat that one particular part of the otherwise ordinary material of the brain produces consciousness. Again, it is reasonable to say that evolution selected for a particular type of field that could fine-tune the neurons, but the additional production of a feeling of free will, which is false, has no demonstrable value.

Gustav Bernroider: Ion channel coherence

Gustav Bernroider

Ion channels are a crucial component in the axonal spiking/synaptic firing model of neuronal signaling and information processing.
The axonal signal starts from the body of the neuron and proceeds down an extension called the axon, by means of a fluctuation in the difference in electrical potential across the membrane that forms the exterior of the axon. The membrane is formed by a double layer of lipids. The ion channels consist of protein molecules inserted through the lipid bilayer. The axon fires when sodium (Na+) ions flow in through one set of ion channels, and subsequently returns to its resting state when potassium (K+) ions flow out through another set of ion channels. This process continues down the length of the axon until it reaches the synapse, triggering it to fire and thus communicate with other neurons. Ion channels are thus a key mechanism in the brain's signaling and information processing (see Fig. 25).

Fig. 25: The potassium channel structure that protects the K+ ion from decoherence (above), and the flow of quantum information through entangled series of channels.

Bernroider and Roy, 2004, 2005, base this theory on recent studies of ion channels. These have been made possible by advances in high-resolution atomic-level spectroscopy and accompanying molecular dynamics simulations. In this work, they draw particularly on the work of the MacKinnon group, and on studies of the potassium (K+) channel, especially the closed state of this channel. The functioning of the K+ channel occurs in two stages: firstly, the selection of K+ ions in preference to any other species of ion, and secondly, voltage gating that controls the flow of these favored K+ ions. The authors say that the traditional understanding of both functions has been altered by the recent studies. In its closed state, the channel is now seen to stabilise three K+ ions, two in the permeation filter of the ion channel and one in a water cavity to the intracellular side of this permeation path. In the case of the channel's voltage gating, the electrical charges involved, which were previously thought to act independently of the surrounding proteins and lipids, are now seen to be coupled to these proteins and lipids, and are thus involved in the gating process. Atomic-level spectroscopy has revealed the detailed structure of the K+ channel in its closed state. The filter region of the channel has a framework of five rings of four oxygen atoms, each oxygen being part of a carbonyl group of an amino acid in the surrounding protein. Adjacent rings form the so-called binding pockets, each involving eight oxygen atoms in total. Both ions in the channel oscillate between two configurations of binding pockets. Bernroider and Roy's calculations lead them to claim that ion permeation can only be understood at the quantum level. Taking this as an initial assumption, they go on to ask whether the resulting model of the ion channel can be related to logic states. Their calculations suggest that the K+ ions and the carbonyl oxygen atoms of the binding pockets are two quantum-entangled sub-systems, and they equate this to a quantum computational mapping (a toy illustration of such a mapping is sketched below). The K+ ions that are destined to be expelled from the channel could, in the authors' hypothesis, encode information about the state of the oxygen atoms in the axon membrane. In a later paper, presented at the Quantum Mind conference, Bernroider, 2007, proposed that different ion channels could be non-locally entangled, implying a quantum process over an extended area of the axon.
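A toy illustration of the 'quantum computational mapping' and channel-entanglement ideas (the state labels and amplitudes here are mine, chosen for illustration; this is not Bernroider and Roy's actual formalism):

```python
import numpy as np

# Toy model: the two K+ ion configurations in the closed filter treated as
# the basis states |0> and |1> of one logical qubit per channel. Labels
# and amplitudes are illustrative only.

ket0 = np.array([1.0, 0.0])   # configuration A of the two filter ions
ket1 = np.array([0.0, 1.0])   # configuration B of the two filter ions

psi = (ket0 + ket1) / np.sqrt(2)   # one channel in equal superposition

# Two channels entangled in a Bell-like state, loosely following the 2007
# suggestion of non-locally entangled ion channels:
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

probs = bell**2   # measurement probabilities in the configuration basis
print(dict(zip(["00", "01", "10", "11"], probs.round(2))))
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}: finding one channel's
# configuration instantly fixes the other's, the hallmark of entanglement.
```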
Given the importance of the ion channels in brain functioning, this model would give quantum coherence and non-locality in the axon membrane an integral role in the brain's signalling and information processing. Further to this, Bernroider and Roy have pointed out a similarity between the structure of the K+ ion channel and some recent proposals for building quantum computers, in which ions are held in microscopic traps. The authors argue that their model is well protected against decoherence, which has always been the most cogent criticism of quantum consciousness proposals. In particular, they claim that Tegmark's calculations do not apply to their model. The authors agree that for ions moving freely in water, Tegmark's coherence time of 10^-20 seconds would apply. However, they argue that the situation of the ions held in the permeation filter of the ion channel is markedly different, with a temperature about half the prevailing level for the brain, and the ions protected from decoherence by the binding pockets and the adjoining water cavity.

Bernroider and Roy propose a quantum information system in the brain that is driven by the entangled ion states in the voltage-gated ion channels. These ion channels, situated in the neuron's membrane, are a crucial component of the conventional neuroscience description of axon spiking leading to neural transmitter release at the synapses. The ion channels allow the influx and efflux of ions from the cell, driving the fluctuation of electrical potential along the axon, which in turn provides the necessary signal to the synapse.

Fig. 26: Crystallographic X-ray-determined structure of a potassium channel (a) and a schematic representation of it showing the polypeptide units (b).

The authors draw particularly on the work of MacKinnon and his group, notably his crystallographic X-ray work; see Fig. 26. The study shows that ions are coordinated by carbonyl-based oxygen atoms or by water molecules. An ion channel can be in either a closed or an open state, and in the closed state there are two ions in the permeation path that are confined there. The authors regard this closed-gate arrangement as the essential feature with regard to their research work. The open gate presents very little resistance to the flow of potassium ions, but the closed gate is a stable ion–protein configuration. Bernroider's theory might be seen to represent even more of a challenge to conventional neuroscience than the other quantum consciousness theories. This is because it recruits as its basis the axon membrane and ion channels, which form a crucial part of the conventional neuroscience model, and then tries to remodel these core structures on a quantum-driven basis. It is hard to deny that if this theory were to become better substantiated, it would produce in neuroscience a revolution of the most profound kind. The essential question was how selectivity could be maintained without compromising conductance. The interaction between ions, attracted water molecules and neighbouring oxygen atoms is considered to require a quantum description. This raises the question of whether quantum effects can propagate in the classical states of proteins. The access of ions to the pore gate is a relatively slow process, not likely to require quantum processing. However, the selectivity filter can change its conformation from permissive to non-permissive on a much shorter timescale.
It appears that under the conditions of the selectivity filter, the ion's wave function can become highly delocalized over a significant part of the filter region.

Fig. 27: Neurotransmission via a neuronal synapse with ion channels (a) and a neuronal network (b).

A New Theory of Quantum Consciousness?

Bernroider's theory could potentially be a vehicle for transferring consciousness from the implicate into the explicate order of David Bohm. Bernroider differs from Penrose and Hameroff's Orch OR model in his emphasis on the axons and membranes, as opposed to the dendrites and the cytoskeleton. However, there are similarities between the two models in that both of them propose quantum coherence, non-locality and subsequent wave function collapse linked to the brain's macroscopic information processing activity. As it stands, Bernroider's proposals only deal with information processing in the brain rather than consciousness as such. However, it appears possible that wave function collapse in the ion channels might link to Penrose's proposed geometry of space-time, just as readily as wave function collapse in the cytoskeleton (Fig. 27). Bernroider's theory is distinct from all earlier quantum consciousness theories in locating its mechanism in structures that are central to mainstream theories of the brain's information processing and production of consciousness. If future experimentation were to substantiate the Bernroider proposals, this would involve a revolution in neuroscience of the most profound character.

Chris King: Cosmology, consciousness, chaos and fractal geometry

Chris King

Chris King (1989, 2003, 2011, 2012, 2014) favors the approach of Chalmers over the approach of Dennett in looking at the problem of consciousness. He describes Dennett's 'multiple drafts' concept as a description of how verbal reports of internal states are produced, but as lacking any explanation of how consciousness is achieved (Dennett, 2007). He reminds us of Chalmers' comment that a theory of physics that does not explain consciousness is not a theory of everything. Furthermore, he argues that ultimately our knowledge of objective science is only available via our subjective conscious experience (Fig. 28).

Fig. 28: Human consciousness as a template for a spectrum of common and transcendental experiences (from King, 2012).

He cautions against the common tendency to try to discount quantum uncertainty as something that will be averaged out as a result of the very large number of quanta involved in any macroscopic state. In chaos theory, which may well have a role in brain processes, small fluctuations may be inflated into important differences, and quantum uncertainties may be included in these small differences. King goes on to look at the possible uses of quantum computation. He mentions that classical computing has a problem with the potentially unlimited time needed to check a range of possibilities. King favors the transactional interpretation of EPR-type non-local quantum correlations. In the transactional interpretation of non-local events, when a measurement is made on an entangled particle, it sends a photon back in time to when it and the other entangled particle were emitted, and then forward in time to the second entangled particle. Thus the net time taken to send the quantum information about the measurement of the first particle is zero, and the effect of measurement on the second particle appears to be instantaneous, despite the spatial gap between them.
The backward travel in time, which looks like an exotic feature, is allowed by the laws of physics as embodied in both the Maxwell and Schrödinger equations. King (2014) thinks that the transactional interpretation of non-locality can be combined with quantum computing to give a spacetime-anticipating system, and that this may be basic to the way the brain works. He argues that the brain's performance is not particularly impressive in terms of what classical computers are good at, but it is impressive in terms of anticipating environmental and behavioral changes. Further citing this article: "The transactional interpretation visualizes an exchanged particle wave function as the interference of a retarded usual time direction offer wave and a time-reversed advanced confirmation wave. Time symmetric interactions also occur in quantum field theories, where special relativity allows both advanced and retarded solutions because of the energy relation $E = \pm\sqrt{p^2 + m^2}$. Virtual photons and electron-positron pairs, for example, deflect an electron in quantum electrodynamics. Since the photon is its own anti-particle, a negative energy photon traveling backwards in time is precisely a positive energy one traveling forwards. In quantum mechanics, not only are all probability paths traced in the wave function, but past and future are interconnected in a time-symmetric hand-shaking relationship, so that the final states of a wave-particle or entangled ensemble, on absorption, are boundary conditions for the interaction, just as the initial states that created them are. The transactional interpretation of quantum mechanics expresses this relationship neatly in terms of offer waves from the past emitter/s and confirmation waves from the future absorbers, whose wave interference becomes the single or entangled particles passing between. When an entangled pair are created, each knows instantaneously the state of the other, and if one is found to be in a given state, e.g. of polarization or spin, the other is immediately in the complementary state, no matter how far away it is in space-time. This is the spooky action at a distance which Einstein feared, because it violates local Einsteinian causality, in which particles cannot communicate faster than the speed of light. However quantum entanglement cannot be used to make classical causal predictions, which would formally anticipate a future event, so the past-future handshaking lasts only as long as a particle or entangled ensemble persists in its wave function. Weak quantum measurement (WQM) is one way a form of quantum anticipation could arise. Weak quantum measurement (Aharonov et al. 2010) is a process where a quantum wave function is not irreversibly collapsed by absorbing the particle, but a small deformation is made in the wave function whose effects become apparent later, when the particle is eventually absorbed, e.g. on a photographic plate in a strong quantum measurement. Weak quantum measurement changes the wave function slightly mid-flight between emission and absorption, and hence before the particle meets the future absorber involved in eventual detection. A small change is induced in the wave function, e.g. by slightly altering its polarization along a given axis (Kocsis et al. 2011). 
This cannot be used to deduce the state of a given wave-particle at the time of measurement, because the wave function is only slightly perturbed and is not collapsed or absorbed, as in strong measurement, but one can build up a prediction statistically over many repeated quanta of the conditions at the point of weak measurement, once post-selection data is assembled after absorption. This suggests (Merali, 2010, Cho, 2011) that, in some sense, the future is determining the present, but in a way we can discover conclusively only by many repeats. Focus on any single instance and you are left with an effect with no apparent cause, which one has to put down to random experimental error. This has led some physicists to suggest that free will exists only in the freedom to choose not to make the post-selection(s) revealing the future…" To view and read the full book click here.
tisdag 28 februari 2017 Update of realQM I have put up an update of realQM for inspection, with Chapter 6 presenting the basic model. It includes in particular the following remark on the difference between realQM and the stdQM of text books: Schrödinger approached mathematical modeling of the atom starting with wave functions and then seeking an equation satisfied by the wave functions as solutions, thus proceeding from solutions to equation rather than, as is the normal approach, from equation to solutions with the equation formulated on physical principles. This is reflected in the absence of any derivation of Schrödinger's equation from basic physical principles, which is a main defect of stdQM. Starting from solutions and then finding an equation satisfied by the solutions hides the physics, while starting with the equation requires physics to formulate the equation. And this is the essence of realQM! fredag 24 februari 2017 Skeptics Letter Reaches the White House The Washington Examiner reports on this historic letter, as does the Washington Times. lördag 18 februari 2017 Scott Pruitt New Director of EPA Trump's Pick for EPA Chief Scott Pruitt: Climate Change Dissent Is Not a Crime Pruitt is expected to scrap the Clean Power Plan (CPP), which defines CO2, the gas of life, as a toxin to be put under severe control, as well as the Paris Agreement formed on the same premise. Pruitt's standpoint based on science is that there is no scientific evidence that CO2 is toxic or that CO2 emission from burning of fossil fuels can cause measurable global warming.  The work force at an EPA without CPP is estimated to be reduced from 15000 to 5000, with the new main concern being clean air and water, not meaningless control of CO2. This brings hope to all the poor people of the world that there can be energy and food for everybody!  lördag 11 februari 2017 QM: Waves vs Particles: Schrödinger vs Born From The Philosophy of Quantum Mechanics: The Interpretations of QM in Historical Perspective by Max Jammer, we collect the following account of Schrödinger's view of quantum mechanics as wave mechanics, in full correspondence with realQM: • Schrödinger interpreted quantum theory as a simple classical theory of waves. In his view, physical reality consists of waves and waves only.  • He denied categorically the existence of discrete energy levels and quantum jumps, on the grounds that in wave mechanics the discrete eigenvalues are eigenfrequencies of waves rather than energies, an idea to which he had alluded at the end of his first Communication. In the paper "On Energy Exchange According to Wave Mechanics," which he published in 1927, he explained his view on this subject in great detail. • The quantum postulate, in Schrödinger's view, is thus fully accounted for in terms of a resonance phenomenon, analogous to acoustical beats or to the behavior of "sympathetic pendulums" (two pendulums of equal, or almost equal, proper frequencies, connected by a weak spring).  • The interaction between two systems, in other words, is satisfactorily explained on the basis of purely wave-mechanical conceptions as if the quantum postulate were valid - just as the frequencies of spontaneous emission are deduced from the time-dependent perturbation theory of wave mechanics as if there existed discrete energy levels and as if Bohr's frequency postulate were valid.  
• The assumption of quantum jumps or energy levels, Schrödinger concluded, is therefore redundant: "to admit the quantum postulate in conjunction with the resonance phenomenon means to accept two explanations of the same process. This, however, is like offering two excuses: one is certainly false, usually both."  • In fact, Schrödinger claimed, in the correct description of this phenomenon one should not apply the concept of energy at all but only that of frequency. We contrast with the following account of Born's view of quantum mechanics as particle statistics: • Only four days after Schrödinger's concluding contribution had been sent to the editor of the Annalen der Physik, the publishers of the Zeitschrift für Physik received a paper, less than five pages long, titled On the Quantum Mechanics of Collision Processes, in which Max Born proposed, for the first time, a probabilistic interpretation of the wave function, implying thereby that microphysics must be considered a probabilistic theory. • When Born was awarded the Nobel Prize in 1954 "for his fundamental work in quantum mechanics and especially for his statistical interpretation of the wave function," he explained the motives of his opposition to Schrödinger's interpretation as follows:  • "On this point I could not follow him. This was connected with the fact that my Institute and that of James Franck were housed in the same building of the Göttingen University. Every experiment by Franck and his assistants on electron collisions (of the first and second kind) appeared to me as a new proof of the corpuscular nature of the electron." • Born's probabilistic interpretation, apart from being prompted by the corpuscular aspects in Franck's collision experiments, was also influenced, as Born himself admitted, by Einstein's conception of the relation between the field of electromagnetic waves and the light quanta. • In the just mentioned lecture delivered in 1955, three days before Einstein's death, Born declared explicitly that it was fundamentally Einstein's idea which he (Born) applied in 1926 to the interpretation of Schrödinger's wave function and which today, appropriately generalized, is made use of everywhere.  • Born's probability interpretation of quantum mechanics thus owes its existence to Einstein, who later became one of its most eloquent opponents. We know that the view of Born, when forcefully promoted by Bohr, eliminated Schrödinger from the scene of modern physics and today is the text book version of quantum mechanics named the Copenhagen Interpretation. We understand that Born objected to Schrödinger's wave mechanics because he was influenced by Einstein's 1905 idea of a "corpuscular nature" of light and certain experiments suggesting a "corpuscular nature" of electrons.  But associating a "corpuscular nature" with light and electrons meant a giant step back from the main advancement of 19th century physics in the form of Maxwell's theory of light as electromagnetic waves, a step back first taken by Einstein but then abandoned, as expressed by Jammer: • Born's original probabilistic interpretation proved a dismal failure if applied to the explanation of diffraction phenomena such as the diffraction of electrons.  • In the double-slit experiment, for example, Born's original interpretation implied that the blackening on the recording screen behind the double-slit, with both slits open, should be the superposition of the two individual blackenings obtained with only one slit opened in turn.  
• The very experimental fact that there are regions in the diffraction pattern not blackened at all with both slits open, whereas the same regions exhibit strong blackening if only one slit is open, disproves Born's original version of his probabilistic interpretation.  • Since this double-slit experiment can be carried out at such reduced radiation intensities that only one particle (electron, photon, etc.) passes the apparatus at a time, it becomes clear, on mathematical analysis, that the ψ-wave associated with each particle interferes with itself, and the mathematical interference is manifested by the physical distribution of the particles on the screen. The wave function must therefore be something physically real and not merely a representation of our knowledge, if it refers to particles in the classical sense. (A numerical sketch of this interference point is given at the end of this post.) Summing up:  • Real wave mechanics in the spirit of Schrödinger makes a lot of sense, and that is the starting point of realQM. • Born's particle statistics does not make sense, and the big trouble is that this is the text book version of quantum mechanics. How could it be, with these odds, that Born took the scene? The answer is the "obvious" generalisation of Schrödinger's wonderful 3d equation for the Hydrogen atom with one electron, which has physical meaning, into the 3N-dimensional linear Schrödinger equation for an atom with $N > 1$ electrons, a trivial generalisation without physical meaning. There should be another generalisation which stays physical, and that is the aim of realQM. In the end Schrödinger may be expected to take the game because he has a most perfect and efficient brain, according to Born. To get more perspective let us quote from Born's 1954 Nobel Lecture: Born's argument against Schrödinger's wave mechanics in the spirit of Maxwell, in favor of his own particle mechanics in the spirit of Newton, evidently was that a "tick" of a Geiger counter or a "track" in a cloud chamber, both viewed to have "particle-like quality", can only be triggered by a "particle"; but there is no such necessity...the snap of a whip is like a "particle" generated by a "wave"... Born ends with: • How does it come about then, that great scientists such as Einstein, Schrödinger, and De Broglie are nevertheless dissatisfied with the situation?  • The lesson to be learned from what I have told of the origin of quantum mechanics is that probable refinements of mathematical methods will not suffice to produce a satisfactory theory, but that somewhere in our doctrine is hidden a concept, unjustified by experience, which we must eliminate to open up the road.
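To make Jammer's double-slit point concrete, here is a minimal numerical sketch (in Python; the wavelength, slit separation and screen distance are illustrative values, not taken from Jammer): the intensity with both slits open is $|\psi_1+\psi_2|^2$, with dark fringes, and not the featureless sum $|\psi_1|^2+|\psi_2|^2$ that Born's original reading would give.

```python
import numpy as np

# Far-field two-slit pattern: each slit k contributes a wave psi_k on the screen.
x = np.linspace(-5e-3, 5e-3, 2001)   # screen coordinate (m)
lam = 500e-9                         # wavelength (m), illustrative
d = 50e-6                            # slit separation (m), illustrative
L = 1.0                              # slit-to-screen distance (m), illustrative

k = 2 * np.pi / lam
r1 = np.sqrt(L**2 + (x - d / 2)**2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)  # path length from slit 2
psi1 = np.exp(1j * k * r1)
psi2 = np.exp(1j * k * r2)

born_original = np.abs(psi1)**2 + np.abs(psi2)**2  # sum of blackenings: flat
wave_mechanics = np.abs(psi1 + psi2)**2            # self-interference: fringes

print("sum of intensities: min %.2f, max %.2f" % (born_original.min(), born_original.max()))
print("interference:       min %.2f, max %.2f" % (wave_mechanics.min(), wave_mechanics.max()))
# First line prints 2.00, 2.00 (no fringes); second prints ~0.00 and ~4.00
# (dark regions appear only when BOTH slits are open), which is Jammer's point.
```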
fredag 10 februari 2017 2500 Years of Quantum Mechanics Erwin Schrödinger connects in Nature and the Greeks (1954) and in 2400 Jahre Quantenmechanik (1948) the standard Copenhagen Interpretation of his wave function of quantum mechanics back to the Greek atomists Leucippus and Democritus (born around 460 BC), preceded by the view of Anaximenes (died about 526 BC), disciple of Anaximander, of matter as collections of "particles" as "indivisible smallest bodies separated by void" subject to "rarefaction and condensation". In the Copenhagen Interpretation wave functions are supposed to represent probability distributions of collections of electrons viewed as "particles in void" in the same way as the Greek atomists did 2500 years ago. The contribution from modern physics to this ancient view is the element of probability, eliminating causality by stating that "particles" are supposed to "jump around", or "jiggle" in the terminology of Feynman, without cause, and thus always be nowhere and everywhere in the void at the same time. Schrödinger compares this ancient "particle" view boosted by probability with his own opposite view that "all is waves without void obeying causality" as possibly a true advancement of physics. This is the starting point of realQM...as ontic/realistic/objective rather than epistemic/idealistic/subjective... Recall Roger Penrose in Foreword to Nature and the Greeks and Science and Humanism: • Moreover, in my personal view, the more "objective" philosophical standpoints of Schrödinger and Einstein with respect to quantum mechanics are immeasurably superior to the "subjective" ones of Heisenberg and Bohr.  • While it is often held that the remarkable successes of quantum physics have led us to doubt the very existence of an "objective reality" at the quantum level of molecules, atoms and their constituent particles, the extraordinary precision of the quantum formalism - which means, essentially, of the Schrödinger equation - signals to us that there must indeed be a "reality" at the quantum level, albeit an unfamiliar one, in order that there can be a "something" so accurately described by that very formalism. tisdag 7 februari 2017 Towards a New EPA Without CO2 Alarmism The US Environmental Protection Agency EPA is facing a complete revision along a plan drawn up by CO2 alarmism skeptic Myron Ebell, but EPA still trumpets the same old CO2 alarmism of the Obama administration under the headlines of Climate Change: • Humans are largely responsible for recent climate change. • Greenhouse gases act like a blanket around Earth, trapping energy in the atmosphere and causing it to warm. This phenomenon is called the greenhouse effect... and is natural and necessary to support life on Earth. However, the buildup of greenhouse gases can change Earth's climate and result in dangerous effects to human health and welfare and to ecosystems. The reason that this propaganda is still on the EPA web page can only be that the new director of EPA Scott Pruitt has not yet been confirmed. It will be interesting to see the new web page after Pruitt has implemented the plan of Ebell to dismantle CO2 alarmism...in the US...and then... söndag 5 februari 2017 From Meaningless Towards Meaningful QM? The Schrödinger equation as the basic model of atom physics descended as a heavenly gift to humanity in an act of godly inspiration inside the mind of Erwin Schrödinger in 1926. But the gift showed itself to hide poison: Nobody could give the equation a physical meaning understandable to humans, and that unfortunate situation has prevailed into our time, as expressed by Nobel Laureate Steven Weinberg (and here): Weinberg's view is a theme on the educated physics blogosphere of today: Sabine agrees with Weinberg that "there are serious problems", while Lubos insists that "there are no problems". There are two approaches to mathematical modelling of the physical world: 1. Pick symbols to form a mathematical expression/equation and then try to give it a meaning. 2. Have a meaningful thought and then try to express it as a mathematical expression/equation.  Schrödinger's equation was formed according to 1. rather than 2., and has resisted all efforts to be given a physical meaning. 
Interpreting Schrödinger's equation has turned out to be like interpreting the Bible as authored by God rather than human minds. What makes Schrödinger's equation so difficult to interpret in physical terms is that it depends on $3N$ spatial variables for an atom with $N$ electrons, while an atom with all its electrons seems to share experience in a common 3-d space.  Here is how Weinberg describes the generalisation from $N=1$ in 3 space dimensions to $N>1$ in $3N$ space dimensions as "obvious": • More than that, Schrödinger's equation had an obvious generalisation to general systems. Weinberg takes for granted that what "is obvious" does not have to be explained.  But everything in rational physics needs rational argumentation and nothing "is obvious", and so this is where quantum mechanics branches off from rational physics. If what is claimed to be "obvious" in fact lacks rational argument, then it may simply be all wrong. The generalisation of Schrödinger's equation to $N>1$ fell into that trap, and that is the tragedy of modern physics. There is nothing "obvious" in the sense of "frequently encountered" in the generalisation of Schrödinger's equation from 3 space dimensions to 3N space dimensions, since it is a giant leap away from reality and as such utterly "non-obvious" and "never encountered" before. In realQM I suggest a different form of Schrödinger's equation as a system in 3d with physical meaning. (A simple count of grid points, sketched at the end of this post, shows the size of the leap from 3 to 3N dimensions.) PS Note how Weinberg describes the foundation of quantum mechanics: • The first postulate of quantum mechanics is that physical states can be represented as vectors in a sort of abstract space known as Hilbert space. • According to the second postulate of quantum mechanics, observable physical quantities like position, momentum, energy, etc., are represented as Hermitian operators on Hilbert space.  We see that these postulates are purely formal and devoid of physics. We see that the notions of Hilbert space and Hermitian operator are elevated to have a mystical divine quality, as if Hilbert and Hermite were gods like Zeus (physics of the sky) and Poseidon (physics of the sea)...much of the mystery of quantum mechanics comes from assigning meaning to such formalities without meaning... The idea that the notion of Hilbert space is central to quantum mechanics was supported by an idea that Hilbert space, as a key ingredient in the "modern mathematics" created by Hilbert 1926-32, should be the perfect tool for "modern physics", an idea explored in von Neumann's monumental Mathematical Foundations of Quantum Mechanics.  Here the linearity of Schrödinger's equation is instrumental and its many dimensions don't matter, but it appears that von Neumann missed the physics: • I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space no more. (von Neumann to Birkhoff 1935)
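To see the size of that leap in the plainest possible terms, here is a minimal sketch (in Python; the grid resolution M is an arbitrary illustrative choice, not from the post) counting the values needed to tabulate wave functions on a grid with M points per coordinate direction: one wave function on $R^{3N}$ needs $M^{3N}$ values, while a realQM-style system of $N$ wave functions on $R^3$ needs only $N \cdot M^3$.

```python
M = 100  # grid points per coordinate direction (illustrative)

for N in (1, 2, 10):  # number of electrons
    stdqm = M ** (3 * N)   # one wave function on a 3N-dimensional grid
    system = N * M ** 3    # N coupled wave functions, each on a 3-d grid
    print(f"N = {N:2d}:  3N-dim grid needs {stdqm:.1e} values,  3-d system needs {system:.1e}")
```

The exponential growth with N is the computational face of the physical objection raised above: the 3N-dimensional wave function is an object that can neither be computed nor observed in full for any atom beyond Hydrogen.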
fredag 3 februari 2017 Unphysical Basis of CO2 Alarmism = Hoax CO2 alarmism is based on an unphysical version of Stefan-Boltzmann's Law and associated Schwarzschild equations for radiative heat transfer, stating a two-way radiative heat transfer from-warm-to-cold and from-cold-to-warm with net transfer as the difference between the two-way transfers. This is expressed as "back radiation" from a colder atmosphere to a warmer Earth surface in Kiehl-Trenberth's Global energy budget and in Pierrehumbert's Infrared radiation and planetary temperature based on Schwarzschild's equations, presented as the physical basis of CO2 alarmism. In extended writing I have exposed the unphysical nature of radiative heat transfer from-cold-to-warm as a violation of the 2nd law of thermodynamics, see e.g. earlier posts. Massive two-way radiative heat transfer between two bodies is unphysical because it is unstable, with the net transfer arising from the difference between two gross quantities, and the 2nd law says that Nature cannot work that way: There is only transfer from-warm-to-cold and there can be no transfer from-cold-to-warm. Radiative heat transfer is always one-way from-warm-to-cold. (The sketch at the end of this post puts rough numbers on the gross and net fluxes.) CO2 alarmism is thus based on a picture of massive radiative heat transfer back-and-forth between atmosphere and Earth surface, as an unstable system threatening to go into "run-away global warming" at the slightest perturbation.  But there is no true physics behind this picture, only alarmist fiction.  Real physics indicates that global climate is stable rather than unstable, and as such insensitive to a very small change of the composition of the atmosphere upon doubling of CO2. There is little/no scientific evidence indicating that the effect could be measurable, that is be bigger than 0.5 C. Note that climate models use Schwarzschild's equations to describe radiative heat transfer, and the fact that these equations do not describe true physics is a death-blow to the current practice of climate simulation used to sell CO2 alarmism. So, when you meet the argument that Pierrehumbert is an authority on infrared radiation and planetary temperature, you can say that this is not convincing, because Pierrehumbert is using incorrect physics (which also comes out in the fact that he forgets gravitation, and not radiation, as the true origin of the very high temperature on the surface of Venus). If now CO2 alarmism is based on incorrect physics or non-physics, then it may be fair to describe it as "hoax". Think of it: Suppose that "scientific consensus" through MSM is bombarding you with a message that the Earth has to be evacuated because there is imminent fear that the "sky is going to fall down" because Newton's law of gravitation says that "everything is pulled down". Would you then say that "since it is said so it must be so" or would you say that this is a non-physical misinterpretation of Newton's law?  Think of it! The edX course Making Sense of Climate Science Denial is a typical example of CO2 alarmism based on the incorrect physics of "back radiation", which is forcefully trumpeted by the educational system, as illustrated in the key picture of the course.
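To put rough numbers on the gross-vs-net distinction discussed above, here is a minimal sketch (in Python; the two temperatures are illustrative round values, not measurements): the two-way picture decomposes a modest net flux into two large opposing gross flows, while the one-way picture works with the net flux directly. The arithmetic agrees either way; the dispute in the post is about which decomposition describes real physics.

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant (W/m^2/K^4)

T_surface = 288.0      # Earth surface temperature (K), illustrative
T_atm = 255.0          # effective atmospheric emission temperature (K), illustrative

gross_up = SIGMA * T_surface**4     # "upward" gross flux, about 390 W/m^2
gross_down = SIGMA * T_atm**4       # "back radiation" gross flux, about 240 W/m^2
net = SIGMA * (T_surface**4 - T_atm**4)  # one-way net flux, about 150 W/m^2

print(f"gross up  : {gross_up:6.1f} W/m^2")
print(f"gross down: {gross_down:6.1f} W/m^2")
print(f"difference: {gross_up - gross_down:6.1f} W/m^2  (equals the net flux {net:6.1f})")
```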
Newtonian Mechanics The Hen and the Egg of Gravitation We suggest viewing the gravitational field as the primary variable, which generates mass by local differentiation, instead of being generated by mass by global integration in action at distance. Does the Earth Rotate? We show that the equivalence of inertial and gravitational mass results from an operational definition based on Newton's 2nd Law connecting mass to acceleration. Hubble's Law Without Dark Energy Hubble's Law states that galaxies appear to be receding from the Earth with velocities proportional to their distances from the Earth. We derive Hubble's Law using Newtonian mechanics from a Big Bang scenario with a rapid expansion phase from a hot dense spherical initial rest state centered at the origin of a Euclidean coordinate system, under a pressure force increasing linearly with distance from the origin compatible with a constant heat source, followed by expansion with constant velocity with the heat source shut off. Gravitational Law as Perfect Harmony as Perfect Marriage Does Einstein's Generalization of Newton's Gravitation Survive Ockham's Razor? Universality of Newton's Law of Gravitation The Universe as Weakly Compressible Gas subject to Pressure and Gravitational Forces New View of Motion under Gravitation without Classical Mysteries Zeno's Arrow Paradox Still Unresolved after 2500 Years The Equivalence Principle from Newton's 2nd Law From Spooky Action at Distance to Dig where You Are Physics Illusion 1: Gravitational Attraction as Instant Action at Distance Physics Illusion 9: Instant Action at Distance Not Physical Nor Needed Physics Illusion 12: Modern vs Classical World Physics Illusion 14: Gravitational Motion by Instant Action at Distance Fluid Mechanics The Secret of Turbulence We explain the basic nature of turbulence and show that it is a necessary feature of complex flows, which properly used makes it possible to fly, sail, swim, and extract energy from flows of air and water, and more generally live an interesting life. New Theory of Drag and Lift We give an introduction to a new mathematical theory for the generation of lift and drag on a body moving through a slightly viscous incompressible fluid such as air and water, as well as a large variety of applications. D'Alembert's Paradox A new resolution of d'Alembert's paradox from 1752 is presented. The new resolution is based on computational solution of the incompressible inviscid Euler equations with slip boundary condition, showing that zero-drag potential flow is unstable and develops into a turbulent flow with substantial drag. The new resolution is entirely different from the official resolution supported by the fluid dynamics community based on Prandtl's boundary layer theory, and is supported by mathematical analysis, computation and experiment. The Spell of Prandtl's Laminar Boundary Layer 20th century fluid mechanics has been obsessed with Prandtl's theory of (separation in) viscous laminar boundary layers, despite the fact that the fundamentally different case of most importance concerns (separation in) slightly viscous turbulent boundary layers. The Spell of Kutta-Zhukovsky's Circulation Theory It is shown that the classical circulation theory for the lift of a wing by Kutta-Zhukovsky represents a misunderstanding of mathematical logic. 
The true reason that an airplane can fly is something else than circulation, while pilots are told (and possibly also believe) that lift comes from circulation… Flow Separation and Divorce Cost The drag and lift of a body moving through a fluid depend on the mechanism of flow separation, which is shown to be fundamentally different in slightly viscous turbulent flow and laminar flow. Second Law of Thermodynamics We present a deterministic continuum mechanics foundation of thermodynamics for slightly viscous fluids or gases based on a 1st Law in the form of the Euler equations expressing conservation of mass, momentum and energy, and a 2nd Law formulated in terms of kinetic energy, internal (heat) energy, work and shock/turbulent dissipation, without reference to entropy. Black-Body Radiation A new explanation of the spectrum of black-body radiation is presented based on finite precision computation instead of statistics. The Direction of Time We explain why certain physical processes are irreversible and define a direction or arrow of time, based on viewing the process as a form of analog computation with finite precision, in which sharp differences that necessarily develop are necessarily destroyed. Efficiency of Heat Pumps and Refrigerators A new form of the 2nd law of thermodynamics is used to analyze the efficiency of heat pumps and refrigerators. Global Climate Mathematics of Global Warming Present far-reaching policies to limit CO2 emission are based on mathematical climate models which show better resemblance to observations with sources from CO2 emission than without. The reliability of these models is unknown, while the consequences for the developing world of strict emission control can be severe. Climate Sensitivity We argue that basic climate sensitivity, as global warming from doubled CO2 without feedback, is 0.15 degrees Celsius (by Fourier's Law), rather than the commonly presented 1 C by Stefan-Boltzmann's Law. Basic Thermodynamics of the Atmosphere The basic thermodynamics of global climate is described by the Navier-Stokes equations expressing conservation of mass, momentum and total energy of a fluid, compressible air in the atmosphere and incompressible water in the oceans, subject to gravitation and forcing from radiation and rotation. We analyze basic properties of solutions for the atmosphere using a convenient form of the 2nd Law of Thermodynamics. Quantum Mechanics Why Schrödinger Hated His Equation Schrödinger as the inventor of the foundation of quantum mechanics in the form of the Schrödinger wave equation never accepted the statistical particle Copenhagen interpretation of his invention. Many-Minds Quantum Mechanics A computational version of quantum mechanics in the spirit of the Hartree method is presented in which each electron in a multi-electron system updates its own state over time by solving its own Schrödinger equation in three-dimensional space, expressing the attraction from the kernels (nuclei) and the repulsion from the other electrons, which defines a stable configuration of the system by computation (a minimal computational sketch of such an iteration is given at the end of this index). In this many-minds model the full many-dimensional wave-function is not computed (and thus does not exist), and Pauli's exclusion principle is replaced by a stability requirement. A parallel is made with a group of people interacting pairwise, which can be described as a set of individual states while the total multi-dimensional interaction is determined by nobody (and thus does not exist). 
The Microscopic World Cannot Be a Casino According to the dominating Copenhagen interpretation of quantum mechanics, the elementary pointlike particles of modern physics are interacting by playing games of roulette. Macroscopic physics is thus considered to be based on microscopic roulette wheels. However, a roulette wheel has its own microscopics as a necessary requirement for unpredictability, which leads into microscopics of microscopics with elementary particles which are not elementary. We suggest an alternative interpretation of quantum mechanics as complex interaction of wave functions which are not game addicted. Are All Grey Cats Identical? We exhibit difficulties of the concept of identical particles in the probabilistic Copenhagen interpretation of quantum mechanics. The Dark Age of the Uncertainty Principle Heisenberg's Uncertainty Principle is the signum of a Dark Age of modernity with nobody understanding what physicists (and politicians) are saying. Waves or Particles or Both? On microscopic atomistic scales only waves can exist, because microscopic particles must have their own microscopics, and microscopics upon microscopics does not make sense. The Brainwash by Bohr Quantum mechanics based on a linear Schrödinger equation with multi-dimensional wave function leads into the non-physical Copenhagen Interpretation threatening to take physics into a dead end. Formulating instead the Schrödinger equation as a non-linear system of three-dimensional wave functions opens to a physical interpretation and new possibilities. The Desperation of Planck In an act of desperation to motivate his modification of Wien's displacement law of blackbody radiation, Planck in 1900 introduced the idea of a smallest package of energy named quantum of energy, and thus gave birth to modern physics but also prepared the end of physics. Observation vs Computation in Quantum Mechanics A child of age less than a year likes to play peekaboo, the thrill being that the world seems to cease to exist when the eyes are closed and miraculously reappears when the eyes are opened. At the age of two a child understands that the world exists independently of being observed. In the Copenhagen Interpretation of quantum mechanics, physical reality only exists to the extent it is being observed, and thus is an ideal playground for peekaboo. But computation changes the game… Many-Minds Not Many-Worlds Quantum Mechanics The many-worlds interpretation of quantum mechanics lacks reason and thus is not science. Quantum Contradictions Theory of Relativity Many-Minds Relativity A theory of relativity is presented, which is physical, in contrast to Einstein's special theory of relativity, which is non-physical. Is One Dollar = One Euro? The principles of constancy of the speed of light and equivalence of inertial and gravitational mass underlying theories of relativity have an ambiguous character of being both analytic/true by definition and synthetic/expressing properties of physical systems. The ambiguity causes confusion. Did Einstein Not Understand Math? What can you expect from a mathematical theory developed by someone who does not understand mathematics? Theory of Relativity from Relativity of Simultaneity Reality vs Illusion in Physics Stars and Planets Find Their Way Without GPS Why Time Dilation (and Special Relativity Theory) Is Illusion Louis Essen: Relativity Not a Theory Universal Rate of Time – Yes. 
Universal Time – No New View of Motion under Gravitation vs Einstein's View Einstein's Relativity Theory as Formalistic Absolutism without Relativity Physics Illusion 2: Photons as Light Particles Physics Illusion 3: Explanation of Michelson-Morley Null Result Physics Illusion 4-8 Physics Illusion 9: Instant Action at Distance Not Physical Nor Needed Physics Illusion 10: Fabric of Curved Space-Time Physics Illusion 11: Lorentz Transformation as Holy Doctrine of Physics Physics Illusion 12: Modern vs Classical World Physics Illusion 13: Light as Stream of Photon Particles Physics Illusion 14: Gravitational Motion by Instant Action at Distance
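As promised in the Many-Minds Quantum Mechanics entry above, here is a minimal computational sketch of a Hartree-style iteration (in Python, for a 1d toy atom with two electrons; the softened potentials, grid and parameters are illustrative choices, not taken from the papers listed): each electron repeatedly solves its own one-electron Schrödinger eigenproblem in the potential of the kernel plus the repulsion generated by the other electron's density, until a stable configuration is reached.

```python
import numpy as np

n = 200
x = np.linspace(-10.0, 10.0, n)
h = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 as a second-difference matrix
T = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2.0 * h**2)

v_nuc = -2.0 / np.sqrt(x**2 + 1.0)  # softened attraction from a charge-2 kernel

def ground_state(v):
    """Normalized lowest eigenstate of -(1/2) d^2/dx^2 + v(x)."""
    _, vecs = np.linalg.eigh(T + np.diag(v))
    psi = vecs[:, 0]
    psi = psi * np.sign(psi[np.abs(psi).argmax()])  # fix arbitrary eigenvector sign
    return psi / np.sqrt(h * np.sum(psi**2))

def repulsion(psi):
    """Softened Coulomb repulsion generated by the other electron's density."""
    rho = psi**2
    return np.array([h * np.sum(rho / np.sqrt((x - xi)**2 + 1.0)) for xi in x])

psi1 = ground_state(v_nuc)          # start both electrons in the bare-kernel state
psi2 = psi1.copy()
for _ in range(30):                 # iterate: each electron responds to the other
    psi1 = ground_state(v_nuc + repulsion(psi2))
    psi2 = ground_state(v_nuc + repulsion(psi1))

# Overlap near 1 means the iteration has settled into a stable configuration
print("converged overlap <psi1|psi2>:", h * np.sum(psi1 * psi2))
```

Each electron's equation lives in ordinary space; the full multi-dimensional wave function is never formed, which is the point of the many-minds model.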
Who Killed Schrödinger's Cat? In 1935 Erwin Schrödinger described a hypothetical experiment to show that something is incorrect with the traditional analysis of Quantum Mechanics. Schrödinger's cat quickly emerged as the most famous example of what is now called the measurement problem, "the most controversial problem in physics today" [2], with more than 30 YouTube video clips devoted to it. (Less well-known is that Einstein suggested a similar bomb experiment to make the same point, stating "a sort of blend of not-yet and already-exploded systems [can not be] a real state of affairs". [3]) The measurement problem appears because QM does not offer a picture of reality when no one is looking. Instead we have particles that are neither here nor there, states that are in superpositions, and equations that merely provide probabilities. Most physicists strongly believe that these superpositions are real, and several even acknowledge that the cat can be both half dead and half alive. Then there are physicists who opt not to talk about reality. I am a positivist who believes that physical theories are just mathematical models we construct, and that it is meaningless to ask if they correspond to reality, just whether they predict observations. - Stephen Hawking [4] Something was clearly missing. That something came along later in the form of Quantum Field Theory - a theory that does offer a picture of reality, even when no one is looking. However there are numerous explanations and understandings of Quantum Field Theory, while some physicists reject it completely. For instance, N. David Mermin wrote in Physics Today, "I hope you will agree that you are not a continuous field of operators on an infinite-dimensional Hilbert space" [5], and Meinard Kuhlmann wrote in Scientific American, "quantum field theory … sounds like a theory of fields. Yet the fields supposedly described by the theory are not what physicists classically understand by the term field". [6] Among those who accept Quantum Field Theory, most follow Richard Feynman's method based on particles and virtual particles, while Julian Schwinger's (and Sin-Itiro Tomonaga's) version, which is based only on fields, is much less well-known. [7] Surprisingly enough, Frank Wilczek discloses that Feynman later changed his mind: Feynman told me that when he realized that his theory of photons and electrons is mathematically equivalent to the usual theory, it crushed his deepest hopes … He gave up when, as he worked out the mathematics of his version of quantum electrodynamics, he found the fields, introduced for convenience, taking on a life of their own. He told me he lost confidence in his program of emptying space. [8] Although both approaches lead to the same equations, the physical pictures are very different. It is Schwinger's Quantum Field Theory that we refer to in this article, but since this version is so little known, we need to first give a brief description. Definition of field. A field is a property of space. This idea was proposed by Michael Faraday in 1845 as an explanation for electric and magnetic forces. However the concept that space has properties was not easy to accept, so when James Maxwell predicted the existence of EM waves in 1864, an ether was invented to carry the waves. It took many years before the ether was dispensed with and physicists accepted that space itself has properties: Click here and learn more. 
"Spooky action at a distance", as Einstein called it, refers to the experimental fact that particles can affect each other instantly, even when separated by large distances. For example, if two photons are produced together in what is referred to as an entangled state and the angular momentum of one is altered, then the angular momentum of the other one will adjust in a corresponding manner at the same time, no matter how far away from each other the particles are. This "spooky" behavior has been known for almost a hundred years and is still a source of confusion. Yet there is a theory in which the result is not spooky, but rather a natural consequence. I'm referring to Quantum Field Theory, which describes a world constructed only of fields, with no particles. What we call a particle is really a piece, or quantum, of a field. Quanta are not localized like particles, but are spread out through space. For example, photons are pieces of the electromagnetic field and protons are pieces of the matter field. These quanta evolve in a deterministic way as per the basic field equations, and there is a term in these equations that limits the speed of propagation to the velocity of light. Even so, the QFT equations don't tell the whole story. There are events that are not explained by the field equations - for example, when a field quantum transfers energy or momentum to a different object. This event is non-local in the sense that the change in, or even disappearance of, the quantum happens immediately, no matter how spread out the field may be. It can even happen with two entangled quanta - no matter how much they are separated. In QFT, this is essential if each quantum is to act as a unit, as per the fundamental basis of QFT. There is a big difference between quantum collapse in QFT and wave-function collapse in QM. The former is a real physical change in the fields while the latter is a change in our knowledge. Even though we don't have a theory to describe quantum collapse, there is nothing inconsistent about it. To quote from Fields of Color: The theory that escaped Einstein: In QFT the photon is a spread-out field, and the particle-like behavior takes place because each photon, or quantum of field, is consumed as a unit … It is a spread-out field quantum, but when it is taken in by an atom, the entire field disappears altogether, no matter how spread-out it is, and all its energy is placed into the atom. There is a big "whoosh" and the quantum is gone, like an elephant disappearing from a magician's stage. Quantum collapse is not a very simple concept to accept - perhaps more difficult than the concept of a field. Here I have been working hard, trying to persuade you that fields are a real property of space - indeed, the only reality - and now I am asking you to accept that a quantum of field, spread out as it may be, quickly disappears into a tiny absorbing atom. But still it is a process that can be visualized without inconsistency. In fact, if a quantum is an entity that lives and dies as a unit, which is the very meaning of quantized fields, then quantum collapse must occur. A quantum cannot divide and put half its energy in one area and half in another; that would violate the fundamental quantum principle. While QFT does not provide an explanation for when or why collapse occurs, some day we may have a theory that does. In any case, quantum collapse is important and has been confirmed experimentally. 
Some physicists, including Einstein, have been bothered by the non-locality of quantum collapse, claiming that it goes against a fundamental postulate of Relativity: that nothing can be transferred more quickly than the speed of light. Now Einstein's postulate (which we must remember was only a guess) is certainly valid in relation to the evolution and propagation of fields as described by the field equations. Having said that, quantum collapse is not described by the field equations, so there is no reason to assume or to insist that it falls in the domain of Einstein's postulate. Learn more here. Dear New York Times In the write-up ("With faint chirp, scientists prove Einstein correct", p. A1, 2/12/16) we read that black holes were part of Einstein's theory. The reality is quite different. "Einstein argued vigorously against black holes [as] incompatible with reality" (see "Black Holes" by R. Anderson), and his opposition held back their acceptance for many years. Einstein was also mistaken when he rejected Quantum Field Theory. According to his biographer A. Pais, "QFT was repugnant to him". This is ironic because QFT, and only QFT, reveals and resolves the paradoxes of Relativity and Quantum Mechanics that most people struggle with (see "Fields of Color: The theory that escaped Einstein" by this writer). Quite possibly the most significant irony is the statement, "according to Einstein's theory, gravity is caused by objects warping space and time". While that is what everybody accepts today, the truth is that Einstein recognized gravity as a force field, similar to electromagnetic fields, except that it is produced by mass, not charge. That an oscillating mass generates gravitational waves is no more incomprehensible or unexpected than that electromagnetic waves are produced when electrons move back and forth in an antenna. To Einstein, curvature was actually a consequential result, similar to the changes in space and time produced by motion according to his Special theory of Relativity. Black holes. Contrary to many reports, black holes were actually not part of Einstein's theory. In fact Einstein argued strongly against black holes [as] incompatible with reality, and his opposition held back their acceptance for many years. Synopsis. Gravitational waves are easy to understand if you accept gravity as a force field, similar to the electromagnetic field (QFT). And while the contraction effect is more subtle, it is not that much different from the FitzGerald-Lorentz (F-L) contraction that has been accepted for over a hundred years. Read more here… Recent Physics Theory Solves Paradoxes By Rodney Brooks Julian Schwinger's Insight to Physics And yet, there is a theory that makes perfect sense and can be understood by any person. This theory, with roots in the 1930s, was ultimately developed by Julian Schwinger, who once had been called "the heir-apparent to Einstein's mantle". This accomplishment happened a number of years after Schwinger had already achieved physics fame for solving the "renormalization" problem, described by the NY Times as "the most important development in the last 20 years", for which he was duly awarded the Nobel prize. Still, for Schwinger this was not good enough. He believed that Quantum Field Theory, as it stood then, was still lacking. His objective was to treat matter fields and force fields on an equal basis. After several years of hard work, he published a series of five papers called "The theory of quantized fields" in 1951-54. 
Physicists have been fighting a particles-vs.-fields battle for over 100 years. There have been 3 "rounds", starting when Einstein's concept of light as a particle (called the photon) triumphed over Maxwell's belief that light is a field. Round 2 happened when Schrödinger's hope for a field theory of matter was overcome by the particle-like behavior that physicists could not ignore. And round 3 took place when Schwinger's field-based solution of renormalization was usurped by Feynman's easier-to-use particle-based approach. For that reason, and others, Schwinger's final development of Quantum Field Theory, which he regarded as far more noteworthy than his Nobel prize work, has been sadly ignored, and is indeed unknown to most physicists - and to virtually all of the general public. Fortunately there are signs that QFT, in the true Schwingerian sense, is reemerging, so in this sense it is a "new" theory. There have been numerous books and articles, such as "The Lightness of Being" by Nobel laureate Frank Wilczek, "There are no particles, there are only fields" by Art Hobson, and "Fields of Color - The theory that escaped Einstein" by Rodney Brooks. The last one explains QFT to a lay reader, without any equations, and shows how this terrific "new" theory resolves the paradoxes of Relativity, Quantum Mechanics and physics that have confused so many people. Discover more here! By Rodney A. Brooks, author of "Fields of Color: The Theory That Escaped Einstein". The recent discovery of gravitational waves at LIGO (Laser Interferometer Gravitational-Wave Observatory) has captured the mind of the public. It will stand as one of the great accomplishments of experimental physics, alongside the famous Michelson-Morley experiment of 1887, which it resembles. In fact, by comparing these two experiments, you will see that understanding gravitational waves is not as difficult as you might believe. Contraction. Michelson and Morley measured the speed of light at different times as the earth moved around its orbit. To their - and everyone's - surprise, the speed turned out to be constant, independent of the earth's motion. This breakthrough caused great consternation until George FitzGerald and Hendrik Lorentz came up with the sole feasible explanation: objects in motion contract. Einstein then showed that this contraction is a consequence of his Principles of Relativity, but without saying why objects contract (other than a need to conform to his Principles). In fact Lorentz had previously provided a partial explanation by showing that motion affects the way the electromagnetic field interacts with charges, causing objects to contract. However it wasn't until Quantum Field Theory came along that a full explanation was found. In QFT, at least in Julian Schwinger's version, everything is made of fields, even space itself, and motion affects the way all fields interact. Waves. Electromagnetic waves, e.g., radio waves, have long been recognized and accepted as a natural phenomenon of fields. Now in QFT gravity is a field and, just as an oscillating electron in an antenna sends out radio waves, so a substantial mass moving back and forth will send out gravitational waves. But it didn't take QFT to show this. Einstein also believed that gravity is a field that obeys his equations, just as the EM field obeys the equations of James Maxwell. In fact gravitational waves have been accepted by many physicists, from Einstein on down, who see gravity as a field. Curvature. 
But what about "curvature of space-time", which many people today say is what produces gravity? You may be shocked to learn that's not how Einstein saw it. He believed that the gravitational field causes things, even space itself, to contract, comparable to the way motion causes contraction. In fact Einstein used this analogy to show the correlation between motion-induced and gravity-induced contraction: they both affect the way fields interact. It is this gravity-induced contraction that is sometimes known as "curvature". Evidence. The first detection of gravitational waves was made at LIGO, using an apparatus similar to Michelson's and Morley's. In both experiments the time for light to travel along two perpendicular paths was compared, but because the gravitational field is much weaker than the EM field, the distances in the LIGO apparatus are much greater (miles instead of inches). Another difference is that while Michelson, not knowing about motion-induced contraction, expected to see a shift (and found none), the LIGO staff used the known gravity-induced contraction to detect a change when a gravitational wave passed through. Fields of Color: The theory that escaped Einstein explains Quantum Field Theory to a lay audience, without any math. If you want to learn more about gravitational waves or about how QFT resolves the paradoxes of Relativity and Quantum Mechanics, read Chapters 1 and 2, which can be seen free at http://quantum-field-theory.net. Learn more here! Quantum Field Theory - A Solution to the "Measurement Problem". Definition of the "Measurement Problem". A significant question in physics these days is "the measurement problem", also known as "collapse of the wave-function". The issue arose in the early days of Quantum Mechanics as a result of the probabilistic nature of the equations. Because the QM wave-function describes merely probabilities, the outcome of a physical measurement can only be calculated as a probability. This naturally brings about the question: When a measurement is made, at exactly what point is the ultimate result "decided upon"? Some people believed that the role of the observer was critical, and that the "decision" was made when someone looked. This led Schrödinger to design his well-known cat experiment to demonstrate how ludicrous such an idea was. It is not generally known, but Einstein also proposed a bomb experiment for the same reason, saying that "a sort of blend of not-yet and already-exploded systems … can not be a real state of affairs, for in reality there is just no intermediary between exploded and not-exploded." Later, Einstein remarked, "Does the moon exist only when I look at it?" The controversy continues to this day, with a few individuals still thinking that Schrödinger's cat remains in a superposition of dead and alive until somebody looks. On the other hand, most people believe that the QM wave-function "collapses" at some earlier point, before the uncertainty reaches a macroscopic level - with the definition of "macroscopic" being the primary question (e.g., GRW theory, Penrose Interpretation, Physics forum). A few people take the "many worlds" perspective, in which there is no "collapse", but a splitting into various worlds that include all possible histories and futures. There have been a lot of experiments designed to address this question, e.g., "Towards quantum superposition of a mirror". 
We will now find that an unequivocal solution to this question is supplied by Quantum Field Theory. But because this theory has been neglected or misunderstood by many physicists, we need to first specify what we mean by QFT. Definition of Quantum Field Theory. The Quantum Field Theory referred to in this post is the Schwinger version, in which there are no particles, there are only fields - not the Feynman version, which is based on particles. The two versions are mathematically equivalent, but the concepts behind them are very different, and it is the Feynman version that is preferred by the majority of Quantum Field Theory physicists. In Quantum Field Theory, as we will use the term from here on, the world is comprised of fields and only fields. Fields are defined as properties of space or, to put it in a different way, space is comprised of fields. The field concept was introduced by Michael Faraday in 1845 as an explanation for electric and magnetic forces. Even so, the idea was not easy to accept, and so when Maxwell showed that his equations predicted the existence of EM waves, the concept of an ether was introduced to carry the waves. These days, however, it is generally accepted that space can possess properties: To deny the ether is essentially to presume that empty space has no physical qualities whatsoever. The key realities of mechanics do not harmonize with this view. - A. Einstein (R2003, p. 75). Moreover space-time on its own had emerged as a dynamical medium - an ether, if there ever was one. - F. Wilczek ("The persistence of ether", Physics Today, Jan. 1999, p. 11). Although the Schrödinger equation is the non-relativistic limit of the Dirac equation for matter fields, there is an important and fundamental difference between Quantum Field Theory and Quantum Mechanics. One describes the strength of fields at a given point; the other describes the probability that particles could be found at that point, or that a given state exists… For the rest of this article visit the blog at Fields of Color! The Uncertainty Principle The probabilistic interpretation of Schrödinger's equation ultimately brought about the uncertainty principle of Quantum Mechanics, formulated in 1927 by Werner Heisenberg. This principle states that an electron, or any other particle, cannot have its exact position known, or even specified. More exactly, Heisenberg derived a formula that relates the uncertainty in position of a particle to the uncertainty of its momentum. So not only do we have wave-particle duality to deal with, we must deal with particles that might be here or might be there, but we just can't say where. If the electron is actually a particle, then it only stands to reason that it must be somewhere. Resolution. In Quantum Field Theory there are no particles (stop me if you have heard this before) and hence no position - certain or uncertain. Instead there are blobs of field that are spread out over space. As opposed to a particle that is either here or there, we have a field that is here and here and there. Spreading out is something that only a field can do; a particle cannot do this. Actually Heisenberg's Uncertainty Principle is not very different from Fourier's Theorem (dating from 1807), which relates the spatial spread of any wave to the spread of its wavelengths. 
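To make the Fourier connection explicit (a standard textbook relation, added here for illustration; it is not spelled out in the original post): for any square-integrable wave packet the spreads in position and in wave number satisfy

$$\Delta x \, \Delta k \ge \frac{1}{2},$$

and since quantum mechanics assigns momentum $p = \hbar k$, this is precisely Heisenberg's relation

$$\Delta x \, \Delta p \ge \frac{\hbar}{2}.$$

For a field, in other words, the uncertainty principle is just the mathematics of waves.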
This does not mean that there is no uncertainty in Quantum Field Theory. There is uncertainty in relation to field collapse, but field collapse is not explained by the equations of QFT; Quantum Field Theory can only predict probabilities of when it happens. Nevertheless there is a significant distinction between field collapse in QFT and the corresponding wave-function collapse in QM. The former is an actual physical change in the fields; the latter is only a change in our understanding of precisely where the particle is… For the full article visit the Fields of Color Blog.
Pick a flower on Earth and you move the farthest star. - Paul Dirac [1] Paul Adrien Maurice Dirac was born in Bristol, England on August 8th, 1902, the second son of Charles Dirac, a Catholic Swiss citizen of French extraction, and Florence Dirac, née Holten, a British Methodist. Paul had an older brother, Felix, and a younger sister, Betty. Dirac's father, Charles, was a French teacher and a staunch disciplinarian; he insisted that young Paul address him only in strictly correct French. Dirac found that he preferred to remain silent. For the rest of his life, he was notoriously shy and notoriously quiet. He would often give one word answers to questions, or no answer at all.  It was said that this silence was a result of his childhood, when his father would allow him only to speak perfect French at meal times. That may be true, but I suspect he would have been silent even without that. But when he did speak, it was all the more worth hearing. - Stephen Hawking [2] Dirac entered the University of Bristol in 1918, and studied electrical engineering. After graduating from Bristol he was unable to find a job. He was accepted by Cambridge but could not afford the tuition, and so he returned to Bristol University, where he studied applied math for two years. In 1923 he received a scholarship from Cambridge that allowed him to study there. Even before receiving his doctorate, Dirac helped to establish the theoretical and mathematical foundations of the new quantum theory. While Dirac preferred to work alone and rarely collaborated on papers, the development of quantum mechanics in the 1920s was a group effort: Heisenberg, Born, Pauli, Jordan, Schrödinger, and a handful of others were constantly reading each other's papers, trying to make sense of each other's various approaches, competing against one another to solve the various mathematical, theoretical, and interpretation issues created by the development of the new theory, and competing to get their solutions into print first. Photo: Dirac, Landau, Darwin, Rosenkevich, Richardson, Ivanenko, Frenkel, Frank, Debye and Pohl on a boat on the Volga, Russia, June 1928 (AIP Emilio Segre Visual Archives, Leon Brillouin Collection). Dirac was the first to recognize the relationship between the Poisson bracket of classical Hamiltonian mechanics and the commutation relations of operators in quantum mechanics. The exact nature of this relationship is still an active area of research (now known as "deformation quantization"). In 1925, Dirac's older brother, Felix, killed himself. Dirac's father Charles was devastated, never fully recovering from the blow. Dirac's parents did not get along. His mother spoke only English and his father only French. His father treated his mother like a nurse and a maid, and secretly carried on affairs. Dirac hated going back to his parents' house. In 1927, Dirac began to try to develop a version of quantum mechanics that was consistent with Einstein's special theory of relativity. When Dirac mentioned to Bohr that he was working on this problem, Bohr responded that Oscar Klein had already solved it. 
In fact, the equation now known as the Klein-Gordon equation had originally been derived by Schrödinger--who rejected it when it gave predictions inconsistent with experiment--before he derived what came to be known as ''Schrödinger's equation.'' Schrödinger had found an equation that gave the right results for non-relativistic quantum particles, but, because it was linear in the time derivative and quadratic in the space derivatives, and hence treated space and time in fundamentally different ways, it was not consistent with Einstein's special theory of relativity, in which space and time are related by Lorentz ''boosts'' which mix time and space in a certain algebraically well-defined manner. Any such attempt to mix time and space would turn the Schrödinger equation into gibberish. The Klein-Gordon equation was quadratic in both time and space derivatives, and so was invariant under Lorentz boosts, but it seemed to violate the tenets of quantum mechanics by leading to negative probability densities. Furthermore, it could not account for electron spin.

Schrödinger had obtained his equation by means of an educated guess: in classical mechanics, for a particle with no potential energy, the total energy E is related to its mass m and momentum p through the equation:

$$E = \frac{p^2}{2m}$$

Schrödinger treated energy and momentum as operators, with the correspondence (in natural units, where Planck's constant $\hbar = 1$):

$$E \to i\frac{\partial}{\partial t}, \qquad p \to -i\nabla$$

Substituting these relations into the energy equation gives the (time-dependent) Schrödinger equation (with zero potential energy):

$$i\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\nabla^2 \psi$$

In special relativity, the energy and momentum of a particle (again with zero potential energy) together form a four-vector with length equal to the particle mass. In natural units, with c = the speed of light = 1, this gives:

$$E^2 - p^2 = m^2$$

Using Schrödinger's educated guess about the operators that correspond to energy and momentum in quantum mechanics leads to what is now called the Klein-Gordon equation:

$$-\frac{\partial^2 \psi}{\partial t^2} + \nabla^2 \psi = m^2 \psi$$

The Klein-Gordon equation is consistent with relativity in a way that Schrödinger's equation is not, but because it is not linear in the time derivative, it can give quantum mechanical results that make no sense. Dirac therefore sought a relativistic equation for the electron that was linear in the time derivative. Consider the equation:

$$i\gamma^\mu \partial_\mu \psi = m\psi$$

where i is the square root of minus one, the ''gammas'' are some set of not necessarily commutative operators, and ψ (''psi'') is a wave function. As an operator equation this says:

$$i\gamma^\mu \partial_\mu = m$$

If you square both sides of this equation you obtain:

$$-\gamma^\alpha \gamma^\beta \,\partial_\alpha \partial_\beta = m^2$$

This will reproduce the Klein-Gordon equation if:

$$\tfrac{1}{2}\left(\gamma^\alpha \gamma^\beta + \gamma^\beta \gamma^\alpha\right)\partial_\alpha \partial_\beta = \eta^{\alpha\beta}\,\partial_\alpha \partial_\beta$$

In other words, if the set of gamma operators satisfy the anticommutation relations:

$$\gamma^\alpha \gamma^\beta + \gamma^\beta \gamma^\alpha = 2\,\eta^{\alpha\beta}$$

where alpha and beta range from zero to three, and eta is the Minkowski metric, i.e.:

$$\eta = \mathrm{diag}(1,\,-1,\,-1,\,-1)$$

This turns out to be the defining relation for a Clifford algebra. Dirac had learned about Hamilton's quaternions during his mathematical studies. Quaternions are one particularly simple example of a Clifford algebra (in fact they are simple in a strict mathematical sense), but he did not seem to have known about the general theory of Clifford algebras. He essentially rediscovered it. If the set of gamma operators do indeed satisfy the above anticommutation relations, then the equation

$$i\gamma^\mu \partial_\mu \psi = m\psi$$

is a relativistic equation for the electron that is consistent with quantum mechanics. It is the Dirac equation, discovered and published in 1928. Solutions of the Dirac equation split into two parts: a spin-up part and a spin-down part.
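The gamma operators above are abstract, but a concrete set of 4×4 matrices makes the defining relation easy to verify by direct computation. Below is a minimal sketch in Python with NumPy; the choice of the standard Dirac representation, and the code itself, are my illustration rather than part of the original derivation.

# Check that the Dirac-representation gamma matrices satisfy
#   gamma^a gamma^b + gamma^b gamma^a = 2 eta^{ab} I
# for the Minkowski metric eta = diag(1, -1, -1, -1).
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [
    np.block([[I2, Z2], [Z2, -I2]]),   # gamma^0
    np.block([[Z2, sx], [-sx, Z2]]),   # gamma^1
    np.block([[Z2, sy], [-sy, Z2]]),   # gamma^2
    np.block([[Z2, sz], [-sz, Z2]]),   # gamma^3
]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for a in range(4):
    for b in range(4):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))

print("All 16 anticommutation relations hold.")

Any other set of matrices related to these by a similarity transformation works just as well; the physics depends only on the algebra, not on the particular representation.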
This two-component structure is exactly what you want for a spin one-half particle, such as an electron--the only spin one-half particle known at the time--and is not true, in general, for a solution to the Klein-Gordon equation. So Dirac had found an equation that was both (special) relativistic and quantum mechanical, and which accounted for electron spin.

In a sense, the solution to the Dirac equation is the periodic table. That is, if you ask the Dirac equation what energy levels electrons can have, the answer it gives you corresponds to the energy levels of electrons in the various shells of atoms in the periodic table (a small numerical illustration of the hydrogen spectrum appears below). This was a major triumph, one of the greatest in the history of science.

There was, however, a problem. While the Dirac equation only gave positive probabilities, it also gave positive probabilities to states where electrons had negative energy. In classical mechanics this is not a problem--stars, for example, have negative total energy when considered classically--but in quantum mechanics it is. An electron can jump to a lower energy level, and if negative energy levels are allowed, an electron can keep jumping down the ''energy well'' forever. If this were possible, the entire universe would vanish in a great explosion of light.

Dirac attempted to resolve this paradox by offering a novel interpretation. In 1927, he had sought to develop a theory of multiple-particle (i.e. multiple electron and photon) quantum interactions--a predecessor to modern quantum field theory--by considering the quantum vacuum to be full of an infinity of unobservable photons. In this view, when an electron appears to emit a photon, that photon, which had previously been in an unobservable state, moves into an observable state and so seems to ''appear'' out of nowhere. Similarly, when an electron appears to absorb a photon, a photon that is observable shifts into an unobservable state.

The light-quantum has the peculiarity that it apparently ceases to exist when it is in one of its stationary states, namely, the zero state, in which its momentum, and therefore also its energy, are zero. When a light-quantum is absorbed it can be considered to jump into this zero state, and when one is emitted it can be considered to jump from the zero state to one in which it is physically in evidence, so that it appears to have been created. Since there is no limit to the number of light-quanta that may be created in this way, we must suppose that there are an infinite number of light-quanta in the zero state [3] . . .

Dirac now proposed something similar for negative energy electrons. Because electrons obey the Pauli exclusion principle, no two electrons can be in the same state--or, put differently, no more than two electrons (one spin up and one spin down) can be at the same energy level. Dirac therefore proposed that there is an unobservable negative-energy electron sea that is already filled. Positive energy electrons cannot jump to negative energy states, because all those energy states are already filled. Occasionally, however, a negative energy state might ''open up.'' Dirac argued that this hole in the negative-energy electron sea would look like a particle with the opposite charge to the electron. Dirac originally hoped that these ''holes'' in the negative-energy electron sea might be protons--electrons, protons, and photons were the only ''fundamental'' particles known at the time (it is now known that the proton is not fundamental).
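As promised above, a small illustration of the energy-level claim. For hydrogen-like atoms the bound-state energies of the Dirac equation have a closed form, the Sommerfeld/Dirac fine-structure formula. The sketch below is my own illustration, not the article's, and the names in the code are made up for the example.

# Dirac bound-state energies for a hydrogen-like atom, in units of the
# electron rest energy m c^2 (Sommerfeld/Dirac fine-structure formula).
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def dirac_energy(n, j, Z=1):
    """E/(m c^2) for principal quantum number n, total angular momentum j."""
    za = Z * ALPHA
    kappa = j + 0.5
    denom = n - kappa + math.sqrt(kappa ** 2 - za ** 2)
    return 1.0 / math.sqrt(1.0 + (za / denom) ** 2)

MC2_EV = 510998.95  # electron rest energy in eV
for n, j, label in [(1, 0.5, "1s(1/2)"), (2, 0.5, "2s/2p(1/2)"), (2, 1.5, "2p(3/2)")]:
    print(label, (dirac_energy(n, j) - 1.0) * MC2_EV, "eV")

The output reproduces the familiar -13.6 eV ground state and, unlike the Schrödinger equation, splits 2p(1/2) from 2p(3/2): the fine structure. (The tiny residual splitting between 2s(1/2) and 2p(1/2) observed in nature, the Lamb shift, requires full quantum field theory.)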
But Pauli and Hermann Weyl both demonstrated that Dirac's holes would have to have the same mass as the electron, whereas protons were known to be much more massive. Dirac was nervous about this result, and cautious about actually predicting a new, as-yet-unseen particle. But in fact, that is what his equation did. In particular, it predicted the existence of anti-matter. In 1932, Carl Anderson at Caltech discovered the positron, a fundamental particle with the same mass and spin states as the electron, but with opposite electric charge. Dirac's theory had been vindicated. In 1933, Dirac and Erwin Schrödinger were jointly awarded the Nobel Prize in physics. (Anderson was awarded the Nobel Prize in 1936.)

The theory of the negative-energy electron sea, however, has now been replaced by the more general notion of the quantum field. From the perspective of field theory, both the electron and the positron are merely specific excitations of the quantum field. The negative-energy electron sea has been replaced by the (perhaps equally extravagant) infinite sea of ''virtual'' particles.

While Dirac was in Stockholm to receive the Nobel Prize, a journalist asked him whether his theory had any practical ramifications. He answered: ''My work has no practical significance [1].'' That may have been true at the time, but today anti-matter is utilized for medical technology in the form of PET scans (the ''P'' stands for positron), it plays an important part in observational astronomy, may hold the key to propulsion for interstellar travel, and is related to one of the most fundamental mysteries in the standard model of cosmology, namely: why is there more matter in the observable universe than anti-matter?

When a particle of matter meets a particle of anti-matter, the two particles annihilate each other, leaving only photons. (This is exactly what would have happened in Dirac's old infinite negative-energy electron sea model, when an electron filled a ''hole.'') The current theory is that there was a slight over-density of matter over anti-matter in the primordial synthesis of particles following the big bang, amounting to roughly one part in a billion. The matter and anti-matter then mutually annihilated, leaving us with the matter-dominated universe we see today, and the light which is now the 2.73 K cosmic microwave background radiation. But why the over-density of matter over anti-matter? And is this a generic feature of the entire universe, or only of the part that we can observe?

Dirac spent 1934-1935 on sabbatical in Princeton. There he met Manci Wigner, sister of the physicist Eugene Wigner. The two became friendly and continued to meet and exchange letters after Dirac returned to Cambridge. In 1937 they were married. This was Dirac's first serious relationship, but Manci's second marriage. She had two children from her previous marriage, and she and Dirac eventually had two more.

Dirac was elected a Fellow of the Royal Society in 1930, and in 1932 was made Lucasian Professor of Mathematics at Cambridge, a post once occupied by Newton, and today occupied by Stephen Hawking. Dirac held this position until 1969, when he retired from Cambridge and moved to the United States. He spent the rest of his life in Florida, at the University of Miami, and at Florida State University in Tallahassee. Dirac died on October 20th, 1984, in Tallahassee, Florida.
[Photos: Dirac; Heisenberg; Dirac & Feynman]

The physicist, in his study of natural phenomena, has two methods of making progress: (1) the method of experiment and observation, and (2) the method of mathematical reasoning. The former is just the collection of selected data; the latter enables one to infer results about experiments that have not been performed. There is no logical reason why the second method should be possible at all, but one has found in practice that it does work and meets with reasonable success. This must be ascribed to some mathematical quality in Nature, a quality which the casual observer of Nature would not suspect, but which nevertheless plays an important role in Nature's scheme [5].

As time goes on, it becomes increasingly evident that the rules which the mathematician finds interesting are the same as those which Nature has chosen.

[1] Farmelo, Graham (2009). The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom. Basic Books, New York.
[2] Goddard, Peter, et al. (1998). Paul Dirac: The Man and His Work. Cambridge University Press, Cambridge.
[3] Dirac, P. A. M. (1927). The Quantum Theory of Emission and Absorption of Radiation. Reprinted in [6].
[4] Dirac, P. A. M. (1931). Quantised Singularities of the Electromagnetic Field. Reprinted in [6].
[5] Dirac, P. A. M. (1939). The Relation between Mathematics and Physics.
[6] Dirac, P. A. M., edited by R. H. Dalitz (1995). The Collected Works of P. A. M. Dirac 1924-1948. Cambridge University Press, Cambridge.
[7] Gamow, George (1966). Thirty Years That Shook Physics. Dover, New York.
[8] http://www.dirac.ch/PaulDirac.html
[9] http://biogrph.wordpress.com/article/paul-dirac-1x4qvbqoz9orn-47/
[10] http://cerncourier.com/cws/article/cern/28693
[11] http://www.physics.fsu.edu/awards/nobel/dirac/RememberingDirac
Monday, 30 September 2013

Charles the Obscure, the one you never heard of, but should have.

My good and generous friend, Dave Renfro, sometimes finds time in his busy writing and research schedule to send me copies of some of the old documents he's working through. Recently a collection from him included a 1979 Isis article by J. B. Gough, which I opened only weeks after the anniversary of the death of the unfortunate Jacques Charles, called the Geometer in his lifetime to avoid confusing him with Charles the Balloonist (and sometimes Charles the Inventor), who is J. A. C. Charles, and the namesake of the chemistry law that is sometimes, probably without merit, called Charles' Law.

Unfortunately, the point of Gough's article is that the two did become confused, often due to lack of effort or interest on the part of historical writers, to the point that now you can find little or nothing about the "geometer", and much of what you find about the more famous Charles is, in fact, a mis-credit for the work of Charles the Geometer. I would have assumed that articles like the one by Gough in 1979, and another by the famous science historian Roger Hahn a few years later, would have set the record straight, but in fact as I scanned a couple of biographies on the internet they still contain the residue of the confusion.

One of the first points of confusion is that you may see the date of the famous Charles' induction into the Paris Academy of Sciences given as 1785. This is off by almost a full decade; 1785 is actually the date of the induction of Charles the Geometer. The famous Charles would be inducted in 1795, almost four years after the other Charles had gone to an early grave.

A second, and even more common, error is that you will often still see biographies of the famous Charles that list him as a mathematician, and sometimes add something like, "most of his papers were in mathematics." Charles, the famous, it seems, was NOT a mathematician, and wrote almost nothing, including nothing about mathematics, and only the sketchiest outline of the law which, due to the graciousness of more capable scientists (you can read the name Joseph Louis Gay-Lussac here), would eventually bear his name. J. B. Gough goes so far in his Isis article as to declare that this Charles was "nearly a mathematical illiterate." He points out that of the eight articles credited to J. A. C. Charles by Poggendorff, seven were actually by the more obscure (and more mathematical) Charles.

Gay-Lussac, in his published paper about the law, credits Charles with this statement (English translation): "Before going further, I must jump ahead. Although I had recognized on many occasions that the gases oxygen, nitrogen, hydrogen, carbonic acid, and atmospheric air all expand identically from 0° to 80°, citizen Charles had noticed the same property in these gases 15 years ago; however, since he never published his results, it is only by great luck that I knew it. He had also sought to determine the expansion of water-soluble gases, and he had found for each a particular dilation different from that of other gases. In this respect, my experiments differ strongly from his."

Gough points to evidence of the ongoing confusion as far back as 1870. A donation of the physics lectures of the more famous Charles to the Institut de France prompted a notice in the Comptes Rendus, with a brief description of Charles' life and career, on February 7 of 1870.
Shortly after the publication, a letter to the Perpetual Secretary questioned whether the article had not confused Charles the Balloonist with the Geometer. A follow-up with a brief description of the lives of both men was given in the Comptes Rendus on March 7 of the same year.

So what of the mathematical Charles, who has so sadly been overlooked for several hundred years? It seems that he was born around 1752 in Cluny, in the Burgundy region of France. He apparently attempted to gain entry to the Paris Academy of Sciences, to which both Charleses would eventually belong, at the ripe age of about 18 while still living in Cluny. His article, on a problem in algebra, probably reflecting his youth, was rejected by the Academy as being too elementary. Two years later he submitted a second paper, "sur le dynamique", which impressed the judges, who inferred that the author must be aware of Euler's differential calculus. When it was read to the full meeting of the Academy, Lavoisier's minutes of the meeting list Charles as a Professor of Mathematics at the school at Nanterre, most probably referring to a popular academy in that suburb of Paris that trained young nobles who were intending to proceed to engineering colleges.

Over the period from 1779 to 1785, Charles continued to submit articles to the Academy. In all he submitted seven articles, all of which were deemed appropriate for publication. After the seventh, Condorcet, who had reviewed the paper for the Academy, pointed out that this, and any of the previous six, certainly merited his admission to the Academy. His major obstacle seems to have been the opposition to his appointment by Laplace, who was motivated more by his rivalry with Charles' sponsor, Bossut. Finally a vote on May 11, 1785 (this date is often given as May 12; I use Hahn's date, as few have better records of the history of the Paris Academy) secured Charles his membership. Charles, through his association with Bossut, had already obtained the position of Chair of Hydraulics, which brought with it admission to the Paris Academy of Architecture, which made Charles a dual academician.

Somewhere around 1789 Charles was stricken with a paralysis which greatly affected his ability to write. It is said that he had, for a short while, to request another member to sign him in at meetings. He did manage to learn to write with his other hand, but never with full control. A few years later, in 1791, he died, apparently from the same paralytic problem. Only sketchy records exist of his death and burial due to the confusion created by events related to the revolution. It appears he died on (or near) August 20, 1791, and it is reported that he was buried at Saint-Germain-l'Auxerrois on the 22nd of the same month. A memorial service was held at the Oratoire on Dec 29, 1791. Due to the events of the revolution, no Mémoires of the Paris Academy were produced that year, and hence no obituary for members who died.

I am still trying to learn more about the actual writings of Jacques Charles, the Geometer, and would love to hear from those who have greater knowledge of this subject, and of the man himself.

On This Day in Math - September 30

Big whirls have little whirls,
That feed on their velocity;
And little whirls have lesser whirls,
And so on to viscosity.
~Lewis Richardson

The 273rd day of the year; 273 K (to the nearest integer) is the freezing point of water, 0°C.

1717 Colin Maclaurin (1698–1746), age 19, was appointed to the Mathematics Chair at Marischal College, Aberdeen, Scotland. This is the youngest age at which anyone has been elected chair (full professor) at a university. (Guinness) In 1725 he was made Professor at Edinburgh University on the recommendation of Newton. *VFR

1810 The University of Berlin opened. *VFR It is now called The Humboldt University of Berlin and is Berlin's oldest university. It was founded as the University of Berlin (Universität zu Berlin) by the liberal Prussian educational reformer and linguist Wilhelm von Humboldt, whose university model has strongly influenced other European and Western universities. *Wik

1890 In his desk notes Sir George Biddell Airy writes about his disappointment on finding an error in his calculations of the moon's motion. "I had made considerable advance ... in calculations on my favourite numerical lunar theory, when I discovered that, under the heavy pressure of unusual matters (two transits of Venus and some eclipses) I had committed a grievous error in the first stage of giving numerical value to my theory. My spirit in the work was broken, and I have never heartily proceeded with it since." *George Biddell Airy and Wilfrid Airy (ed.), Autobiography of Sir George Biddell Airy (1896), 350.

1893 Felix Klein visits the World's Fair in Chicago, then visits many colleges. On this day the New York Mathematical Society had a special meeting to honor him. *VFR

1921 William Hamilton Shortt patented the "hit-and-miss" synchronizer for his clocks. The Shortt-Synchronome free pendulum clock was a complex precision electromechanical pendulum clock invented in 1921 by British railway engineer William Hamilton Shortt in collaboration with horologist Frank Hope-Jones, and manufactured by the Synchronome Co., Ltd. of London, UK. They were the most accurate pendulum clocks ever commercially produced, and became the highest standard for timekeeping between the 1920s and the 1940s, after which mechanical clocks were superseded by quartz time standards. They were used worldwide in astronomical observatories, naval observatories, in scientific research, and as a primary standard for national time dissemination services. The Shortt was the first clock to be a more accurate timekeeper than the Earth itself; it was used in 1926 to detect tiny seasonal changes (nutation) in the Earth's rotation rate. *Wik

1929 An early manned rocket-powered flight was made by German auto maker Fritz von Opel. His Sander RAK 1 was a glider powered by sixteen 50-pound-thrust rockets. In it, Opel made a successful flight of 75 seconds, covering almost 2 miles near Frankfurt-am-Main, Germany. This was his final foray as a rocket pioneer, having begun by making several test runs (some in secret) of rocket-propelled vehicles. He reached a speed of 238 km/h (148 mph) on the Avus track in Berlin on 23 May, 1928, with the RAK 2. Subsequently, riding the RAK 3 on rails, he pushed the world speed record up to 254 km/h (158 mph). The first glider pilot to fly under rocket power was another German, Friedrich Stamer, who flew about 3/4 mile on 11 Jun 1928. *TIS

2012 A Blue Moon, the second of two full moons in a single month. August had full moons on the 2nd and 31st; September had full moons on the 1st and 30th. After this month you have to wait until July of 2015 for the next blue moon.
(The Farmer's Almanac uses a different definition of "blue moon": the third full moon in a season of four full moons.) *Wik The next blue moon under the modern definition will occur on July 31, 2015; but in 2013 there was a blue moon using the old Farmer's Almanac definition, on August 21.

1550 Michael Maestlin (30 September 1550, Göppingen – 20 October 1631, Tübingen) was a German astronomer who was Kepler's teacher and who publicised the Copernican system. Perhaps his greatest achievement (other than being Kepler's teacher) is that he was the first to compute the orbit of a comet, although his method was not sound. He found, however, a sun-centered orbit for the comet of 1577, which he claimed supported Copernicus's heliocentric system. He did show that the comet was further away than the moon, which contradicted the accepted teachings of Aristotle. Although clearly believing in the system as proposed by Copernicus, he taught astronomy using his own textbook, which was based on Ptolemy's system. However, for the more advanced lectures he adopted the heliocentric approach - Kepler credited Mästlin with introducing him to Copernican ideas while he was a student at Tübingen (1589-94). *SAU The first known calculation of the reciprocal of the golden ratio as a decimal, "about 0.6180340", was written in 1597 by Maestlin in a letter to Kepler. He is also remembered for: the occultation of Mars by Venus on 13 October 1590, seen by Maestlin at Heidelberg. *Wik

1715 Étienne Bonnot de Condillac (30 Sep 1715; 3 Aug 1780) French philosopher, psychologist, logician, economist, and the leading advocate in France of the ideas of John Locke (1632-1704). In his works La Logique (1780) and La Langue des calculs (1798), Condillac emphasized the importance of language in logical reasoning, stressing the need for a scientifically designed language and for mathematical calculation as its basis. He combined elements of Locke's theory of knowledge with the scientific methodology of Newton; all knowledge springs from the senses and association of ideas. Condillac devoted careful attention to questions surrounding the origins and nature of language, and enhanced contemporary awareness of the importance of the use of language as a scientific instrument. *TIS

1775 Robert Adrain born. Although born in Ireland, he was one of the first creative mathematicians to work in America. *VFR Adrain was appointed as a master at Princeton Academy and remained there until 1800, when the family moved to York in Pennsylvania. In York Adrain became Principal of York County Academy. When the first mathematics journal, the Mathematical Correspondent, began publishing in 1804 under the editorship of George Baron, Adrain became one of its main contributors. One year later, in 1805, he moved again, this time to Reading, also in Pennsylvania, where he was appointed Principal of the Academy. After arriving in Reading, Adrain continued to publish in the Mathematical Correspondent and, in 1807, he became editor of the journal. One has to understand that publishing a mathematics journal in the United States at this time was not an easy task, since there were only two mathematicians capable of work of international standing in the whole country, namely Adrain and Nathaniel Bowditch. Despite these problems, Adrain decided to try publishing his own mathematics journal after he had edited only one volume of the Mathematical Correspondent and, in 1808, he began editing his journal the Analyst or Mathematical Museum.
With so few creative mathematicians in the United States, the journal had little chance of success, and indeed it ceased publication after only one year. After the journal ceased publication, Adrain was appointed professor of mathematics at Queen's College (now Rutgers University), New Brunswick, where he worked from 1809 to 1813. Despite Queen's College trying its best to keep him there, Adrain moved to Columbia College in New York in 1813. He tried to restart his mathematical journal the Analyst in 1814, but only one part appeared. In 1825, while he was still on the staff at Columbia College, Adrain made another attempt at publishing a mathematical journal. Realising that the Analyst had been too high-powered for the mathematicians of the United States, he published the Mathematical Diary in 1825. This was a lower-level publication which continued under the editorship of James Ryan when Adrain left Columbia College in 1826. *SAU

1870 Jean-Baptiste Perrin (30 Sep 1870; 17 Apr 1942) was a French physicist who, in his studies of the Brownian motion of minute particles suspended in liquids, verified Albert Einstein's explanation of this phenomenon and thereby confirmed the atomic nature of matter. Using a gamboge emulsion, Perrin was able to determine by a new method one of the most important physical constants, Avogadro's number (the number of molecules of a substance in so many grams as indicated by the molecular weight, for example, the number of molecules in two grams of hydrogen). The value obtained corresponded, within the limits of error, to that given by the kinetic theory of gases. For this achievement he was honoured with the Nobel Prize for Physics in 1926. *TIS

1882 Hans Wilhelm Geiger (30 Sep 1882; 24 Sep 1945) was a German physicist who introduced the Geiger counter, the first successful detector of individual alpha particles and other ionizing radiations. After earning his Ph.D. at the University of Erlangen in 1906, he collaborated at the University of Manchester with Ernest Rutherford. He used the first version of his particle counter, and other detectors, in experiments that led to the identification of the alpha particle as the nucleus of the helium atom and to Rutherford's statement (1912) that the nucleus occupies a very small volume in the atom. The Geiger-Müller counter (developed with Walther Müller) had improved durability, performance and sensitivity to detect not only alpha particles but also beta particles (electrons) and ionizing electromagnetic photons. Geiger returned to Germany in 1912 and continued to investigate cosmic rays, artificial radioactivity, and nuclear fission. *TIS

1883 Ernst David Hellinger (1883-1950) introduced a new type of integral: the Hellinger integral. Jointly with Hilbert he produced an important theory of forms. *SAU

1894 Dirk Jan Struik (September 30, 1894 – October 21, 2000) was a Dutch mathematician and Marxian theoretician who spent most of his life in the United States. In 1924, funded by a Rockefeller fellowship, Struik traveled to Rome to collaborate with the Italian mathematician Tullio Levi-Civita. It was in Rome that Struik first developed a keen interest in the history of mathematics. In 1925, thanks to an extension of his fellowship, Struik went to Göttingen to work with Richard Courant compiling Felix Klein's lectures on the history of 19th-century mathematics. He also started researching Renaissance mathematics at this time. Struik was a steadfast Marxist.
Having joined the Communist Party of the Netherlands in 1919, he remained a Party member his entire life. When asked, upon the occasion of his 100th birthday, how he managed to pen peer-reviewed journal articles at such an advanced age, Struik replied blithely that he had the "3Ms" a man needs to sustain himself: Marriage (his wife, Saly Ruth Ramler, was not alive when he turned one hundred in 1994), Mathematics, and Marxism. It is therefore not surprising that Dirk suffered persecution during the McCarthyite era. He was accused of being a Soviet spy, a charge he vehemently denied. Invoking the First and Fifth Amendments of the U.S. Constitution, he refused to answer any of the 200 questions put forward to him during the HUAC hearing. He was suspended from teaching for five years (with full salary) by MIT in the 1950s. Struik was re-instated in 1956. He retired from MIT in 1960 as Professor Emeritus of Mathematics. Aside from purely academic work, Struik also helped found the Journal of Science and Society, a Marxian journal on the history, sociology and development of science. In 1950 Struik published his Lectures on Classical Differential Geometry. Struik's other major works include such classics as A Concise History of Mathematics, Yankee Science in the Making, The Birth of the Communist Manifesto, and A Source Book in Mathematics, 1200-1800, all of which are considered standard textbooks or references. Struik died October 21, 2000, 21 days after celebrating his 106th birthday. *Wik

1905 Sir Nevill F. Mott (30 Sep 1905; 8 Aug 1996) English physicist who shared (with P. W. Anderson and J. H. Van Vleck of the U.S.) the 1977 Nobel Prize for Physics for his independent researches on the magnetic and electrical properties of amorphous semiconductors. Whereas the electric properties of crystals are described by the Band Theory - which compares the conductivity of metals, semiconductors, and insulators - a famous exception is provided by nickel oxide. According to band theory, nickel oxide ought to be a metallic conductor but in reality is an insulator. Mott refined the theory to include electron-electron interaction and explained so-called Mott transitions, by which some metals become insulators as the electron density decreases by separating the atoms from each other in some convenient way. *TIS

1913 Samuel Eilenberg (September 30, 1913 – January 30, 1998) was a Polish and American mathematician born in Warsaw, Russian Empire (now in Poland), who died in New York City, USA, where he had spent much of his career as a professor at Columbia University. Eilenberg was a member of Bourbaki and, with Henri Cartan, wrote the 1956 book Homological Algebra, which became a classic. Later in life he worked mainly in pure category theory, being one of the founders of the field. The Eilenberg swindle (or telescope) is a construction applying the telescoping cancellation idea to projective modules. Eilenberg also wrote an important book on automata theory. The X-machine, a form of automaton, was introduced by Eilenberg in 1974. *Wik

1916 Richard Kenneth Guy (born September 30, 1916, Nuneaton, Warwickshire) is a British mathematician, and Professor Emeritus in the Department of Mathematics at the University of Calgary. He is best known for co-authorship (with John Conway and Elwyn Berlekamp) of Winning Ways for your Mathematical Plays and authorship of Unsolved Problems in Number Theory, but he has also published over 100 papers and books covering combinatorial game theory, number theory and graph theory.
He is said to have developed the partially tongue-in-cheek "Strong Law of Small Numbers," which says there are not enough small integers available for the many tasks assigned to them - thus explaining many coincidences and patterns found among numerous cultures. Additionally, around 1959, Guy discovered a unistable polyhedron having only 19 faces; no such construct with fewer faces has yet been found. Guy also discovered the glider in Conway's Game of Life. Guy is also a notable figure in the field of chess endgame studies. He composed around 200 studies, and was co-inventor of the Guy-Blandford-Roycroft code for classifying studies. He also served as the endgame study editor for the British Chess Magazine from 1948 to 1951. Guy wrote four papers with Paul Erdős, giving him an Erdős number of 1. He also solved one of Erdős' problems. His son, Michael Guy, is also a computer scientist and mathematician. *Wik

1918 Leslie Fox (30 September 1918 – 1 August 1992) was a British mathematician noted for his contribution to numerical analysis. *Wik

1953 Lewis Fry Richardson, FRS (11 October 1881 - 30 September 1953) was an English mathematician, physicist, meteorologist, psychologist and pacifist who pioneered modern mathematical techniques of weather forecasting, and the application of similar techniques to studying the causes of wars and how to prevent them. He is also noted for his pioneering work on fractals and a method for solving a system of linear equations known as modified Richardson iteration (a minimal sketch of the method appears below). *Wik

1985 Dr. Charles Francis Richter (26 Apr 1900, 30 Sep 1985) was an American seismologist and inventor of the Richter Scale that measures earthquake intensity, which he developed with his colleague, Beno Gutenberg, in the early 1930's. The scale assigns numerical ratings to the energy released by earthquakes. Richter used a seismograph (an instrument generally consisting of a constantly unwinding roll of paper, anchored to a fixed place, and a pendulum or magnet suspended with a marking device above the roll) to record actual earth motion during an earthquake. The scale takes into account the instrument's distance from the epicenter. Gutenberg suggested that the scale be logarithmic so, for example, a quake of magnitude 7 would be ten times stronger than a 6. *TIS

Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Sunday, 29 September 2013

On This Day in Math - September 29

~Enrico Fermi

The 272nd day of the year; 272 = 2^4 · 17, and is the sum of four consecutive primes (61 + 67 + 71 + 73). 272 is also a Pronic or Heteromecic number, the product of two consecutive factors, 16 × 17 (which makes it twice a triangular number).
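As an aside to the Lewis Fry Richardson entry above: modified Richardson iteration solves A x = b by repeatedly nudging a guess along the residual, x_{k+1} = x_k + ω(b - A x_k). A minimal sketch in Python (the example matrix and all names are mine, for illustration):

# Modified Richardson iteration: converges for 0 < omega < 2/lambda_max(A)
# when A is symmetric positive definite.
import numpy as np

def richardson(A, b, omega, tol=1e-10, max_iter=10000):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x              # residual
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r          # damped correction
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
x = richardson(A, b, omega=0.4)         # 0.4 < 2/lambda_max(A), about 0.43
print(x, A @ x)                         # A x should be close to b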
1609 Almost exactly a year after the first application for a patent of the telescope, Giambattista della Porta, the Neapolitan polymath - whose Magia Naturalis of 1589 was well known all over Europe because of a tantalizing hint at what might be accomplished by a combination of a convex and a concave lens: "With a concave you shall see small things afar off, very clearly; with a convex, things neerer to be greater, but more obscurely: if you know how to fit them both together, you shall see both things afar off, and things neer hand, both greater and clearly." - sends a letter to the founder of the Accademia dei Lincei, Prince Federico Cesi in Rome, with a sketch of an instrument that had just reached him, and he wrote: "It is a small tube of soldered silver, one palm in length, and three finger breadths in diameter, which has a convex glass in the end. There is another tube of the same material four finger breadths long, which enters into the first one, and in the end it has a concave [glass], which is secured like the first one. If observed with that first tube, faraway things are seen as if they were near, but because the vision does not occur along the perpendicular, they appear obscure and indistinct. When the other concave tube, which produces the opposite effect, is inserted, things will be seen clear and erect, and it goes in and out, as in a trombone, so that it adjusts to the eyesight of [particular] observers, which all differ." *Albert Van Helden, Galileo and the telescope; Origins of the Telescope, Royal Netherlands Academy of Arts and Sciences, 2010 (I assume that we can safely date the invention of the trombone prior to 1609 also.)

1801 Gauss's Disquisitiones Arithmeticae published. It is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798, when Gauss was 21, and first published in 1801, when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre, and adds important new results of his own. The book is divided into seven sections:
Section I. Congruent Numbers in General
Section II. Congruences of the First Degree
Section III. Residues of Powers
Section IV. Congruences of the Second Degree
Section V. Forms and Indeterminate Equations of the Second Degree
Section VI. Various Applications of the Preceding Discussions
Section VII. Equations Defining Sections of a Circle.
Sections I to III are essentially a review of previous results, including Fermat's little theorem, Wilson's theorem and the existence of primitive roots. Although few of the results in these first sections are original, Gauss was the first mathematician to bring this material together and treat it in a systematic way. He was also the first mathematician to realize the importance of the property of unique factorization (sometimes called the fundamental theorem of arithmetic), which he states and proves explicitly. From Section IV onwards, much of the work is original. Section IV itself develops a proof of quadratic reciprocity; Section V, which takes up over half of the book, is a comprehensive analysis of binary quadratic forms; and Section VI includes two different primality tests. Finally, Section VII is an analysis of cyclotomic polynomials, which concludes by giving the criteria that determine which regular polygons are constructible, i.e. can be constructed with a compass and unmarked straightedge alone (a small computational statement of the criterion follows below). *Wik
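The constructibility criterion Gauss arrived at (with the necessity direction later completed by Wantzel) is easy to state computationally: a regular n-gon is constructible exactly when n is a power of two times a product of distinct Fermat primes. A minimal sketch in Python, mine rather than the blog's:

# Gauss-Wantzel criterion: a regular n-gon is constructible with compass
# and straightedge iff n = 2^k * (product of distinct Fermat primes).
FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # the five known Fermat primes

def constructible(n):
    if n < 3:
        return False
    while n % 2 == 0:           # strip the power of two
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:      # a repeated Fermat prime factor fails
                return False
    return n == 1

print([n for n in range(3, 30) if constructible(n)])
# -> [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24]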
In 1988, the space shuttle Discovery blasted off from Cape Canaveral, Fla., marking America's return to manned space flight following the Challenger disaster. *TIS

1994 HotJava ---- Programmers first demonstrated the HotJava prototype to executives at Sun Microsystems Inc. A browser making use of Java technology, HotJava attempted to transfer Sun's new programming platform for use on the World Wide Web. Java is based on the concept of being truly universal, allowing an application written in the language to be used on a computer with any type of operating system or on the web, televisions or telephones. *CHM

1561 Adriaan van Roomen (29 Sept 1561, 4 May 1615) is often known by his Latin name Adrianus Romanus. After studying at the Jesuit College in Cologne, Roomen studied medicine at Louvain. He then spent some time in Italy, particularly with Clavius in Rome in 1585. Roomen was professor of mathematics and medicine at Louvain from 1586 to 1592; he then went to Würzburg where again he was professor of medicine. He was also "Mathematician to the Chapter" in Würzburg. From 1603 to 1610 he lived frequently in both Louvain and Würzburg. He was ordained a priest in 1604. After 1610 he tutored mathematics in Poland. One of Roomen's most impressive results was finding π to 16 decimal places. He did this in 1593 using 2^30-sided polygons. Roomen's interest in π was almost certainly a result of his friendship with Ludolph van Ceulen. Roomen proposed a problem which involved solving an equation of degree 45. The problem was solved by Viète, who realised that there was an underlying trigonometric relation. After this a friendship grew up between the two men. Viète proposed the problem of drawing a circle to touch three given circles to Roomen (the Apollonian Problem), and Roomen solved it using hyperbolas, publishing the result in 1596. Roomen worked on trigonometry and the calculation of chords in a circle. In 1596 Rheticus's trigonometric tables Opus palatinum de triangulis were published, many years after Rheticus died. Roomen was critical of the accuracy of the tables and wrote to Clavius at the Collegio Romano in Rome pointing out that, to calculate tangent and secant tables correctly to ten decimal places, it was necessary to work to 20 decimal places for small values of sine, see [2]. In 1600 Roomen visited Prague, where he met Kepler and told him of his worries about the methods employed in Rheticus's trigonometric tables. *SAU

1803 Jacques Charles-François Sturm (29 Sep 1803; 18 Dec 1855) French mathematician whose work resulted in Sturm's theorem, an important contribution to the theory of equations. While a tutor of the de Broglie family in Paris (1823-24), Sturm met many of the leading French scientists and mathematicians. In 1826, with Swiss engineer Daniel Colladon, he made the first accurate determination of the velocity of sound in water. A year later he wrote a prizewinning essay on compressible fluids. Since the time of René Descartes, a problem had existed of finding the number of solutions of a given second-order differential equation within a given range of the variable. Sturm provided a complete solution to the problem with his theorem, which first appeared in Mémoire sur la résolution des équations numériques (1829; "Treatise on Numerical Equations"). Those principles have been applied in the development of quantum mechanics, as in the solution of the Schrödinger equation and its boundary values.
*TIS Sturm is also remembered for the Sturm-Liouville problem, an eigenvalue problem in second order differential equations. *SAU

1812 Gustav Adolph Göpel (29 Sept 1812, 7 June 1847) Göpel's doctoral dissertation studied periodic continued fractions of the roots of integers and derived a representation of the numbers by quadratic forms. He wrote on Steiner's synthetic geometry and an important work, Theoriae transcendentium Abelianarum primi ordinis adumbratio levis, published after his death, continued the work of Jacobi on elliptic functions. This work was published in Crelle's Journal in 1847. *SAU

1895 Harold Hotelling (29 September 1895 - 26 December 1973) He originally studied journalism at the University of Washington, earning a degree in it in 1919, but eventually turned to mathematics, gaining a PhD in Mathematics from Princeton in 1924 for a dissertation dealing with topology. However, he became interested in statistics that used higher-level math, leading him to go to England in 1929 to study with Fisher. Although Hotelling first went to Stanford University in 1931, not many years afterwards he became a Professor of Economics at Columbia University, where he helped create Columbia's Stat Dept. In 1946, Hotelling was recruited by Gertrude Cox to form a new Stat Dept at the University of North Carolina at Chapel Hill. He became Professor and Chairman of the Dept of Mathematical Statistics, Professor of Economics, and Associate Director of the Institute of Statistics at UNC-CH. (When Hotelling and his wife first arrived in Chapel Hill they instituted the "Hotelling Tea", where they opened their home to students and faculty for tea time once a month.) Dr. Hotelling's major contributions to statistical theory were in multivariate analysis, with probably his most important paper his famous 1931 paper "The Generalization of Student's Ratio", now known as Hotelling's T^2, which involves a generalization of Student's t-test for multivariate data. In 1953, Hotelling published a 30-plus-page paper on the distribution of the correlation coefficient, following up on the work of Florence Nightingale David in 1938. *David Bee

1931 James Watson Cronin (29 Sep 1931) American particle physicist, who shared (with Val Logsdon Fitch) the 1980 Nobel Prize for Physics for "the discovery of violations of fundamental symmetry principles in the decay of neutral K-mesons." Their experiment proved that a reaction run in reverse does not follow the path of the original reaction, which implied that time has an effect on subatomic-particle interactions. Thus the experiment demonstrated a break in particle-antiparticle symmetry for certain reactions of subatomic particles. *TIS

1935 Hillel (Harry) Fürstenberg (September 29, 1935) is an American-Israeli mathematician, a member of the Israel Academy of Sciences and Humanities and U.S. National Academy of Sciences and a laureate of the Wolf Prize in Mathematics. He is known for his application of probability theory and ergodic theory methods to other areas of mathematics, including number theory and Lie groups. He gained attention at an early stage in his career for producing an innovative topological proof of the infinitude of prime numbers. He proved unique ergodicity of horocycle flows on compact hyperbolic Riemann surfaces in the early 1970s. In 1977, he gave an ergodic theory reformulation, and subsequently proof, of Szemerédi's theorem. The Fürstenberg boundary and Fürstenberg compactification of a locally symmetric space are named after him.
*Wik

1939 Samuel Dickstein (May 12, 1851 – September 29, 1939) was a Polish mathematician of Jewish origin. He was one of the founders of the Jewish party "Zjednoczenie" (Unification), which advocated the assimilation of Polish Jews. He was born in Warsaw and was killed there by a German bomb at the beginning of World War II. All the members of his family were killed during the Holocaust. Dickstein wrote many mathematical books and founded the journal Wiadomości Matematyczne (Mathematical News), now published by the Polish Mathematical Society. He was a bridge between the times of Cauchy and Poincaré and those of the Lwów School of Mathematics. He was also thanked by Alexander Macfarlane for contributing to the Bibliography of Quaternions (1904) published by the Quaternion Society. He was also one of the personalities who contributed to the foundation of the Warsaw Public Library in 1907. *Wik

1941 Friedrich Engel (26 Dec 1861, 29 Sept 1941) Engel was taught by Klein, who recognized that he was the right man to assist Lie. At Klein's suggestion Engel went to work with Lie in Christiania (now Oslo) from 1884 until 1885. In 1885 Engel's Habilitation thesis was accepted by Leipzig and he became a lecturer there. The year after Engel returned to Leipzig from Christiania, Lie was appointed to succeed Klein, and the collaboration of Lie and Engel continued. In 1889 Engel was promoted to assistant professor and, ten years later, he was promoted to associate professor. In 1904 he accepted the chair of mathematics at Greifswald when his friend Eduard Study resigned the chair. Engel's final post was the chair of mathematics at Giessen, which he accepted in 1913, and he remained there for the rest of his life. In 1931 he retired from the university but continued to work in Giessen. The collaboration between Engel and Lie led to Theorie der Transformationsgruppen, a work in three volumes published between 1888 and 1893. This work was, "... prepared by S Lie with the cooperation of F Engel..." In many ways it was Engel who put Lie's ideas into a coherent form and made them widely accessible. From 1922 to 1937 Engel published Lie's collected works in six volumes and prepared a seventh (which in fact was not published until 1960). Engel's efforts in producing Lie's collected works are described as, "... an exceptional service to mathematics in particular, and scholarship in general. Lie's peculiar nature made it necessary for his works to be elucidated by one who knew them intimately, and thus Engel's 'Annotations' competed in scope with the text itself." Engel also edited Hermann Grassmann's complete works, and really only after this was published did Grassmann get the fame which his work deserved. Engel collaborated with Stäckel in studying the history of non-euclidean geometry. He also wrote on continuous groups and partial differential equations, translated works of Lobachevsky from Russian to German, and wrote on discrete groups, Pfaffian equations and other topics. *SAU

1955 L(ouis) L(eon) Thurstone (29 May 1887, 29 Sep 1955) was an American psychologist who improved psychometrics, the measurement of mental functions, and developed statistical techniques for multiple-factor analysis of performance on psychological tests. In high school, he published a letter in Scientific American on a problem of diversion of water from Niagara Falls, and invented a method of trisecting an angle. At university, Thurstone studied engineering.
He designed a patented motion picture projector, later demonstrated in the laboratory of Thomas Edison, with whom Thurstone worked briefly as an assistant. When he began teaching engineering, Thurstone became interested in the learning process and pursued a doctorate in psychology. *TIS

2003 Ovide Arino (24 April 1947 - 29 September 2003) was a mathematician working on delay differential equations. His field of application was population dynamics. He was a quite prolific writer, publishing over 150 articles in his lifetime. He was also very active in terms of student supervision, having supervised about 60 theses in total in about 20 years. Also, he organized or co-organized many scientific events. But, most of all, he was an extremely kind human being, interested in finding the good in everyone he met. *Euromedbiomath

2010 Georges Charpak (1 August 1924 – 29 September 2010) was a French physicist who was awarded the Nobel Prize in Physics in 1992 "for his invention and development of particle detectors, in particular the multiwire proportional chamber". This was the last time a single person was awarded the physics prize. *Wik

Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Saturday, 28 September 2013

On This Day in Math - September 28

But in the present century, thanks in good part to the influence of Hilbert, we have come to see that the unproved postulates with which we start are purely arbitrary. They must be consistent, they had better lead to something interesting. ~Julian Lowell Coolidge

The 271st day of the year; 271 is a prime number and is the sum of eleven consecutive primes (7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43).

490 B.C. In one of history's great battles, the Greeks defeated the Persians at Marathon. A Greek soldier was dispatched to notify Athens of the victory, running the entire distance and providing the name and model for the modern "marathon" race. *VFR

1695 After fitting several comets' data using Newton's proposal that they followed parabolic paths, Edmund Halley was "inspired" to test his own measurements of the 1682 comet against an elliptical orbit. He writes to Newton, "I am more and more confirmed that we have seen that Comet now three times since Ye Year 1531." *David A. Grier, When Computers Were Human

1791 Captain George Vancouver observed this Wednesday morning a partial solar eclipse. He went on to name the barren rocky cluster of isles by the name of Eclipse Islands. *NSEC

1858 Donati's comet (discovered by Giovanni Donati, 1826-1873) became the first to be photographed. It was a bright comet that developed a spectacular curved dust tail with two thin gas tails, captured by an English commercial photographer, William Usherwood, using a portrait camera at a low focal ratio. At Harvard, W. C. Bond attempted an image on a collodion plate the following night, but the comet shows only faintly and no tail can be seen. Bond was subsequently able to evaluate the image on Usherwood's plate. The earliest celestial daguerreotypes were made in 1850-51, though after the Donati comet, no further comet photography took place until 1881, when P. J. C. Janssen and J. W.
Draper took the first generally recognized photographs of a comet. *TIS "William Usherwood, a commercial photographer from Dorking, Surrey, took the first ever photograph of a comet when he photographed Donati's comet from Walton Common on the 27th September 1858, beating George Bond from Harvard Observatory by a night! Unfortunately, the picture taken by Usherwood has been lost." *Exposure web site

1917 Richard Courant wrote to Nina Runge, his future wife, that he finally got the opportunity to talk to Ferdinand Springer about "a publishing project" and that things looked promising. This meeting led to a contract and a series of books now called the "Yellow Series". *VFR

1938 Paul Erdős boards the Queen Mary bound for the USA. Alarmed by Hitler's demands to annex the Sudetenland, Erdős hurriedly left Budapest and made his way through Italy and France to London. He would pass through Ellis Island on his way to a position at Princeton's Institute for Advanced Study on October 4. *Bruce Schechter, My Brain is Open: The Mathematical Journeys of Paul Erdos

1969 Murchison meteorite: a meteorite fell over Murchison, Australia. Only 100 kg of this meteorite have been found. Classified as a carbonaceous chondrite, type II (CM2), this meteorite is suspected to be of cometary origin due to its high water content (12%). An abundance of amino acids found within this meteorite has led to intense study by researchers as to its origins. More than 92 different amino acids have been identified within the Murchison meteorite to date. Nineteen of these are found on Earth. The remaining amino acids have no apparent terrestrial source. *TIS

2009: goes online. *Peter Krautzberger, comments

2011 President Barack Obama announced that Richard Alfred Tapia was among twelve scientists to be awarded the National Medal of Science, the top award the United States offers its researchers. Tapia is currently the Maxfield and Oshman Professor of Engineering; Associate Director of Graduate Studies, Office of Research and Graduate Studies; and Director of the Center for Excellence and Equity in Education at Rice University. He is a renowned American mathematician and champion of under-represented minorities in the sciences. *Wik

551 B.C. Birthdate of the Chinese philosopher and educator Confucius. His birthday is observed as "Teacher's Day" in memory of his great contribution to the Chinese Nation. His most famous aphorism is: "With education there is no distinction between classes or races of men." *VFR

1605 Ismael Boulliau (28 Sept 1605, 25 Nov 1694) was a French clergyman and amateur mathematician who proposed an inverse square law for gravitation before Newton. Boulliau was a friend of Pascal, Mersenne and Gassendi, and supported Galileo and Copernicus. He claimed that if a planetary moving force existed then it should vary inversely as the square of the distance (Kepler had claimed the first power): "As for the power by which the Sun seizes or holds the planets, and which, being corporeal, functions in the manner of hands, it is emitted in straight lines throughout the whole extent of the world, and like the species of the Sun, it turns with the body of the Sun; now, seeing that it is corporeal, it becomes weaker and attenuated at a greater distance or interval, and the ratio of its decrease in strength is the same as in the case of light, namely, the duplicate proportion, but inversely, of the distances, that is, 1/d²."
*SAU

1651 Johann Philipp von Wurzelbau (28 September 1651, Nürnberg - 21 July 1725, Nürnberg) was a German astronomer. A native of Nuremberg, Wurzelbauer was a merchant who became an astronomer. As a youth, he was keenly interested in mathematics and astronomy but had been forced to earn his living as a merchant. He married twice: his first marriage was to Maria Magdalena Petz (1656-1713), his second to Sabina Dorothea Kress (1658-1733). Petz bore him six children. He first published a work concerning his observations of the great comet of 1680, and initially began his work at a private castle-observatory on Spitzenberg 4 owned by Georg Christoph Eimmart (completely destroyed during World War II), the director of Nuremberg's painters' academy. Wurzelbauer was 64 when he began this second career, but proved himself to be an able assistant to Eimmart. A large quadrant from his days at Eimmart's observatory still survives. After 1682, Wurzelbauer owned his own astronomical observatory and instruments, observed the transit of Mercury and solar eclipses, and worked out the geographical latitude of his native city. After 1683, he had withdrawn himself completely from business life to dedicate himself to astronomy. By 1700, Wurzelbauer had become the most well-known astronomer in Nuremberg. For his services to the field of astronomy, he was ennobled in 1692 by Leopold I, Holy Roman Emperor, and added the von to his name. He was a member of the French and the Prussian academies of the sciences. The crater Wurzelbauer on the Moon is named after him. *Wik

1698 Pierre-Louis Moreau de Maupertuis (28 Sep 1698; 27 Jul 1759) French mathematician, biologist, and astronomer. In 1732 he introduced Newton's theory of gravitation to France. He was a member of an expedition to Lapland in 1736 which set out to measure the length of a degree along the meridian. Maupertuis' measurements both verified Newton's predictions that the Earth would be an oblate spheroid, and corrected earlier results of Cassini. Maupertuis published on many topics including mathematics, geography, astronomy and cosmology. In 1744 he first enunciated the Principle of Least Action, and he published it in Essai de cosmologie in 1750. Maupertuis hoped that the principle might unify the laws of the universe, and combined it with an attempted proof of the existence of God. *TIS

1761 François Budan de Boislaurent (28 Sept 1761, 6 Oct 1840) was a Haitian-born amateur mathematician best remembered for his discovery of a rule which gives necessary conditions for a polynomial equation to have n real roots between two given numbers. Budan's rule was in a memoir sent to the Institute in 1803, but it was not made public until 1807 in Nouvelle méthode pour la résolution des équations numériques d'un degré quelconque. In it Budan wrote, "If an equation in x has n roots between zero and some positive number p, the transformed equation in (x - p) must have at least n fewer variations in sign than the original." *SAU (Sounds like a nice follow-up extension to Descartes' Rule of Signs in pre-calculus classes. Mention the history; how many times do your students hear about a Haitian mathematician? A small sketch of the rule in action follows below.)
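As promised, a minimal sketch of Budan's rule in Python (coefficient conventions and helper names are mine; an illustration, not a robust root counter). If V(t) counts the sign variations in the coefficients of p rewritten in powers of (x - t), then the number of real roots in (0, p) is at most V(0) - V(p).

from math import factorial

def taylor_coeffs(coeffs, t):
    # coeffs[k] is the coefficient of x^k in p(x); return the coefficients
    # of p written in powers of (x - t), i.e. p^(k)(t) / k!.
    n = len(coeffs)
    return [sum((factorial(j) // factorial(j - k)) * coeffs[j] * t ** (j - k)
                for j in range(k, n)) / factorial(k)
            for k in range(n)]

def variations(seq):
    signs = [s for s in ((c > 0) - (c < 0) for c in seq) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def budan_bound(coeffs, a, b):
    # Upper bound on the number of real roots of p in the interval (a, b).
    return variations(taylor_coeffs(coeffs, a)) - variations(taylor_coeffs(coeffs, b))

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2) has two roots in (0, 3):
print(budan_bound([2, -3, 1], 0, 3))   # -> 2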
1824 George Johnston Allman (28 September 1824 – 9 May 1904) was an Irish professor, mathematician, classical scholar, and historian of ancient Greek mathematics. *Wik

1873 Julian Lowell Coolidge (28 Sep 1873 – 5 Mar 1954) After an education at Harvard (B.A. 1895), Oxford (B.Sc. 1897), Turin (with Corrado Segre) and Bonn (with Eduard Study, Ph.D. 1904), he came back to Harvard to teach until he retired in 1940. He was an enthusiastic teacher with a flair for witty remarks. [DSB 3, 399] *VFR He published numerous works on theoretical mathematics along the lines of the Study-Segre school. He taught at Groton School, Conn. (1897-9) where one of his pupils was Franklin D Roosevelt, the future U.S. president. From 1899 he taught at Harvard University. Between 1902 and 1904, he went to Turin to study under Corrado Segre and then to Bonn where he studied under Eduard Study. His Mathematics of the Great Amateurs is perhaps his best-known work. *TIS

1881 Edward Ross studied at Edinburgh and Cambridge universities. After working with Karl Pearson in London he was appointed Professor of Mathematics at the Christian College in Madras, India. Ill health forced him to retire back to Scotland. *SAU

1925 Martin David Kruskal (September 28, 1925 – December 26, 2006) was an American mathematician and physicist. He made fundamental contributions in many areas of mathematics and science, ranging from plasma physics to general relativity and from nonlinear analysis to asymptotic analysis. His single most celebrated contribution was the discovery and theory of solitons. His Ph.D. dissertation, written under the direction of Richard Courant and Bernard Friedman at New York University, was on the topic "The Bridge Theorem For Minimal Surfaces." He received his Ph.D. in 1952. In the 1950s and early 1960s, he worked largely on plasma physics, developing many ideas that are now fundamental in the field. His theory of adiabatic invariants was important in fusion research. Important concepts of plasma physics that bear his name include the Kruskal–Shafranov instability and the Bernstein–Greene–Kruskal (BGK) modes. With I. B. Bernstein, E. A. Frieman, and R. M. Kulsrud, he developed the MHD (or magnetohydrodynamic) Energy Principle. His interests extended to plasma astrophysics as well as laboratory plasmas. Martin Kruskal's work in plasma physics is considered by some to be his most outstanding.

In 1960, Kruskal discovered the full classical spacetime structure of the simplest type of black hole in General Relativity. A spherically symmetric black hole can be described by the Schwarzschild solution, which was discovered in the early days of General Relativity. However, in its original form, this solution only describes the region exterior to the horizon of the black hole. Kruskal (in parallel with George Szekeres) discovered the maximal analytic continuation of the Schwarzschild solution, which he exhibited elegantly using what are now called Kruskal–Szekeres coordinates. This led Kruskal to the astonishing discovery that the interior of the black hole looks like a "wormhole" connecting two identical, asymptotically flat universes. This was the first real example of a wormhole solution in General Relativity. The wormhole collapses to a singularity before any observer or signal can travel from one universe to the other. This is now believed to be the general fate of wormholes in General Relativity. Martin Kruskal was married to Laura Kruskal, his wife of 56 years.
Laura is well known as a lecturer and writer about origami and originator of many new models. Martin, who had a great love of games, puzzles, and word play of all kinds, also invented several quite unusual origami models including an envelope for sending secret messages (anyone who unfolded the envelope to read the message would have great difficulty refolding it to conceal the deed).

1925 Seymour R. Cray (28 Sep 1925 – 5 Oct 1996) American electronics engineer who pioneered the use of transistors in computers and later developed massive supercomputers to run business and government information networks. He was the preeminent designer of the large, high-speed computers known as supercomputers. *TIS Cray began his engineering career building cryptographic machinery for the U.S. government and went on to co-found Control Data Corporation (CDC) in the late 1950s. For over three decades, first with CDC then with his own companies, Cray consistently built the fastest computers in the world, leading the industry with innovative architectures and packaging and allowing the solution of hundreds of difficult scientific, engineering, and military problems. Many of Cray's supercomputers are on exhibit at The Computer Museum History Center. Cray died in an automobile accident in 1996. *CHM

1961 Enrique Zuazua Iriondo (September 28, 1961, Eibar, Gipuzkoa, Basque Country, Spain – ) is a Research Professor at Ikerbasque, the Basque Foundation for Science, in BCAM – the Basque Center for Applied Mathematics, which he founded in 2008 as Scientific Director. He is also the Director of the BCAM Chair in Partial Differential Equations, Control and Numerics and Professor (on leave) of Applied Mathematics at the Universidad Autónoma de Madrid (UAM). His domains of expertise in Applied Mathematics include Partial Differential Equations, Control Theory and Numerical Analysis. These subjects interrelate, and their final aim is to model, analyse, computer-simulate and ultimately contribute to the control and design of the most diverse natural phenomena in all fields of R + D + i. Twenty PhD students have earned their degrees under his supervision, and they now occupy positions in centres throughout the world: Brazil, Chile, China, Mexico, Romania, Spain, etc. He has done intensive international work, having led co-operation programmes with various Latin American countries, as well as with Portugal, the Maghreb, China and Iran, amongst others. *Wik

1694 Gabriel Mouton was a French clergyman who worked on interpolation and on astronomy. *SAU

1953 Edwin Powell Hubble (20 Nov 1889 – 28 Sep 1953) American astronomer, born in Marshfield, Mo., who is considered the founder of extragalactic astronomy and who provided the first evidence of the expansion of the universe. In 1923-5 he identified Cepheid variables in "spiral nebulae" M31 and M33 and proved conclusively that they are outside the Galaxy. His investigation of these objects, which he called extragalactic nebulae and which astronomers today call galaxies, led to his now-standard classification system of elliptical, spiral, and irregular galaxies, and to proof that they are distributed uniformly out to great distances.
Hubble measured distances to galaxies and their redshifts, and in 1929 he published the velocity-distance relation which is the basis of modern cosmology. *TIS

1992 John Leech is best known for the Leech lattice which is important in the theory of finite simple groups. *SAU

2004 Jacobus Hendricus ("Jack") van Lint (1 September 1932 – 28 September 2004) was a Dutch mathematician, professor at the Eindhoven University of Technology, of which he was rector magnificus from 1991 till 1996. His field of research was initially number theory, but he worked mainly in combinatorics and coding theory. Van Lint was honored with a great number of awards. He became a member of the Royal Netherlands Academy of Arts and Sciences in 1972, received four honorary doctorates, was an honorary member of the Royal Netherlands Mathematics Society (Koninklijk Wiskundig Genootschap), and received a Knighthood. *Wik

Credits: *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC=NASA Solar Eclipse Calendar *RMAT=The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA=Today in Astronomy *TIS=Today in Science History *VFR=V Frederick Rickey, USMA *Wik=Wikipedia *WM=Women of Mathematics, Grinstein & Campbell

Friday, 27 September 2013

On This Day in Math - September 27

Algebra exists only for the elucidation of geometry.
~William Edge

The 270th day of the year; the harmonic mean of the factors of 270 is an integer. The first three numbers with this property are 1, 6, and 28 ... what is the next one?

14 A.D.: A total lunar eclipse marked the death of Augustus: "The Moon in the midst of a clear sky became suddenly eclipsed; the soldiers who were ignorant of the cause took this for an omen referring to their present adventures: to their labors they compared the eclipse of the planet, and prophesied 'that if to the distressed goodness should be restored her wonted brightness and splendor, equally successful would be the issue of their struggle.' Hence they made a loud noise, by ringing upon brazen metal, and by blowing trumpets and cornets; as she appeared brighter or darker they exulted or lamented" - Tacitus *NASA Lunar Eclipses

1830 American Statesman Charles Sumner (1811-1874) paid little attention as an undergraduate at Harvard, but a year after graduation he became convinced that mathematics was a necessary part of a complete education. To a classmate he wrote: “Just a week ago yesterday, I commenced Walker’s Geometry, and now have got nearly half through. All those problems, theorems, etc., which were such stumbling-blocks to my Freshman-year career, unfold themselves as easily as possible now. You will sooner have thought, I suppose, that fire and water would have embraced than mathematics and myself; but, strange to tell, we are close friends now. I really get geometry with some pleasure. I usually devote four hours in the forenoon to it.” Quoted from Florian Cajori’s Mathematics in Liberal Education (1928), p. 115. *VFR (Sumner was nearly beaten to death by a South Carolina Congressional Representative after a vindictive speech attacking the Kansas-Nebraska Act and its authors. His speech included direct insults, sexual innuendo, and made fun of South Carolina Senator Andrew Butler, one of the authors, by imitating his stroke-impaired speech and mannerisms.
Butler's nephew, Preston Brooks, having decided that a duel could not take place between a gentleman (himself) and a drunken lout (Sumner), stopped by Sumner's desk to confront him and nearly beat him to death with his cane. Sumner lost the fight, but the incident put his star on the rise in the Northern states.)

In 1831, the first annual meeting of the British Association for the Advancement of Science was held in York. The British Association had been established in the same year by Sir David Brewster, R.I. Murchison and others. One of the association's main objectives was to "promote the intercourse of those who cultivate science with each other." The second annual meeting was held at Oxford (1832), and in following years at Cambridge, Edinburgh, Dublin, Bristol, Liverpool, Newcastle, Birmingham, Glasgow, Plymouth, Manchester and Cork respectively, until returning to York in 1844. It is incorporated by Royal Charter dated 21 Apr 1928. *TIS

1892 Mykhailo Pilipovich Krawtchouk (27 Sept 1892, Chovnitsy (now Kivertsi), Ukraine – 9 March 1942, Kolyma, Siberia, USSR) In 1929 Krawtchouk published his most famous work, Sur une généralisation des polynômes d'Hermite. In this paper he introduced a new system of orthogonal polynomials now known as the Krawtchouk polynomials, which are polynomials associated with the binomial distribution. However his mathematical work was very wide and, despite his early death, he was the author of around 180 articles on mathematics. He wrote papers on differential and integral equations, studying both their theory and applications. Other areas he wrote on included algebra (where among other topics he studied the theory of permutation matrices), geometry, mathematical and numerical analysis, probability theory and mathematical statistics. He was also interested in the philosophy of mathematics, the history of mathematics and mathematical education. Krawtchouk edited the first three-volume dictionary of Ukrainian mathematical terminology. *SAU

1905 E=mc²: the day that Einstein's paper outlining the significance of the equation, "Does the inertia of a body depend on its energy content?", arrived in the offices of the German journal Annalen der Physik.

1919 Einstein writes to his ailing mother that "H. A. Lorentz has just telegraphed me that the British Expeditions have definitely confirmed the deflection of light by the sun." He adds consolation on her illness and wishes her "good days", and closes with "affectionately, Albert". *Einstein Archives

In 1922, scientists at the Naval Aircraft Radio Laboratory near Washington, D.C., demonstrated that if a ship passed through a radio wave being broadcast between two stations, that ship could be detected, the essentials of radar. *TIS

1996 Kevin Mitnick, 33, was indicted on charges resulting from a 2½-year hacking spree. Police accused the hacker, who called himself "Condor," of stealing software worth millions of dollars from major computer corporations. The maximum possible sentence for his crimes was 200 years. *CHM Mitnick served five years in prison — four and a half years pre-trial and eight months in solitary confinement — because, according to Mitnick, law enforcement officials convinced a judge that he had the ability to "start a nuclear war by whistling into a pay phone". He was released on January 21, 2000. During his supervised release, which ended on January 21, 2003, he was initially forbidden to use any communications technology other than a landline telephone.
Mitnick fought this decision in court, eventually winning a ruling in his favor, allowing him to access the Internet. Under the plea deal, Mitnick was also prohibited from profiting from films or books based on his criminal activity for seven years. Mitnick now runs Mitnick Security Consulting LLC, a computer security consultancy. *Wik

2011 President Obama today named seven eminent researchers as recipients of the National Medal of Science and five inventors as recipients of the National Medal of Technology and Innovation, the highest honors bestowed by the United States government on scientists, engineers, and inventors. The recipients will receive their awards at a White House ceremony later this year. This year's recipients are listed below.

National Medal of Science
Jacqueline K. Barton, California Institute of Technology
Ralph L. Brinster, University of Pennsylvania. For his fundamental contributions to the development and use of transgenic mice. His research has provided experimental foundations and inspiration for progress in germline genetic modification in a range of species, which has generated a revolution in biology, medicine, and agriculture.
Shu Chien, University of California, San Diego. For pioneering work in cardiovascular physiology and bioengineering, which has had tremendous impact in the fields of microcirculation, blood rheology and mechanotransduction in human health and disease.
Rudolf Jaenisch, Whitehead Institute for Biomedical Research and Massachusetts Institute of Technology. For improving our understanding of epigenetic regulation of gene expression: the biological mechanisms that affect how genetic information is variably expressed. His work has led to major advances in our understanding of mammalian cloning and embryonic stem cells.
Peter J. Stang, University of Utah
Richard A. Tapia, Rice University
Srinivasa S.R. Varadhan, New York University

National Medal of Technology and Innovation
Rakesh Agrawal, Purdue University
B. Jayant Baliga, North Carolina State University
C. Donald Bateman
Yvonne C. Brill, RCA Astro Electronics (Retired)
Michael F. Tompsett. For pioneering work in materials and electronic technologies including the design and development of the first charge-coupled device (CCD) imagers.
*The White House

1677 Johann Doppelmayr was a German mathematician who wrote on astronomy, spherical trigonometry, sundials and mathematical instruments. *SAU

1719 Abraham Kästner was a German mathematician who compiled encyclopaedias and wrote text-books. He taught Gauss. His work on the parallel postulate influenced Bolyai and Lobachevsky. *SAU

1814 Daniel Kirkwood (27 Sep 1814 – 11 Jun 1895) American mathematician and astronomer who noted in about 1860 that there were several zones of low density in the minor-planet population. These gaps in the distribution of asteroid distances from the Sun are now known as Kirkwood gaps. He explained the gaps as resulting from perturbations by Jupiter. An object that revolved in one of the gaps would be disturbed regularly by the planet's gravitational pull and eventually would be moved to another orbit. Thus gaps appeared in the distribution of asteroids where the orbital period of any small body present would be a simple fraction of that of Jupiter. Kirkwood showed that a similar effect accounted for gaps in Saturn's rings. *TIS The asteroid 1951 AT was named 1578 Kirkwood in his honor and so was the lunar impact crater Kirkwood, as well as Indiana University's Kirkwood Observatory.
He is buried in the Rose Hill Cemetery in Bloomington, Indiana, where Kirkwood Avenue is named for him. *Wik

1824 Benjamin Apthorp Gould (27 Sep 1824 – 26 Nov 1896) American astronomer whose star catalogs helped fix the list of constellations of the Southern Hemisphere. Gould's early work was done in Germany, observing the motion of comets and asteroids. In 1861 he undertook the enormous task of preparing for publication the records of astronomical observations made at the US Naval Observatory since 1850. But Gould's greatest work was his mapping of the stars of the southern skies, begun in 1870. The four-year endeavor involved the use of the recently developed photometric method, and upon the publication of its results in 1879 it was received as a significant contribution to science. He was highly active in securing the establishment of the National Academy of Sciences. *TIS

1843 Gaston Tarry was a French combinatorialist whose best-known work is a method for solving mazes. *SAU

1855 Paul Appell (27 September 1855 – 24 October 1930), also known as Paul Émile Appel, was a French mathematician and Rector of the University of Paris. The concept of Appell polynomials is named after him, as is rue Paul Appell in the 14th arrondissement of Paris. *Wik

1876 Earle Raymond Hedrick (September 27, 1876 – February 3, 1943) was an American mathematician and a vice-president of the University of California. He worked on partial differential equations and on the theory of nonanalytic functions of complex variables. He also did work in applied mathematics, in particular on a generalization of Hooke's law and on transmission of heat in steam boilers. With Oliver Dimon Kellogg he authored a text on the applications of calculus to mechanics. *Wik

1918 Sir Martin Ryle (27 Sep 1918 – 14 Oct 1984) British radio astronomer who developed revolutionary radio telescope systems and used them for accurate location of weak radio sources. Ryle helped develop radar for British defense during WW II. Afterward, he was a leader in the development of radio astronomy. With his aperture synthesis technique of interferometry he and his team located radio-emitting regions on the sun and pinpointed other radio sources so that they could be studied in visible light. Ryle's 1C - 5C Cambridge catalogues of radio sources led to the discovery of numerous radio galaxies and quasars. Using this technique, eventually radio astronomers surpassed optical astronomers in angular resolution. He observed the most distant known galaxies of the universe. For his aperture synthesis technique, Ryle shared the Nobel Prize for Physics in 1974 (with Antony Hewish), the first in recognition of astronomical research. He was the 12th Astronomer Royal (1972-82). *TIS

1919 James Hardy Wilkinson (27 Sep 1919 – 5 Oct 1986) English mathematician and pioneer of numerical analysis. He received the Turing Award in 1970 "for his research in numerical analysis to facilitate the use of the high-speed digital computer, having received special recognition for his work in computations in linear algebra and 'backward' error analysis." In the same year, he also gave the John von Neumann Lecture at the Society for Industrial and Applied Mathematics. The J. H. Wilkinson Prize for Numerical Software is named in his honour. *Wik

1783 Étienne Bézout was a French mathematician who is best known for his theorem on the number of solutions of polynomial equations. *SAU

1997 William Edge graduated from Cambridge and lectured at Edinburgh University. He wrote many papers in Geometry. He became President of the EMS in 1944 and an honorary member in 1983.
*SAU Bézout's theorem for polynomials states that if P and Q are two polynomials with no roots in common, then there exist two other polynomials A and B such that AP+BQ=1. *Wik

Credits: *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC=NASA Solar Eclipse Calendar *RMAT=The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA=Today in Astronomy *TIS=Today in Science History *VFR=V Frederick Rickey, USMA *Wik=Wikipedia *WM=Women of Mathematics, Grinstein & Campbell

Thursday, 26 September 2013

On This Day in Math - September 26

"mathematics is not yet ready for such problems" ~Paul Erdos in reference to Collatz's problem

This is the 269th day of the year; the date is written 26/9 in much of Europe. This is the only day of the year which presents itself in this way. (Are there any days that work using month/day?)

269 is prime and is a regular prime, an Eisenstein prime with no imaginary part, a long prime, a Chen prime, a Pillai prime, a Pythagorean prime, a twin prime, a sexy prime, a Higgs prime, a strong prime, and a highly cototient number. So many new terms to look up... Well? Look them up.

1679 On September 26, 1679, a fierce fire consumed the Stellaburgum — Europe's finest observatory, built by the pioneering astronomer Johannes Hevelius in the city of Danzig, present-day Poland, decades before the famous Royal Greenwich Observatory and Paris Observatory existed. *Maria Popova

1874 James Clerk Maxwell in a letter to Professor Lewis Campbell describes Galton: "Francis Galton, whose mission it seems to be to ride other men's hobbies to death, has invented the felicitous expression 'structureless germs'." *Lewis Campbell and William Garnett (eds.), The Life of James Clerk Maxwell (1884), 299.

1991 The first two-year closed mission of Biosphere 2 began just outside Tucson, Arizona. *David Dickinson @Astroguyz

2011 Astronauts had this view of the aurora on September 26, 2011. Credit: NASA. We've had some great views of the aurora submitted by readers this week, but this one taken from the International Space Station especially highlights the red color seen by many Earth-bound skywatchers, too. Karen Fox from the Goddard Space Flight Center says the colors of the aurora depend on which atoms are being excited by the solar storm. In most cases, the light comes when a charged particle sweeps in from the solar wind and collides with an oxygen atom in Earth's atmosphere. This produces a green photon, so most aurora appear green. However, lower-energy oxygen collisions as well as collisions with nitrogen atoms can produce red photons — so sometimes aurora also show a red band as seen here. *Universe Today

1754 Joseph-Louis Proust (26 Sep 1754 – 5 Jul 1826) French chemist who proved (1808) that the relative quantities of any given pure chemical compound's constituent elements remain invariant, regardless of the compound's source, and thus provided crucial evidence in support of John Dalton's "law of definite proportions," which holds that elements in any compound are present in fixed proportion to each other. *TIS

1784 Christopher Hansteen (26 Sep 1784 – 15 Apr 1873) Norwegian astronomer and physicist noted for his research in geomagnetism. In 1701 Halley had already published a map of magnetic declinations, and the subject was studied by Humboldt, de Borda, and Gay-Lussac, among others.
Hansteen collected available data and also mounted an expedition to Siberia, where he took many measurements for an atlas of magnetic strength and declination. *TIS

1854 Percy Alexander MacMahon (26 Sept 1854 – 25 Dec 1929) His study of symmetric functions led MacMahon to study partitions and Latin squares, and for many years he was considered the leading worker in this area. He published values of the number of unrestricted partitions of the first 200 integers, which proved extremely useful to Hardy and Littlewood in their own work on partitions. He gave a Presidential Address to the London Mathematical Society on combinatorial analysis in 1894. MacMahon wrote a two-volume treatise Combinatory Analysis (volume one in 1915 and the second volume in the following year) which has become a classic. He wrote An Introduction to Combinatory Analysis in 1920. In 1921 he wrote New Mathematical Pastimes, a book on mathematical recreations. *SAU

1887 Sir Barnes (Neville) Wallis (26 Sep 1887 – 30 Oct 1979) was an English aeronautical designer and military engineer whose famous 9000-lb bouncing "dambuster" bombs of WW II destroyed the German Möhne and Eder dams on 16 May 1943. He designed the R100 airship, and the Vickers Wellesley and Wellington bombers. The specially-formed RAF 617 Squadron precisely delivered his innovative cylindrical bombs, which were released from low altitude, rotating backwards at high speed, so that they skipped along the surface of the water right up to the base of the dam. He later designed the 5-ton Tallboy and 10-ton Grand Slam earthquake bombs (which were used on many enemy targets in the later years of the war). Postwar, he developed ideas for swing-wing aircraft. *TIS (The story of his courtship of his wife has been told by his daughter, Mary Stopes-Roe, drawing on the actual courtship correspondence, in the entertaining but perhaps overpriced book Mathematics With Love: The Courtship Correspondence of Barnes Wallis, Inventor of the Bouncing Bomb.)

1891 Hans Reichenbach (September 26, 1891 – April 9, 1953) was a leading philosopher of science, educator and proponent of logical empiricism. Reichenbach is best known for founding the Berlin Circle, and as the author of The Rise of Scientific Philosophy. *Wik

1924 Jean Hoerni, a pioneer of the transistor, is born in Switzerland. A physicist, Hoerni in 1959 invented the planar process, which, combined with Robert Noyce's technique for placing a layer of silicon dioxide on a transistor, led to the creation of the modern integrated circuit. Hoerni's planar process allowed the placement of complex electronic circuits on a single chip. *CHM

1926 Colin Brian Haselgrove (26 September 1926 – 27 May 1964) was an English mathematician who is best known for his disproof of the Pólya conjecture in 1958. The Pólya conjecture stated that 'most' (i.e. more than 50%) of the natural numbers less than any given number have an odd number of prime factors. The conjecture was posited by the Hungarian mathematician George Pólya in 1919. The size of the smallest counter-example is often used to show how a conjecture can be true for many numbers, and still be false. *Wik

1927 Brian Griffiths (26 Sept 1927 – 4 June 2008) He was deeply involved in the 'School Mathematics Project', he served as chairman of the 'Joint Mathematical Council', and chaired the steering group for the 'Low Attainers Mathematics Project' from 1983 to 1986. This project became the 'Raising Achievement in Mathematics Project' in 1986 and he chaired this from its foundation to 1989.
*SAU

1766 Giulio Carlo Fagnano dei Toschi died. He is important for the identity π = 2i·log((1 − i)/(1 + i)) and for his rectification of the lemniscate. *VFR An Italian mathematician who worked in both complex numbers and on the geometry of triangles. *SAU

The lemniscate is of particular interest because, even if it has little relevance today, it was the catalyst for immeasurably important mathematical development in the 18th and 19th centuries. The figure-8-shaped curve first entered the minds of mathematicians in 1680, when Giovanni Cassini presented his work on the curves now appropriately known as the ovals of Cassini. Only 14 years later, while deriving the arc length of the lemniscate, Jacob Bernoulli became the first mathematician in history to define arc length in terms of polar coordinates. The first major result of work on the lemniscate came in 1753 when, after reading Giulio Carlo di Fagnano's papers on dividing the lemniscate using straightedge and compass, Leonhard Euler proved an addition theorem for the lemniscatic integral. Jacobi called December 23, 1751 "the birthday of elliptic functions", as this was the day that Euler began reviewing the papers of Fagnano, who was being considered for membership in the Berlin Academy. *Raymond Ayoub, The lemniscate and Fagnano's contributions to elliptic integrals

1775 John Adams writes to his wife to entreat her to teach his children geometry: "I have seen the Utility of Geometry, Geography, and the Art of drawing so much of late, that I must intreat you, my dear, to teach the Elements of those Sciences to my little Girl and Boys. It is as pretty an Amusement, as Dancing or Skaiting, or Fencing, after they have once acquired a Taste for them. No doubt you are well qualified for a school Mistress in these Studies, for Stephen Collins tells me the English Gentleman, in Company with him, when he visited Braintree, pronounced you the most accomplished Lady, he had seen since he left England.—You see a Quaker can flatter, but dont you be proud." *Natl. Archives

1802 Jurij Vega (23 Mar 1754 – 26 Sept 1802) wrote about artillery but he is best remembered for his tables of logarithms and trigonometric functions. Vega calculated π to 140 places, a record which stood for over 50 years. This appears in a paper which he published in 1789. In September 1802 Jurij Vega was reported missing. A search was unsuccessful until his body was found in the Danube near Vienna. The official cause of death was an accident, but many suspect that he was murdered. *SAU

1867 James Ferguson (31 Aug 1797 – 26 Sep 1867) Scottish-American astronomer who discovered the first previously unknown asteroid to be detected from North America. He recorded it on 1 Sep 1854 at the U.S. Naval Observatory, where he worked 1848-67. This was the thirty-first of the series and is now known as 31 Euphrosyne, named after one of the Charites in Greek mythology. It is one of the largest of the main belt asteroids, between Mars and Jupiter. He was involved in some of the earliest work in micrometry, done at the old U.S. Naval Observatory at Foggy Bottom in the midst of the Civil War using a 9.6-inch refractor. He also contributed to double star astronomy. Earlier in his life he was a civil engineer, member of the Northwest Boundary Survey, and an assistant in the U.S. Coast Survey. *TIS

1868 August Ferdinand Möbius died. He discovered his famous strip in September 1858.
Johann Benedict Listing discovered the same surface two months earlier. *VFR (It is somewhat amazing that we call it after Möbius when Listing discovered it first and published, while Möbius, it seems, did not. However, Möbius does seem to have thought about the four color theorem before Guthrie, or anyone else to my knowledge.)

1877 Hermann Günther Grassmann (15 Apr 1809 – 26 Sep 1877) German mathematician chiefly remembered for his development of a general calculus of vectors in Die lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (1844; "The Theory of Linear Extension, a New Branch of Mathematics"). *TIS

1910 Thorvald Nicolai Thiele (24 Dec 1838 – 26 Sept 1910) He is remembered for having an interpolation formula named after him, the formula being used to obtain a rational function which agrees with a given function at any number of given points. He published this in 1909 in his book which made a major contribution to numerical analysis. He introduced cumulants (under the name of "half-invariants") in 1889, 1897, 1899, about 30 years before their rediscovery and exploitation by R A Fisher. *SAU

1976 Paul (Pál) Turán (18 August 1910 – 26 September 1976) was a Hungarian mathematician who worked primarily in number theory. He had a long collaboration with fellow Hungarian mathematician Paul Erdős, lasting 46 years and resulting in 28 joint papers. *SAU

1978 Karl Manne Georg Siegbahn (3 Dec 1886 – 26 Sep 1978) Swedish physicist who was awarded the Nobel Prize for Physics in 1924 for his discoveries and investigations in X-ray spectroscopy. In 1914 he began his studies in the new science of x-ray spectroscopy, which had already established from x-ray spectra that there were two distinct 'shells' of electrons within atoms, each giving rise to groups of spectral lines, labeled 'K' and 'L'. In 1916, Siegbahn discovered a third, or 'M', series. (More were to be found later in heavier elements.) Refining his x-ray equipment and technique, he was able to significantly increase the accuracy of his determinations of spectral lines. This allowed him to make corrections to Bragg's equation for x-ray diffraction to allow for the finer details of crystal diffraction. *TIS

1990 Lothar Collatz (July 6, 1910 – September 26, 1990) was a German mathematician. In 1937 he posed the famous Collatz conjecture, which remains unsolved. The Collatz-Wielandt formula for positive matrices, important in the Perron–Frobenius theorem, is named after him. *Wik

The Collatz conjecture is an iteration problem that deals with the following function: if a number n is odd, then f(n) = 3n + 1; if n is even, then f(n) = n/2. Each answer then becomes the new value to input into the function. The problem, or should I say problems, revolve around what happens to the sequence of outcomes when we keep putting the answer back into the function. For example, if we begin with 15 we get the following sequence, also called the orbit of the number: 15, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1. One of the unproven conjectures is that for any number n, the sequence will always end in the number 1. This has been shown to be true for all numbers up to just beyond 10^16. A second interesting question is how long it takes for a number to return to the value of 1. For the example above, the number 15 took 17 steps to get back to the unit value. Questions arise such as: which three- (or other n-) digit number has the longest orbit? (A short sketch follows below.)
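A few lines of Python (my sketch, not from the original post) reproduce the orbit of 15 and answer the three-digit question just raised:

```python
# The Collatz iteration described above: f(n) = 3n + 1 for odd n, n/2 for even n.
def collatz_orbit(n):
    orbit = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        orbit.append(n)
    return orbit

print(collatz_orbit(15))   # 15, 46, 23, 70, ..., 16, 8, 4, 2, 1 -- 17 steps
# Which three-digit starting value has the longest orbit?
print(max(range(100, 1000), key=lambda n: len(collatz_orbit(n))))  # 871
```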
There are many variations of the problem, but if you are interested in a good introduction, check this link from Simon Fraser University. Collatz's Problem is often also called the Syracuse Algorithm, Hasse's problem, Thwaite's problem, and Ulam's problem after people who have worked and written on the problem. It is unclear where the problem originated, as it seems to have had a long history of being passed by word of mouth before it was ever written down. It is often attributed to Lothar Collatz from the University of Hamburg, who wrote about the problem as early as 1932. The name "Syracuse Problem" was applied after H. Hasse, an associate of Collatz, visited and discussed the problem at Syracuse University in the 1950's. During the 1960's Stan Ulam circulated the problem at Los Alamos laboratory. One famous quote about the problem is from Paul Erdos, who stated, "mathematics is not yet ready for such problems". *Personal notes

Credits: *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC=NASA Solar Eclipse Calendar *RMAT=The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA=Today in Astronomy *TIS=Today in Science History *VFR=V Frederick Rickey, USMA *Wik=Wikipedia *WM=Women of Mathematics, Grinstein & Campbell
Why Vacations Are Essential For Physics
By Avishai Bitton, Semesterz

Vacations are good for your health: they allow you to get away from the daily grind and let yourself unwind. They are vital in enabling you to recharge your batteries and get your psyche away from work, work, work. While they allow us to forget about the office for a bit, they can also help to stimulate us to create new and innovative ideas. Such an event occurred for a young German physicist struggling to make the breakthrough he was so very close to in 1925. Werner Heisenberg needed a break.

Something's got to give

He had experienced a mental block and, to make matters worse, he was suffering from a horrendous bout of hayfever. Heisenberg resided in Göttingen, and during one summer he was tortured by persistent allergic reactions, so something had to change. He went on vacation to Helgoland, a tiny island in the middle of the North Sea, to give his sinuses a rest more than anything else.

A eureka moment

Changing location really helped him, as the change of scenery allowed him to breathe and think more clearly. Finding inspiration for his research, he realized he had been working with quantities he could not measure, so he reformulated the mathematics in terms of quantities he could. Upon his return to Göttingen, his research partner managed to connect the dots, and the German research team took the tentative first steps into what is now known as modern quantum mechanics.

Hey! I'm tired too

The strategic taking of a vacation was repeated, with an equal measure of success, in the same year, 1925. Erwin Schrödinger was working on his own quantum problem, regarding states within atoms. Desperately trying to make a breakthrough in his equations, he kept finding himself confronted with mathematical hurdles. After months of working and not getting anywhere fast, Schrödinger took a skiing vacation with one of his lady friends. This bout of rest and recreation was just the ticket, and, after hitting the slopes during the day and working at a desk in the evenings, he had found the equation he was so desperately looking for. That equation is now known as the Schrödinger equation, and it describes the states of the electron in hydrogen in terms of de Broglie's electron waves.

Mental fatigue is not your friend

Clearly, physicists have demanding jobs, and affording themselves a break every so often enables them to refresh and reboot, something which can boost the creative process. Instead of slogging away for 12 hours a day in a lab, it is useful to recognise when you aren't getting anywhere and to come back a day or two later with a fresher pair of eyes.

Please, boss, it'll help you too

While not every physicist will figure out such groundbreaking theories, the examples of the two scientists above show that even the most brilliant mind needs time to stop working and chill out. Often bosses want more and more from their employees and think that going at it for hours upon hours will get the job done; sometimes you need to take a step, or two, backward to move forward. So if you're stuck on a problem in your work, see if your boss will give you a little break; it might prove to be the best solution to your problem and theirs.
3 The Schrödinger equation

If the electron in an atom of hydrogen is a standing wave, as de Broglie had assumed, why should it be confined to a circle? After the insight that particles can behave like waves, which came ten years after Bohr's quantization postulate, it took less than three years for the full-fledged (albeit still non-relativistic) quantum theory to be formulated, not once but twice in different mathematical attire, by Werner Heisenberg in 1925 and by Erwin Schrödinger in 1926. Let's take a look at where the Schrödinger equation, the centerpiece of non-relativistic quantum mechanics, comes from.

Figure 2.3.1 illustrates the properties of a traveling wave ψ of amplitude A and phase φ = kx − ωt. The wavenumber k is defined as 2π/λ; the angular frequency ω is given by 2π/T. Hence we can also write φ = 2π[(x/λ) − (t/T)]. Keeping t constant, we see that a full cycle (2π, corresponding to 360°) is completed if x increases from 0 to the wavelength λ. Keeping x constant, we see that a full cycle is completed if t increases from 0 to the period T. (The reason why 2π corresponds to 360° is that it is the circumference of a circle of unit radius.)

Figure 2.3.1 The slanted lines represent the alternating crests and troughs of ψ. The passing of time is indicated by the upward-moving dotted line, which represents the temporal present. It is readily seen that the crests and troughs move toward the right. By focusing on a fixed time, one can see that a cycle (crest to crest, say) completes after a distance λ. By focusing on a fixed place, one can see that a cycle completes after a time T.

The mathematically simplest and most elegant way to describe ψ is to write ψ = [A:φ] = [A:kx − ωt]. This is a complex number of magnitude A and phase φ. It is also a function ψ(x,t) of one spatial dimension (x) and time t.

We now introduce the operators ∂x and ∂t. While a function is a machine that accepts a number (or several numbers) and returns a (generally different) number (or set of numbers), an operator is a machine that accepts a function and returns a (generally different) function. All we need to know about these operators at this point is that if we insert ψ into ∂x, out pops ikψ, and if we insert ψ into ∂t, out pops −iωψ:

∂xψ = ikψ,     ∂tψ = −iωψ.

If we feed ikψ back into ∂x, out pops (not unexpectedly) (ik)²ψ = −k²ψ. Thus (∂x)²ψ = −k²ψ. Using Planck's relation E = ℏω and de Broglie's relation p = h/λ = ℏk to replace ω and k by E and p, we obtain

∂tψ = −i(E/ℏ)ψ,     ∂xψ = i(p/ℏ)ψ,     (∂x)²ψ = −(p/ℏ)²ψ,

(2.3.1)   Eψ = iℏ∂tψ,     pψ = (ℏ/i)∂xψ,     p²ψ = −ℏ²(∂x)²ψ.

We now invoke the classical, non-relativistic relation between the energy E and the momentum p of a freely moving particle,

(2.3.2)   E = p²/2m,

where m is the particle's mass. We shall discover the origin of this relation when taking on the relativistic theory. The right-hand side is the particle's kinetic energy. Multiplying Eq. (2.3.2) by ψ and using Eqs. (2.3.1), we get

(2.3.3)   iℏ∂tψ = −(ℏ²/2m) (∂x)²ψ.

This is the Schrödinger equation for a freely moving particle with one degree of freedom — a particle capable of moving freely up and down the x-axis. We shouldn't be surprised to find that Eq. (2.3.3) imposes the following constraint on ψ:

(2.3.4)   ω = ℏk²/2m.

This is nothing else than Eq. (2.3.2) with E and p replaced by ω and k according to the relations of Planck and de Broglie. We have started with a specific wave function ψ. What does the general solution of Eq. (2.3.3) look like?
The question is readily answered by taking the following into account: If ψ1 and ψ2 are solutions of Eq. (2.3.3), then for any pair of complex numbers a, b the function ψ = aψ1 + bψ2 is another solution. The general solution, accordingly, is

(2.3.5)   ψ(x,t) = (1/√(2π)) ∫dk [a(k):kx − ω(k)t].

The factor (1/√(2π)) ensures that the probabilities calculated with the help of ψ are normalized (that is, the probabilities of all possible outcomes of any given measurement add up to 1). The symbol ∫dk indicates a summation over all values of k from k = −∞ to k = +∞: every value contributes a complex number a(k)[1:kx − ω(k)t], where ω(k) is given by Eq. (2.3.4).

If the particle is moving under the influence of a potential V, the potential energy qV (q being the particle's charge) needs to be added to the kinetic energy (the right-hand side of Eq. 2.3.2). The Schrödinger equation then takes the form

(2.3.6)   iℏ∂tψ = −(ℏ²/2m) (∂x)²ψ + qVψ.

Its generalization to three-dimensional space is now straightforward:

(2.3.7)   iℏ∂tψ = −(ℏ²/2m) [(∂x)² + (∂y)² + (∂z)²]ψ + qVψ.
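Equations (2.3.3)-(2.3.5) are easy to check numerically. The following is my own illustration, not part of the text above: it sets ℏ = m = 1 (my choice of units), expands a made-up Gaussian wave packet in plane waves with an FFT, advances each mode by the phase factor corresponding to the dispersion relation (2.3.4), and transforms back.

```python
# Sketch: free-particle evolution per Eqs. (2.3.3)-(2.3.5), with hbar = m = 1.
import numpy as np

hbar = m = 1.0
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers

psi0 = np.exp(-x**2 / 4) * np.exp(1j * 2.0 * x)   # Gaussian packet, mean k = 2
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))     # normalize to total probability 1

def evolve(psi, t):
    a = np.fft.fft(psi)                           # plane-wave amplitudes a(k)
    omega = hbar * k**2 / (2 * m)                 # dispersion relation, Eq. (2.3.4)
    return np.fft.ifft(a * np.exp(-1j * omega * t))

psi = evolve(psi0, t=5.0)
print(np.trapz(np.abs(psi)**2, x))                # still 1: the norm is conserved
print(np.trapz(x * np.abs(psi)**2, x).real)       # packet center: ~ (k0/m) * t = 10
```

The packet's center moves at the group velocity ℏk0/m while the packet spreads, which is exactly the behavior the superposition (2.3.5) encodes.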
Wednesday, August 31, 2016 Expanded Reproduction in an Abstract Capitalist Society In the previous post, 'Simple Reproduction in an Abstract Capitalist Society', we looked at a basic model of how a capitalist society engaged in generalised commodity production can reproduce itself - but without growing. If you didn't read that post, now would be a good time - we use it below. Unlike previous modes of production which were mainly focused on the production of use values, capitalism as practiced by capitalists is motivated purely by the search for surplus value (i.e. growth in capital). The production of specific use values is a matter of indifference as long as the commodities concerned can be sold in the market, thereby releasing their monetary value. Capitalists will not invest unless they think they can grow their capital, so a properly-functioning capitalist society is a growing one. This has posed a problem for some Marxist economists. How, they argue, can capitalism grow when the workers are paid only a portion (v out of v+s) of the value they produce, and the capitalists - although they live well - need to keep most of their capital gains for further investment? This has been termed the 'underconsumption theory of capitalist crises' and was the subject of a historical dispute between Nikolai Bukharin and Rosa Luxemburg in the 1920s. Luxemburg thought that capitalism could only grow (via realising the value of an increasing mass of commodities) through vigorous expansion into new markets, and that this explained 'imperialism'. Bukharin put her right. So here is Bukharin's model in spreadsheet form - click on image to make larger.. As before we have Department 1, making machines and raw materials, and Department 2, making consumables to keep workers and capitalists alive for another day's toil. We split the surplus value created by workers into three categories: that proportion consumed unproductively by the capitalists, (a); that proportion which is capital re-invested in machines, (δc); and that proportion invested in increased labour (δv). All of the variables here measure capital value, so that δv is increased capital allocated to wages. This could be more workers to use extra machines or raw materials, or more highly-paid (more highly-skilled and productive) workers to use more sophisticated machines. The constraint between Departments 1 and 2 to ensure that reproduction can occur is a simple generalisation of the previous case: c2 = v1+a1 and δc2 = δv1. This equates the constant capital in Department 2 with the payments to workers and capitalists in Department 1 through the endless cycles of capitalist reproduction of the relations of production. The model is very, very simple. It is assumed that the capitalists don't increase their consumption iteration-on-iteration .. though they probably would. Also, the incremental growth of constant and variable capital is held constant, although it would probably be increasing geometrically. These details don't invalidate the 'in principle' character of the model.   Bukharin comments: "In other words, the following grow: • the constant capital of society,  • the consumption of the workers,  • the consumption of the capitalists (everything taken in values).  "In this connexion we will not make any further analysis of the relation in which this growth of the various above-listed values proceeds. This question needs to be treated separately. 
"Here we must mention, even if only briefly, the following circumstances: along with the growth of production, the market of this production grows too, the market of means of production expands, and the consumer demand grows also (since, taken in absolute terms, the capitalists' consumption grows as well as that of the workers). "In other words, here the possibility is given of, on the one hand, an equilibrium between the various parts of the total social production and, on the other, an equilibrium between production and consumption. "In this process the equilibrium between production and consumption is for its part conditioned by the production equilibrium, i.e. the equilibrium between the various parts of the functioning capital and its various branches. "In the above analysis we neglect at first a series of highly important, specifically capitalist moments, e.g. money-circulation. "This resulted in a series of the most serious mistakes, it resulted further in the denial of the existence of contradictions within capitalism, finally a direct apology for the capitalist system, an apology which attempts – to use a Marxist word – to ‘reason away' the crises, the over-production, the mass misery and so on. ‘It must never be forgotten, that in capitalist production what matters is not the immediate use value but the exchange value, and in particular, the expansion of the surplus value.' Here, Bukharin is writing as a typical soviet Bolshevik, echoing Marx's extrapolations of the inevitable fate of capitalism. Reality was to turn out very differently, to the point where it is a genuine and profound question of Marxist analysis as to whether capitalism is indeed subject to structural crises (not just regular business cycles) which could catalyse a revolutionary dynamic towards a higher mode of production. The Marxist theory of crises will be examined here later (cf. Simon Clarke's book, "Marx's Theory of Crisis", available as a Word document here). Tuesday, August 30, 2016 Simple Reproduction in an Abstract Capitalist Society Link to online PDF Capitalism is characterised within the Marxist tradition as generalised commodity production; in Marx’s view, a correct understanding of the commodity encapsulates its fundamentals. Key is the concept of labour power and surplus value. In the following extract from Michael Heinrich’s “An Introduction to the Three Volumes of Karl Marx’s Capital” (Chapter 5, The Capitalist Process of Production), the term ‘means of production’ relates to machinery and raw materials. “With regard to the value of the newly produced commodities, the means of production and labour-power play completely different roles. “The value of the means of production consumed in the creation of a commodity constitutes part of the value of the newly produced commodity. If means of production are completely used up in the process of production, then the value of these means of production is completely transferred to the newly produced mass of commodities. “But if means of production such as tools or machines are not completely used up, then only a part of their value is transferred. If for example a particular machine has a life span of ten years, then one-tenth of its value is transferred to the mass of commodities produced within a year.  The portion of capital laid out in means of production will, under normal conditions, not change value during the production process, but a portion of its value will constitute a portion of the value of the commodities produced. 
“Marx calls this portion of capital constant capital, or c for short.

“Things are different with labour-power. The value of labour-power is not all transferred to the commodities produced. The value newly generated by the “consumption” of labour-power, that is, by labour expenditure, is what is transferred to the value of the newly created commodities.

“How much value the worker adds to the product of labour does not depend upon the value of labour-power, but upon the extent to which the labour expended counts as value-creating, abstract labour. The difference between the newly added value and the value of labour-power is the surplus value, or s.

“Or to put it differently, the newly added value is equal to the sum of the value of labour-power and surplus value. Marx calls the portion of capital used to pay wages variable capital, or v for short. This portion of capital changes value during the production process; the workers are paid with v, but produce new value in the amount of v + s.

“The value of a mass of commodities produced within a specific period of time (a day or even a year) can therefore be expressed as: c + v + s. Here c indicates the value of the constant capital consumed, that is, the value of the raw materials and the proportionate share of the value of tools and machines, insofar as they are used.

“The valorisation of capital results solely from its variable component. The level of valorisation can therefore be measured by relating the surplus value to the variable capital: Marx calls the quantity s/v the rate of surplus value. It is simultaneously also the measure of the exploitation of labour-power.

“The rate of surplus value is usually given as a percentage. For example, if s = 40 and v = 40, then one does not speak of a rate of surplus value of 1, but rather of a rate of surplus value of 100 percent. If s = 20 and v = 40, then the rate of surplus value amounts to 50 percent.”

An exercise in Marxist economics is to show how capitalism can reproduce itself. In the most basic case, we look at an idealised steady-state situation, where capitalists appropriate surplus value and consume it without re-investment. Expanded reproduction will be modelled in the next post.

The economy is divided into two departments: Department 1 is the sector which creates means of production (machines and/or raw materials); this department provides and reproduces the ‘c’ in commodity value. Department 2 produces means of consumption: food, shelter and all the other necessities for the survival and continuing existence of the workers and capitalists. It underpins the ‘v + s’ in commodity value.

For simple reproduction to occur, the following relation must hold*: c2 = v1 + s1.

This says that the value of the constant capital in Department 2 (means of consumption) must be equal to the variable + surplus value in Department 1. All other levels of capital may be chosen freely to reflect the size of the economy, the amount of constant capital and labour-power employed and the degree of exploitation**. The economy will turn over and reproduce itself provided the above relationship holds.

Here is an example spreadsheet, followed through 9 iterations. As you can see, it never changes, and equivalent values (Exch) are exchanged between Department 1 and Department 2.
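The spreadsheet's iteration is a few lines of code. Below is a minimal sketch (mine, not from the post), using the department totals from the example that follows: c1, v1, s1 = 3, 5, 4 and c2 = 9; the split v2, s2 = 7, 6 is my assumption, since only v2 + s2 = 13 is given.

```python
# Simple reproduction: Department 1 (means of production), Department 2
# (means of consumption). Condition: c2 == v1 + s1.
c1, v1, s1 = 3, 5, 4     # Dept 1 output value: 12
c2, v2, s2 = 9, 7, 6     # Dept 2 output value: 22 (v2, s2 split assumed)

assert c2 == v1 + s1     # the simple-reproduction condition

for cycle in range(1, 10):
    out1 = c1 + v1 + s1  # machines/raw materials produced this cycle
    out2 = c2 + v2 + s2  # consumables produced this cycle
    exch = v1 + s1       # value Dept 1 sells to Dept 2 and buys back: 9
    print(cycle, out1, out2, exch)
# Every cycle prints the same row: the economy reproduces without growth.
```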
Department 1 has to buy means of consumption for its own workers and capitalists (v1 + s1) from Department 2 (it makes its own constant capital c1); Department 2 has to buy its constant capital c2 from Department 1, but can produce the necessaries of life for its own workers and capitalists (v2 + s2) itself.

Examine the first row of the spreadsheet above. Department 1 (creator of means of production) creates a value of 12 (in some units) in cycle one. This will purchase the next round of machines and raw materials in the second cycle: 3 units of value available for Department 1 and 9 for Department 2 (the creator of means of consumption). Department 2 creates 22 units of value which supply workers and capitalists: 9 units required for Department 1 and 13 units for Department 2.

The column 'Exch' indicates the values exchanged between Departments 1 and 2. Department 1 'exports' a value of 9 to Department 2 to maintain/'depreciate' its machinery and raw materials; Department 2 'exports' a value of 9 also, to keep the workers and capitalists in Department 1 in shape (5 + 4). In a certain sense, Department 1 'exports' its surplus machine + raw material value while Department 2 'exports' its surplus necessaries-of-living value. The two have to match in the market place, where they share the common value of 9.

What counts is that the value created in Department 1 over and above what's needed to reproduce its own constant capital (c1 + v1 + s1 - c1 = v1 + s1) is exactly the same as the constant capital recurrently employed in Department 2 (c2). When that is the case, Department 1 finds a market for its output in Department 2 and can therefore afford to buy its necessaries of life from Department 2 (v1 + s1). Department 2 will employ sufficient workers, at a cost of v2, to properly employ the means of production (of value c2).

The case for expanded reproduction follows exactly the same procedure. This proves that capitalist equilibrium (at least in this ever-so-simple model) is possible in principle; in reality capitalists make independent and non-centralised decisions, so coordination cannot be as exact as in the spreadsheet. This will eventually lead us into a theory of crises.

Things get a little more complex and interesting when we consider expanded reproduction, the typical case of a capitalist economy in growth. The subject of the next post: "Expanded Reproduction in an Abstract Capitalist Society".

* See 'Imperialism and the Accumulation of Capital', Bukharin 1925, for more details.

** 'Exploitation' is a pejorative word but should here be understood analytically. In any form of society which is economically growing, workers will receive less to spend than the value of their work-production. Otherwise, where is the infrastructure of civilisation to come from? In the case of capitalism, that 'surplus value' is appropriated privately by the capitalist. In feudalism it was appropriated mainly by the aristocracy, and in slave societies it was directly, coercively owned by the slave's master. In any society where humans work to society's benefit, there will be a social surplus product ... but it may not take the form of surplus value if labour is not commodified. Who knows whether that will ever come about?

Thursday, August 25, 2016

Proxima Centauri b

I found out about it yesterday, from Paul Gilster's blog, Centauri Dreams.
It had all the right information. The key question: observe it better with giant space telescopes (maybe a new push for the FOCAL mission using the sun as a gravitational lens) .. or send a probe? Might a country (the US or China, say?) embark upon a high-prestige, multi-decade-long programme to send such a mission? It encounters that old starflight paradox: the later launches - so much more technologically advanced - overtake the first ones.

I think it's possible that the youngest children on Earth might live to see close-in imaging of this planet .. and/or a mission that we might have figured out by then how to slow down.

Related: this sad little tale from Alastair Reynolds.

"Help Eliza, I'm in trouble!"

I'm something of a subscriber to the view: 'AI's the solution .. so what's the problem?' The problem under consideration today is that of child abuse, mentioned in this post about Internet paedophiles yesterday, and prominent in continuing revelations about abuse at Ampleforth College.

[Wikipedia: "Ampleforth College is a coeducational independent day and boarding school in the village of Ampleforth, North Yorkshire, England. It opened in 1802 as a boys' school, and is run by the Benedictine monks and lay staff of Ampleforth Abbey."]

Let me remind you about Eliza, the original chatbot developed by Joseph Weizenbaum.

"ELIZA worked by simple parsing and substitution of key words into canned phrases. Depending upon the initial entries by the user, the illusion of a human writer could be instantly dispelled, or could continue through several interchanges.

"It was sometimes so convincing that there are many anecdotes about people becoming very emotionally caught up in dealing with [ELIZA] for several minutes until the machine's true lack of understanding became apparent.

"Weizenbaum's own secretary reportedly asked him to leave the room so that she and ELIZA could have a real conversation.

"As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

Eliza works by matching text input against a large database of templates. Each input template is linked to one or more possible output templates, with variables which can be instantiated to the substantive words from the input. Eliza might, for example, transform 'I am unhappy' into 'How long have you been unhappy?' (There's a toy sketch of this template matching at the end of this post.) In addition to crafting a reply, Eliza could easily have updated a user-database with the information it was receiving.

It's easy to see how this could be applied to helping victims of child abuse. A key design principle is that the abuser must not become aware that the child is passing on information: this rules out a tailored 'abuse app'. I suggest a special WhatsApp-connected chatbot with a widely publicised name - let's say Help!.

The child contacts Help! on WhatsApp and the first thing he or she is asked to do is choose a name, say Peter, which is what will appear (instead of Help!) on their WhatsApp contacts list. I think the history of chats with Peter is going to have to vanish too, replaced with harmless confected froth.

The child is typing to an Eliza-like chatbot (maybe more like IBM's Watson than Eliza) which has been trained on scripts from charities like Childline. As Weizenbaum discovered, people of all ages are especially likely to confide in an AI agent. The database which Help! constructs is a transcript of alleged abuse. The real problem is what to do with it.
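Here, as promised, is a toy sketch of that template matching in Python. The rules are invented purely for illustration - nothing like Weizenbaum's full script, let alone a Childline-trained Help!:

```python
import re

# Toy Eliza: match input against patterns, substitute captured words
# into canned reply templates. These rules are illustrative only.
RULES = [
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(?:he|she) (\w+) me", "Why do you think they {0} you?"),
    (r".*\bafraid\b.*", "What are you most afraid of?"),
]

def respond(text):
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # default when no template matches

# A reply is crafted; the captured words could equally be logged to
# the user-database mentioned above.
print(respond("I am unhappy"))  # -> How long have you been unhappy?
print(respond("He hit me."))    # -> Why do you think they hit you?
```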
No doubt that database is encrypted and identity-protected, but at some point someone has to assess whether this is a real or false allegation, and figure out how to proceed. But these are problems charities already have to deal with. I think they should move ahead with the app. There's already one for carers.

Wednesday, August 24, 2016

Croquet at the Bishop's Palace, Wells

What a busy day today! I hoovered the house then walked across to the gym for my weight room induction session: bench-press machines and dumbbells; Clare got to mop the floor during this. Back home, Clare cut my hair to something appropriate to iron-pumping and then we strolled to the Bishop's Palace for a late picnic lunch.

Croquet in front of Wells Cathedral
We're fed and watching the croquet intently

I'm told that croquet only looks genteel; in reality it's hyper-confrontational and aggressive. What would I know? I thought they played it with flamingos.

Blue Labour - so disappointing

I was really prepared to like Blue Labour and to that end bought this book.

Amazon link

I read about half the essays before finally choking on the conceptual equivalent of mushy bran. Here's the story from Wikipedia.

"Labour peer and London Metropolitan University academic Maurice Glasman launched Blue Labour in April 2009 at a meeting in Conway Hall, Bloomsbury. He called for "a new politics of reciprocity, mutuality and solidarity", an alternative to the post-1945 centralising approach of the Labour Party."

Chuka Umunna, one of the high priests of metro-liberalism, a Blue Labour guru? Give me a break.

"Blue Labour suggests that abstract concepts of equality and internationalism have held back the Labour Party from linking with the real concerns of many voters, the concept of equality leading to an 'obsession with the postcode lottery' and internationalism ignoring fears of low paid workers about immigration."

The essays, highly overlapping and uniformly light on conceptual depth and rigour, feature:

• An ethical/cultural critique of liberal individualism
• An economic critique of rampant globalism
• A religious critique of secular atomisation.

So what is to be done? Way too many essays base themselves on Catholic Social Teaching (various Popes get extensive name checks), a naive idealisation of the increasingly dysfunctional German social and economic model (workers on boards, apprenticeships etc) and the communitarian history of the British labour and cooperative movements over the last couple of hundred years, going back to the Romantic tradition of John Clare (yes, he gets a name check too).

There is no analysis of just why metro-liberal politics, economics and culture have proved so hegemonic over the last few decades everywhere in the western world, and no plausible political programme for going forwards - nothing beyond tinkering at the edges (community organising, anyone?). No, Blue Labour is a superficial nostalgia-fest built on sand.

I also bought this, which I have yet to read:

Amazon link

I have no hopes.

Update (28th August 2016). I've completed Rowenna Davis's book and it's an easier read than the essay collection above, as well as being way more insightful. It covers the 18 months stretching from the end of Gordon Brown's premiership to the Miliband brothers' leadership contest and Ed's first year. During this time Blue Labour emerged and then became dominated by Maurice Glasman, once he had been elevated (by Ed) to the Lords. Glasman emerges as an argumentative, self-willed and naive radical activist who fell out with many of his co-thinkers.
Interestingly, these were mostly Oxford academics. The other major support groups for Blue Labour were community activists (Citizens UK, led by a patrician bunch of Oxford graduates) and faith groups (Catholics and Muslims predominated). When the book ends, in 2011, Blue Labour is imploding due to Glasman's egotistical gaffes. Plainly, it recovered later, but in the age of Corbyn its profile today is invisible.

I'm left with the lasting impression that, despite the academic credentials of its founders, the Blue Labour movement is distinctly lacking both in intellectual depth (in general) and in any analysis of 21st century capitalist dynamics (specifically). It is possible that the Labour Party, as we know it, has no viable go-forward mission at all.

One small error on the Internet ...

The paedophile site made one small error. Following that flimsy thread was, it turned out, enough.

Tuesday, August 23, 2016

The worst army in the world? Not entirely

From The Telegraph today: "EU leaders want their own army, but can't agree on much else - five things we learned from the Renzi-Hollande-Merkel summit". In fact the official statement seemed quite vague, but a European Army would plainly be a most ineffectual institution:

• No common language
• No common esprit de corps
• No cohesive leadership
• No common experience of combat
• No overall political master.

Would there ever be political agreement to get this 'army' to do anything involving real combat?

Look on the bright side. It's probably a way to get national armies in Europe to converge to similar doctrines, equipment and command-and-control protocols. European military cooperation will surely be necessary in the future, yet today the ability of European nations to fight effectively together (in theory within NATO) is lamentably poor.

The ideology of the 'United States of Europe' and 'no more European wars' is clearly alive, well and part of the rationale for this initiative. Although full federalism à la USA is never going to happen, the 'European Army' project should nevertheless aid national military renewal projects. In a Europe which is today pacifistic, disarmed and vaguely helpless (leaving aside the UK and France) this is to be welcomed.

At least we know that in principle the Europeans can fight proper state-on-state wars: two world wars proved that. Compare and contrast the situation with the Arabs. I guess we should be relieved.

A two-mile circular walk from Priddy

The walk goes along Dursdon Drove and the West Mendip Way. Parking at the Queen Victoria pub. Here's the route:

We walked clockwise, starting from the Queen Vic pub (top left)

You could say it was hot - 32 degrees in the pub car park, and 26 on the open trail this afternoon. About an hour. Here are some pictures.

The southern part of the trail, looking east
Clare in camo-chic with wrap-shades
Strangely-hued sheep - could be a savanna shot
Your author recovering at the Queen Vic, Priddy

Monday, August 22, 2016

How will AIs become politically correct?

In my recent post, "Gloria Hunniford and the case for AI biometrics", I advocated the use of AI facial recognition systems in bank branches to check for scammers. They would be more effective than cashiers because 'AI systems don't have to be polite.' But of course they do. Hardly a day goes by without some story appearing about an AI system which 'noticed' certain unfortunate connections and had to be tweaked.
Some of these stories reflect genuine issues of training sets and algorithm configuration; others expose the system's aspie-like tendency to blurt out uncomfortable truths. And there are plenty of them - truths which fall outside the famous Overton window.

I think it will be a very smart AI which can keep two sets of books: the accurate model of the world it generates from its deep learning, and the acceptable model which it has to use and pay homage to in public. Since the acceptable model is ideological rather than based on evidence, it's a non-trivial process to concoct the politically-correct version from the data trawled exhaustively from reality. How would an AI handle this?

Till we get AI self-deception really locked down, I see a long spell of high-pay-grade tweaking from specialists at Google, Facebook and the like, carefully guided by their in-house commissars.

Kamm, Corbyn and NATO

There's something about establishment vilification which makes a person reconsider old certainties. In today's Times, Oliver Kamm takes Jeremy Corbyn to task for his 'pacifism' (as usual with JC, nothing is really for sure). According to Kamm, what did Jeremy say?

"... asked at a leadership hustings how he would respond if a Nato ally was invaded by Russia, Mr Corbyn replied: “I would want to avoid us getting involved militarily by building up the diplomatic relationships and also trying to not isolate any country in Europe . . .” He added: “I don’t wish to go to war. What I want to do is achieve a world where we don’t need to go to war, where there is no need for it.”

We wearily recall Trotsky: "You may not be interested in war, but war is interested in you."

Kamm continues:

"President Putin’s regime has already unilaterally altered the boundaries of Europe by force on preposterous pretexts. Mr Corbyn has in effect announced to this aggressive and expansionist power that if he is in charge there will be no costs and no resistance if Russia adopts the same methods against allies to which we are bound by treaty obligations.

"A few weeks ago, Nato announced plans to increase its strength in Poland and the Baltic states. Under a Corbyn government, those democratic allies won’t be able to rely on us."

One should generally listen to Oliver Kamm's very trenchant, neoliberal/neocon views and then adopt precisely the opposite. Going to war with a serious opponent (Russia) is an existential business. This is not something any state does lightly. You may recall that America, our supposed great ally, took its time coming to our assistance in both the first and second world wars. Strangely, they took account of their own national interests.

The problem with NATO is that its mutual self-defence treaty locks in a supposed commonality of national interests which cannot in fact exist. When NATO attacks some ultra-weak foe in a discretionary war, this is obscured - the war effort by some NATO members may be purely notional. This renders moot Oliver Kamm's point:

"Imagine that Mr Corbyn’s wish had been acted on. In 1998, the UN Security Council voted three times to identify the crisis in Kosovo as a threat to international peace and security and demand a response by the government of Slobodan Milosevic in Serbia. Milosevic’s forces intensified their persecution of Kosovan Albanians, driving hundreds of thousands from their homes.

"Belatedly, Nato launched a bombing campaign against Serbia in 1999. It thereby prevented a humanitarian catastrophe in Europe.
Nato’s intervention rescued a threatened population and put Kosovo’s fate in the hands of a UN administration."

We say NATO, but in reality this was a coalition of the neocon-willing: no-one in the US or the UK feared retaliation from the Serbs. NATO was a convenient figleaf. If Russia gets into a border war in Eastern Europe with a NATO member, does anyone really think NATO will automatically go to war on its behalf? General Sir Richard Shirreff pointed out in his recent book, "War with Russia", that for real wars NATO is an archaic, hollowed-out shell - an ineffectual paper tiger.

In truth we go to war when that's the only way to further advance our national interests. Treaties which would attempt to drag us into war against such interests are merely foolish pretences. Would it really be such a catastrophe to let NATO go? Then we (and the rest of Europe) could get real about what we do, and do not, existentially care about.

Saturday, August 20, 2016

Gloria Hunniford and the case for AI biometrics

Here is how The Telegraph covered it:

"Rip Off Britain presenter Gloria Hunniford was the victim of a £120,000 fraud by an imposter posing as the star.

"The 76-year-old Loose Women panelist's bank account was emptied just days after the woman arrived at a Santander branch with her "daughter" and "grandson".

"Personal banker Aysha Davis, 28, said the woman told her she had "a few bob" in there and had come to add the teenager as a signatory because she had been ill."

Here's a picture of the glamorous Gloria Hunniford and the rather-less-so scammer.

Should we condemn the unfortunate bank staffer Aysha Davis, who was charged (and rapidly acquitted) as an accomplice? The percentage of people who engage with banks using fake photo-ID must be minuscule. Say 1 in 10,000. How many of the 9,999 bona fide customers happen to look rather unlike their photos? Quite a lot, I'd say. If even 1 in 20 genuine customers look unlike their photo, then for every fraudster flagged by a photo mismatch there would be around 500 innocent customers flagged too. So how many bank staff are going to say, "You look nothing like this glamorous photo, so I'm going to have to run a security check," given the overwhelming chances that the mismatch is actually a false positive?

Davis said in court, "... as they had all the correct ID documents and paperwork it wasn't [my] job to pry for fear of causing offence."

What would work is AI facial recognition, which now works better than the human eye - and doesn't have to be polite. However, outfitting every bank branch with a camera linked to an AI database (let alone building the customer facial database in the first place) would be a hard sell to customers as well as a major capital cost. This scam merely cost Santander a £120,000 refund. However, if there was an independent case for facial-ID biometrics in the banking industry (and pretty much everyone has access to a smartphone now, so there could easily be an app) then it looks rather more doable. I suggest that's the way to go.

In related news:

"Police officers in the US have arrested a fugitive after seeing through his elaborate disguise as an elderly man.

"They surrounded a house in South Yarmouth, Massachusetts, and ordered Shaun "Shizz" Miller out.

"He walked outside in disguise and when they realised the "elderly man" was actually the 31-year-old they were looking for, they arrested him.

"He had been on the run since being charged with heroin trafficking offences in April."

The police don't have to be polite ...

Friday, August 19, 2016

The 10,000 year view

Amazon link

Richard Feynman once wrote:

"From a long view of the history of mankind - seen from, say, ten thousand years from now - there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade."

What should we say about the other centuries?
The seventeenth century, in 10,000 years' time, will be remembered principally for Isaac Newton's laws of dynamics:

F = ma

And universal gravitation:

F = Gm₁m₂/r²

- plus calculus, co-discovered with Leibniz.

The eighteenth century was not rich in epoch-spanning discoveries, but future historians of science will recall it for Rev. Thomas Bayes, whose profound theorem will power the great AI learning engines down the ages.

The nineteenth century we've already mentioned. Here are Maxwell's equations, in the vector form he would not easily have recognised:

∇·E = ρ/ε₀
∇·B = 0
∇×E = −∂B/∂t
∇×B = μ₀J + μ₀ε₀ ∂E/∂t

The twentieth century is a cornucopia of fundamental science, but I think the most truly foundational, revolutionary and influential discovery has to be the Schrödinger equation, iħ ∂ψ/∂t = Ĥψ, which explains .. well, almost everything around us. But I doubt the 10,000 year future will have forgotten Einstein - or Bohr, Heisenberg, Dirac, ... .

Sean Carroll has a related list of his seven favourite equations here.

Thursday, August 18, 2016

Reality and the MWI

From "Many Worlds? An Introduction" by Simon Saunders.

“As Popper once said, physics has always been in crisis, but there was a special kind of crisis that set in with quantum mechanics. For despite all its obvious empirical success and fecundity, the theory was based on rules or prescriptions that seemed inherently contradictory. There never was any real agreement on these matters among the founding fathers of the theory.

“In what sense are the rules of quantum mechanics contradictory? They break down into two parts. One is the unitary formalism, notably the Schrödinger equation, governing the evolution of the quantum state. It is deterministic and encodes spacetime and dynamical symmetries.

“Whether for a particle system or a system of fields, the Schrödinger equation is linear: the sum of two solutions to the equation is also a solution (the superposition principle). This gives the solution space of the Schrödinger equation the structure of a vector space (Hilbert space).

“However, there are also rules for another kind of dynamical evolution for the state, which is - well, none of the above. These rules govern the collapse of the wavefunction. They are indeterministic and non-linear, respecting none of the spacetime or dynamical symmetries. And unlike the unitary evolution, there is no obvious route to investigating the collapse process empirically.

“Understanding state collapse, and its relationship to the unitary formalism, is the measurement problem of quantum mechanics. There are other conceptual questions in physics, but few if any of them are genuinely paradoxical. None, for their depth, breadth, and longevity, can hold a candle to the measurement problem.

“Why not say that the collapse is simply irreducible, ‘the quantum jump’, something primitive, inevitable in a theory which is fundamentally a theory of chance? Because it isn’t only the collapse process itself that is under-specified: the time of the collapse, within relatively wide limits, is undefined, and the criteria for the kind of collapse, linking the set of possible outcomes of the experiment to the wavefunction, are strange.

“They either refer to another theory entirely - classical mechanics - or worse, they refer to our ‘intentions’, to the ‘purpose’ of the experiment.

“They are the measurement postulates - (‘probability postulates’ would be better, as this is the only place where probabilities enter into quantum mechanics).
One is the Born rule, assigning probabilities (as determined by the quantum state) to macroscopic outcomes; the other is the projection postulate, assigning a new microscopic state to the system measured, depending on the macroscopic outcome. “True, the latter is only needed when the measurement apparatus is functioning as a state-preparation device, but there is no doubt that something happens to the microscopic system on triggering a macroscopic outcome. “Whether or not the projection postulate is needed in a particular experiment, the Born rule is essential. It provides the link between the possible macroscopic outcomes and the antecedent state of the microscopic system. As such it is usually specified by giving a choice of vector basis - a set of orthogonal unit vectors in the state space - whereupon the state is written as a superposition of these. The modulus square of the amplitude of each term in the superposition, thus defined, is the probability of the associated macroscopic outcome. “But what dictates the choice of basis? What determines the time at which this outcome happens? How does the measurement apparatus interact with the microscopic system to produce these effects? From the point of view of the realist the answer seems obvious. The apparatus itself should be modelled in quantum mechanics, then its interaction with the microscopic system can be studied dynamically. But if this description is entirely quantum mechanical, if the dynamics is unitary, it is deterministic. Probabilities only enter the conventional theory explicitly with the measurement postulates. The straightforwardly physicalistic strategy seems bound to fail. How are realists to make sense of this? “The various solutions that have been proposed down the years run into scores, but they fall into two broadly recognizable classes. One concludes that the wavefunction describes not the microscopic system itself, but our knowledge of it, or the information we have available of it (perhaps ‘ideal’ or ‘maximal’ knowledge or information). No wonder modelling the apparatus in the wavefunction is no solution: that only shifts the problem further back, ultimately to ‘the observer’ and to questions about the mind, or consciousness, or information - all ultimately philosophical questions. “Anti-realists welcome this conclusion; according to them, we neglect our special status as the knowing subject at our peril. But from a realist point of view this just leaves open the question of what the goings-on at the microscopic level, thus revealed, actually are. By all means constrain the spatiotemporal description (by the uncertainty relations or information-theoretic analogues), but still some spatiotemporal description must be found, down to the length-scales of cells and complex molecules at least, even if not all the way to atomic processes. “That leads to the demand for equations for variables that do not involve the wavefunction, or, if none is to be had in quantum mechanics, to something entirely new, glimpsed hitherto only with regard to its statistical behaviour. This was essentially Einstein’s settled view on the matter. “The only other serious alternative (to realists) is quantum state realism, the view that the quantum state is physically real, changing in time according to the unitary equations and, somehow, also in accordance with the measurement postulates. “How so? Here differences in views set in. 
Some advocate that the Schrödinger equation itself must be changed (so as to give, in the right circumstances, collapse as a fundamental process). They are for a collapse theory. “Others argue that the Schrödinger equation can be left alone if only it is supplemented by additional equations, governing ‘hidden’ variables. These, despite their name, constitute the real ontology, the stuff of tables and chairs and so forth, but their behaviour is governed by the wavefunction. This is the pilot-wave theory. “Collapse in a theory like this is only ‘effective’, as reflecting the sudden irrelevance (in the right circumstances) of some part of the wavefunction in its influence on these variables. And once irrelevant in this way, always irrelevant: such parts of the wavefunction can simply be discarded. This explains the appearance of collapse. “But for others again, no such additional variables are needed. The collapse is indeed only ‘effective’, but that reflects, not a change in the influence of one part of the quantum state on some hidden or ‘real’ ontology, but rather the change in dynamical influence of one part of the wavefunction over another - the decoherence of one part from the other. “The result is a branching structure to the wavefunction, and again, collapse only in a phenomenological, effective sense. But then, if our world is just one of these branches, all these branches must be worlds. Thus the many worlds theory - worlds not spatially, but dynamically separated.” Saunders' introductory chapter from the book, "Many Worlds?" underlines the central puzzle of quantum mechanics. What would reality have to be like to make the theory of quantum mechanics so incredibly accurate? Realists driven to the 'Many Worlds Interpretation' can still make no sense of it (Sean Carroll is a consistent defender, though). As Saunders observes on page 20, “How does talk of macroscopic objects so much as get off the ground? What is the deep-down ontology in the Everett interpretation? It can’t just be wavefunction [...]; it is simply unintelligible to hold that a function on a high-dimensional space represents something physically real, unless and until we are told what it is a function of  - of what inhabits that space, what the elements of the function’s domain are. “If they are particle configurations, then there had better be particle configurations, in which case not only the wavefunction is real.” And so I have bought "The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family" by Peter Byrne. Wednesday, August 17, 2016 I'm with David Daniel Finkelstein has this interesting comment piece in The Times today. "When some years ago David Owen, one of the SDP’s founders, sent me an early draft of his memoirs, I understood for the first time that he had seen the SDP as essentially doomed — certainly in deep trouble — before I even joined it at the beginning of 1982. What had doomed it, in his view, was the decision to form a tight alliance with the Liberal Party. "Owen’s conception of the SDP, which was formed in 1981, is that it would be a tough-minded, hawkish party of the left. It would appeal to an aspirational working class, particularly in the north, who had tired of bureaucratic socialism and saw the point of Margaret Thatcher, but were not Tories. "When the future Labour foreign secretary was a student working on a building site he had been struck by the reaction of his fellow workers to the Suez crisis. 
It had been instinctively nationalist, uninterested in political protocol, and robust. It was these people he wanted the SDP to appeal to.

"Roy Jenkins, former Labour chancellor but also biographer of the Liberal prime minister HH Asquith, wanted a centre party that reflected his own liberal instinct. This would be a southern party of the middle class, disdainful of Thatcher, fastidious rather than bulldog-like on international issues, avowedly centrist.

"Everything about this Jenkins view — the electoral relationship with the Liberals in particular, but also the claret-drinking image — drove Owen crazy. But for all that he later did to shape the party, Owen was right that by 1982 Jenkins had won the battle. The SDP would be a liberal party. It lost almost all its northern and working-class seats, was not able to compete in the south because the Liberal Party took all the best constituencies, and ended up being swallowed up by its partner.

"Owen and Jenkins were rowing over whether liberalism and being a Labour moderate or even a centrist were the same thing. Jenkins felt that practically and philosophically they were. Owen felt that practically and philosophically they were not."

No-one in the current leadership cadre of the Labour Party, whether left, centrist or right, espouses David Owen's political views - with the possible exception of John Mann. And so they will not reconnect with their millions-strong working-class roots. If Theresa May can find a way to overcome the respectable working class's tribal anti-Toryism, Labour are electoral toast till the end of time.

Blue Labour website and Wikipedia article.

Cheddar reservoir

Like my wrap-around mirror shades?

That, and the linearity of quantum mechanics.

Why the Great Stagnation? What next?

This is what my CV says in the couple of years leading up to the great crash of 2008.

Programme Management - BT Wireless Cities: May 2006 - Sep 2007
A lucrative sixteen-month contract, rolling out urban WiFi for BT across major cities in the UK.

Network architecture consultant to Dubai World Central: Jan 2008 - July 2008
A seven-month contract in Dubai designing the network from scratch for a new ultra-wired airport/city complex. We completed the design and then the crash arrived .. and we flew home.

Network architecture consultant to Media City, Manchester: Dec 2008 - Jan 2009

Security Accreditation (IL2/IL3) at C&W and other clients: Jan 2010 - Sep 2010

Managed an RFQ for an international law firm, London: Jan 2012 - July 2012.

These were worthwhile but small-scale pieces of work. After that, things did not get any better. I was pleased to retire from network design in March 2014.

The UK economy normally rebounds from dips within three years (12 quarters), as this chart shows, but as you can see, the 2008 crash was something special. The rate of growth was clearly negative for about a year and a half (six quarters) and after that - anaemic.

This chart - same source - looks at the rate of change of GDP (i.e. growth) over the period 1949-2012 (during the last 3 years UK annual GDP growth has fluctuated between 2% and 3%).

I read that financial crises always exhibit a longer recovery period, as people have to pay down their debts, but it's now been eight years since the big crash and growth rates are still subdued. What's going on?

Larry Summers suggested an answer in his essay, "The Age of Secular Stagnation".
"Most observers expected the unusually deep recession to be followed by an unusually rapid recovery, with output and employment returning to trend levels relatively quickly. Yet even with the U.S. Federal Reserve’s aggressive monetary policies, the recovery (both in the United States and around the globe) has fallen significantly short of predictions and has been far weaker than its predecessors. "Had the American economy performed as the Congressional Budget Office fore­cast in August 2009 - after the stimulus had been passed and the recovery had started—U.S. GDP today would be about $1.3 trillion higher than it is." So what went wrong? "The key to understanding this situation lies in the concept of secular stagnation, first put forward by the economist Alvin Hansen in the 1930s. The economies of the industrial world, in this view, suffer from an imbalance resulting from an increasing propensity to save and a decreasing propensity to invest. The result is that excessive saving acts as a drag on demand, reducing growth and inflation, and the imbalance between savings and investment pulls down real interest rates. "When significant growth is achieved, meanwhile—as in the United States between 2003 and 2007 - it comes from dangerous levels of borrowing that translate excess savings into unsustainable levels of investment (which in this case emerged as a housing bubble)." But why are people be so keen to save, rather than invest? "Greater saving has been driven by: • increases in inequality and in the share of income going to the wealthy,  • increases in uncertainty about the length of retirement and the availability of benefits,  • reductions in the ability to borrow (especially against housing), and  • a greater accumulation of assets by foreign central banks and sovereign wealth funds.  "Reduced investment has been driven by: • slower growth in the labor force,  • the availability of cheaper capital goods, and  • tighter credit (with lending more highly regulated than before). "Perhaps most important, the new economy tends to conserve capital. Apple and Google, for example, are the two largest U.S. companies and are eager to push the frontiers of technology forward, yet both are awash in cash and are under pressure to distribute more of it to their shareholders. "Think about Airbnb’s impact on hotel construction, Uber’s impact on automobile demand, Amazon’s impact on the construction of malls, or the more general impact of information technology on the demand for copiers, printers, and office space. "And in a period of rapid technological change, it can make sense to defer investment lest new technology soon make the old obsolete." So how do we get out of this? It seems that austerity (clamping down on public expenditure to claw-back massive Government debt) has few friends left. Summers' remarks are addressed to a US audience, but are equally applicable to the UK. "... primary responsibility for addressing secular stagnation should rest with fiscal policy. An expansionary fiscal policy can reduce national savings, raise neutral real interest rates, and stimulate growth. "Fiscal policy has other virtues as well, particularly when pursued through public investment. A time of low real interest rates, low materials prices, and high construction unemployment is the ideal moment for a large public investment program. 
It is tragic, therefore, that in the United States today, federal infrastructure investment, net of depreciation, is running close to zero, and net government investment is lower than at any time in nearly six decades. "It is true that an expansionary fiscal policy would increase deficits, and many worry that running larger deficits would place larger burdens on later generations, who will already face the challenges of an aging society. But those future generations will be better off owing lots of money in long-term bonds at low rates in a currency they can print than they would be inheriting a vast deferred maintenance liability." Finally we come to the politics. With our ever-expanding university sector, we're seriously in the business of elite overproduction. New graduates, particularly those articulate, idealistic young people with arts degrees, can't get high-status, well-rewarded jobs. They naturally channel their unhappiness into political activism. "Secular stagnation and the slow growth and financial instability associated with it have political as well as economic consequences. If middle-class living standards were increasing at traditional rates, politics across 
the developed world would likely be far less surly and dysfunctional. So mitigating secular stagnation is of profound importance.

"Writing in 1930, in circumstances far more dire than those we face today, Keynes still managed to summon some optimism. Using a British term for a type of alternator in a car engine, he noted that the economy had what he called “magneto trouble.”

"A car with a broken alternator won’t move at all - yet it takes only a simple repair to get it going. In much the same way, secular stagnation does not reveal a profound or inherent flaw in capitalism. Raising demand is actually not that difficult, and it is much easier than raising the capacity to produce. The crucial thing is for policymakers to diagnose the problem correctly and make the appropriate repairs."

I'm interested in what Peter Turchin is going to make of all this in September with his new book, "Ages of Discord".

Many of the recoveries we have seen in the past were driven by massive investment in new, productivity-raising technologies: electrification, petrol engines, scientific management, computers and the Internet. In every case, it took a good few years for the new technologies to develop, be perfected and for people to learn how to use them to increase productivity. It was only then that the economic tipping point occurred.

The next revolution will be driven by new technologies such as AI, new sensors, robotics and VR, bound together by high-speed ubiquitous networks; also genetic engineering and genomics. These technologies will surely launch a huge boom, but plainly we're in the earliest days. So I expect a good decade or so of bouncing around in 'the new normal' before the next lift-off, deficit spending or no.

Tyler Cowen of Marginal Revolution (amongst many others) has written about this too.

Tuesday, August 16, 2016

Last Friday was our penultimate day in Hereford. The rest of the house had departed to Symonds Yat to do some canoeing, leaving the house to quiet and to me. I sat in the sunshine and listened to this. As a consequence, it's now become an Ohrwurm (an earworm).

An upcoming post will talk about "The Great Stagnation" for three reasons:

1. We're living through it and it's blighting many lives
2. It seems difficult to understand why we're stuck in it
3. Leftist groups have characterised it as the final crisis of capitalism.

My point of departure is Larry Summers' influential essay on 'The Age of Secular Stagnation'.

Monday, August 15, 2016

Paul Mason and PostCapitalism

There are just a few people whose public personas I have rather taken against. Number one on my current list is Owen Smith. He, you may recall with difficulty, is the insincere and glib little weasel who resembles a mini-me version of François Hollande and who is attempting, for reasons of petty ambition, a doomed campaign to displace Jeremy Corbyn. We move on.

Whenever I saw Paul Mason on BBC's Newsnight or Channel 4 news, I observed my twitching hand reach unconsciously for the channel-flip, impelled by some combination of his northern-lad-with-a-chip-on-his-shoulder schtick, his self-righteous anger at every policy trying to fix the economy, and a fanboy gullibility as regards the modish antics of Occupy and every other middle-class angst-fest. It was Kevin the Teenager reprised at fifty-something.

I understand the type only too well: too smart, idealistic and empathic to fit in with his working class contemporaries; too working class to be accepted into the well-born elite.
His perpetual estrangement from power and influence fuels an inchoate rage, channelled into left-wing rebellion.

Paul Mason was a trotskyist in one of the more cerebral outfits, Workers Power, which has now dissolved/entered into the Corbynista mass to rebuild under the motif of Red Flag. But Paul Mason noticed that none of the trotskyist predictions of proletarian revolution ever came good. Being of independent mind, he conceptualised an alternative road to communism; at least one reviewer bathetically called him 'the new Marx'.

Amazon link

His book is uneven: historical discussions of subsequent misinterpretations of Marx echo those of Michael Heinrich (blogged about here), who discussed 'worldview Marxism' as a coarsening of Marxist theory. Mason believes, I think correctly, that Lenin and Trotsky fundamentally misunderstood what really happened in Russia in 1917, documenting the reasons for that failure in convincing detail.

It's when he starts to advance his own ideas for post-capitalist transition that wishful thinking and blind hostility come to the fore. Owen Hatherley's review nails it.

"The organised factory proletariat in the US, Europe and Japan never carved out a path to post-capitalism – or socialism as it was then known – but Occupy, Maidan, Tahrir Square, and even the protests against the Workers’ Party government in Brazil, ‘are evidence that a new historical subject exists. It is not just the working class in a different guise; it is networked humanity.’

"The ‘new gravedigger’ produced by capitalism consists of ‘the networked individuals who have camped in the city squares, blockaded the fracking sites, performed punk rock on the roofs of Russian cathedrals, raised defiant cans of beer in the face of Islamism on the grass of Gezi Park’ etc. This is kitsch, but more significant is Mason’s failure to analyse the political content of the movements of the young.

"Not a lot of people in any of them considered ‘capitalism’ their main enemy, probably less so than the average striker in the 1930s or 1970s. They are a disparate bunch, from all manner of class backgrounds, advocating various positions across the political spectrum, but all united apparently by their use of Twitter and their distrust of ‘old elites’ and hierarchies.

"Since they carry no baggage, it isn’t worth investigating why, say, the protests in Brazil so easily passed over into racism, why some in Tahrir Square preferred a new general to an elected Islamist, why both sides in Ukraine’s unrest had a crucial far-right element, or why the descendants of Occupy in London and New York now find themselves campaigning for ageing, old-school leftist social democrats.

"Mason sweeps all this away on a tide of goofy utopianism."

Taking Wikipedia as your model for post-capitalist relations of production is to completely miss the intrinsically parasitic, hobbyist and career-furthering (let alone corporate) nature of so much open-source activity. It's never going to shake its shoulders and sweep aside all those mundane commoditised relations of production which coordinate activities to keep us fed, sheltered, defended, powered-up and online.
Mason would have been more acute had he observed that, while Marx gave a very good conceptual account of capitalism in terms of systematised and recurrent patterns of human economic and political activity (process rather than structural models, if you will), he had considerably less to say about why capitalism was either inherently bad news for humanity or precisely why it would necessarily create the conditions for its own supersession.

Due to the inadequate development of the productive forces it inherited, capitalism was truly awful for its human participants (disproportionately for the working class, of course) in Marx's time and as recently as the second world war - but since then it has, by historical standards, not been so bad. Ask the Chinese or the Vietnamese. And don't blame capitalism for Africa or the Middle-East.

Capitalism still seems pretty efficient at developing the forces of production, as Mason, a fan of automation, is happy to concede. So what's going to light the fires of mass revolutionary zeal? Apparently nothing - so we're left with incremental socialism-creep within the interstices of capitalism. Good luck with that.

Good try, Paul, but we need look elsewhere for possible paths to humanity's future.

Free Weights vs Resistance Machines

So this is the question I have recently been asking myself: for four years I have done the circuit of aerobic and resistance machines at the gym .. and resolutely walked past the weight room. Am I missing something important?

'Dr. Mercola' writes,

"The primary difference between free weights and machines, however, is the fact that when using free weights, you can move in three dimensions: forward, backward, horizontally, and vertically. This is important, because this is how your body normally moves in daily life.

"When you use free weights, you therefore end up engaging more muscles, as you have to work to stabilize the weight while lifting it. The drawback is that you’re at an increased risk of injury unless you maintain proper form.

"Machines, on the other hand, are fixed to an axis that will only allow you to move in one or two planes. If used exclusively, this could lead to a lack of functional fitness, which can translate into injuries outside the gym.

"Simply stepping off the sidewalk could result in a knee or ankle injury if stabilizing muscles have been ignored in favor of only working your larger muscle groups. On the upside, a machine will allow you to lift heavier weights, and allow you to target specific muscle groups."

Other commentators noted that resistance machines tend to under-develop the 'core', which includes the abdominal and back muscles. Since I have had the odd twinge (some might call it a weakness) in my back, I am seriously thinking about doing some free weight training. But it's so complicated! I don't know anything about weights, apparatuses or forms. Still, when in doubt, buy the book.

Obviously free weights can be done at the gym, but another thought occurred to me. As we walked back from our Bishop's Palace picnic today, I subtly murmured to Clare, "If you like, you can use my weights, when they arrive." (I have not in fact ordered any weights; the ground must first be prepared).

This is what I heard: the house is not to be made into a gym; the last thing needed is a testosterone-heavy male around (I thought there already was one); and some remark about sweat I didn't quite catch.

No real problems then. I emphasised that weight training is mostly kind and gentle, like yoga.
Michael O'Neal, eat your heart out; I will pump iron!!

Bishop's Palace, Wells and the Dragon's Lair

A 'Spanish Plume' of warm air has sent us scampering to the Bishop's Palace today for a picnic lunch. They have just opened the 'Dragon's lair' for the summer holidays.

The Dragon - 'Come hither, tasty children!'
Clare explores the maze, where it's hard to get lost
The author with picnic
The garden fronting the cathedral
Friday, 27 June 2014

Nuclear women in Bulgaria

One thing that my partner pointed out to me about the group of people attending this workshop in Bulgaria is that there are a lot of women present. I guess it is true. I had a look through the delegate list at the back of the booklet, and counted 27 male and 16 female attendees. I don't know if that really counts as a lot, but it must say something about most physics gatherings she has been in that the group assembled here seemed out of the ordinary. Good for Bulgaria! The number of female nuclear theorists with permanent positions in Bulgaria (pop 7m, GDP USD0.1t) who are attending this small workshop is around the same as the number of nuclear theorists of both sexes in the entirety of the UK (pop 70m, GDP USD1.5t) with permanent positions.

Anyway... A conference update: I have generally enjoyed all the talks, but I particularly enjoyed learning a neat mathematical trick from Nikolay Minkov to do with factorising the Schrödinger equation, in a way I will save describing further until I have successfully got mathjax working in Blogger. I liked Xavier Viñas's work on attempting to write down a nuclear energy density functional based on matching a polynomial form of the density functional to give realistic equations of state, with small additions to give good results for more or less all finite nuclei. This is the sort of spirit in which energy density functionals should probably be used, rather than what I tend to do, starting from the Skyrme interaction. It was nice to hear about a new facility being set up in Yerevan, in a talk by Roza Avetisyan. A cyclotron mainly for medical isotope generation is being set up, with a beam line for nuclear physics experiments; it will be a good place to train students in the arts of nuclear techniques, and some interesting ideas of reactions to look at were presented.

Yesterday saw our excursion day. It was quite a long day, running from 9:15 to 19:15, taking in a reconstituted Roman hill fort near Samokov, then lunch, then a tour round the Rila Monastery. Alba, my 8-month-old daughter, did an admirable job of coping with the long coach journey, the being carried round the sites, and the cabbage-rich lunch. She continued to be a more or less welcome diversion to the other attendees, never getting to the stage of screaming constantly in a confined coach for hours on end, which would no doubt have changed other people's ideas about having babies as accompanying people at conferences...

The picture is a view from the hill fort. Mostly, of course, it's just a tree. You can see a bit of reconstructed wall, and some indication from the plain below of how high up we were. The funicular was broken for the journey up, by the way, and pushing the pram up the path was hard work. Special thanks to Rajdeep Chatterjee from Roorkee for helping here!

Monday, 23 June 2014

Rila, again

Like last year and the year before, I'm at the Nuclear Theory Workshop, organised by the group in Sofia, and held in a rustic hotel in the Rila Mountains. It's day one, and I was scheduled to speak in the first session, like last year. My talk followed talks by Andrzej Góźdź and his student Aleksandra Pędrak (from Lublin, Poland) concerning collective Hamiltonians, and one by Attila Krasznahorkay (Debrecen, Hungary) on resonance states in calcium isotopes. I enjoyed all the talks, and with Attila's experimental talk being relevant to things I can calculate, I got a few ideas of things to do.
My talk was a kind of advert for our recently-published computer code, Sky3D, with some details of the kind of physics problems one can solve with it -- quite a wide range across nuclear structure and dynamics -- and some technical details of implementation and usage. I got a reasonable amount of interest out of the talk and a fair few questions. Hopefully, having published the code, we'll get plenty of people interested in running it. It's good to have given the talk on the first day, partly because now I can relax more and enjoy the other talks, and chatting to people, rather than tinkering with my talk, but also because there is now plenty of time for me to sit down with people and infect their computers with my code (I mean, install my code on their computers).

This year, I brought travelling companions, as seen in the picture. Eagle-eyed regular readers may notice that the mountains in the background haven't changed much, but the climbing frames in the playground have had a lick of paint. My daughter Alba, in the picture, is a little young for the climbing frame, but she has already proved to be the star of the conference. Her first plane journey went well. It was on a pretty busy plane, and we had two seats in a group of three. Our fellow passenger, seeing that he was sitting next to a family with an 8-month-old baby, graciously begged the stewardess to be allowed to sit elsewhere so as to give us a little more space. We certainly didn't mind...

Friday, 20 June 2014

Where did the isospin sign convention come from?

Perhaps a reader may be able to help with this conundrum, which came to my attention on Tuesday following a seminar at Surrey from Mike Bentley, from York, which was all about isospin.

In 1932, Werner Heisenberg introduced the concept of isospin [1]. At least, that's what we call it these days, though it was Wigner, in a 1937 paper [2], who first referred to Heisenberg's idea as isotopic spin, which we've since shortened to isospin.

Heisenberg's idea was that protons and neutrons are really very similar objects - both about the same mass, and having a close link via beta decay, in which a neutron can turn into a proton and an electron. Together neutrons and protons constitute atomic nuclei, and can be termed nucleons. Heisenberg wondered if it would be possible to conceive a theory where one dealt with just nucleons, but had some way of distinguishing them as either protons or neutrons. He said (excuse my translation):

Each particle in the nucleus would be characterised in five dimensions: the three spatial coordinates (x, y, z), the spin in the z-direction, and through a fifth number, ρξ, for which the values of +1 and -1 are possible. ρξ = +1 would mean that the particle were a neutron, ρξ = -1 that it were a proton.

The whole concept can just be considered a mathematical convenience; now one can write equations in a higher-dimensional space, but without having to have a notation with 'p' and 'n' subscripts everywhere for proton and neutron states. However, it also helps notate an apparent underlying symmetry: that protons and neutrons nearly behave as mirror particles. My purpose in this post is nothing so deep as that, but rather concerns the choice of +1 for neutrons and -1 for protons. It is just an arbitrary choice, but it's the one originally made by Heisenberg, and repeated by Wigner shortly after.

If I look in more or less any modern textbook, or at the Wikipedia article on isospin, I find the opposite sign definition.
Here I quote from the textbook Nuclear and Particle Physics, by Burcham and Jobes, which I bought while an undergraduate (so you may argue it is not "modern"):

Heisenberg introduced an internal degree of freedom, the isospin I, in complete analogy with the ordinary intrinsic spin s. The two orientations of the isospin I (I = ½) in a notional isospin space, namely I₃ = +½ and I₃ = -½, would correspond with the proton and neutron respectively.

The factor of ½ difference I understand, but where did the sign flip come from? Anyone know? In Mike's talk on Tuesday, he used the Heisenberg convention, and this is the norm for nuclear physicists, but particle physicists use the opposite sign, as the textbook does.

[1] Über den Bau der Atomkerne. I., W. Heisenberg, Zeitschrift für Physik 77, 1 (1932)
[2] On the Consequences of the Symmetry of the Nuclear Hamiltonian on the Spectroscopy of Nuclei, E. Wigner, Physical Review 51, 106 (1937)

Thursday, 19 June 2014

Unexpected Liverpool

I'm in Liverpool today, thanks to various tedious reasons to do with getting a passport for my daughter to be able to come with me on a conference trip to Bulgaria next week. Her application had been sitting in the Liverpool passport office for a few weeks without being processed, and it worked out that the only practical way of ensuring that it was in my hands in time for the trip was to come up here today. I came up last night and stayed in a hotel overnight, so as to be able to go to the passport office when it opened at 8am. Fortunately, it all seemed to go okay there, and I should be able to pick the passport up this afternoon.
The headline seems a rather uncontroversial statement (to me, at least), but the idea of charging students large tuition fees was predicated on the fact that University students earn more money, on average, than those not going to University, and that charging them high fees is therefore justified. I never much liked that argument. I mean, we have a graduated income tax to account for that kind of thing, and it always smacked of the politics of envy. We should fund from taxation anything we think is worth having in a society. The last couple of governments seem to have decided that we do want people to be educated up to sixth-form level, but that that's enough, and anything else is a kind of personal luxury. What I dislike most of all about it, though, is the assumption that Universities only exist for people to serve their own financial self-interest. What of the people who want to go because there is so much to know? How do we account for the fact that this desire to push the boundaries of knowledge is part of what makes us human? I may be doing a subject which has a lot of positive financial benefits, but I also want to live in a society where we have Professors of Medieval Poetry, just because such a society enriches us in ways beyond money. I'm glad my vice-chancellor thinks so too.

Thursday, 5 June 2014

Video test post

As the title suggests, I'm making this post to see how to include videos most easily within Blogger-hosted blogs. This one is uploaded via Blogger's tool. It doesn't do anything in preview mode, hence I am publishing it. It's a simulation of a nuclear fusion reaction between Oxygen-16 and Zirconium-64. If anyone has a good guide to including videos in web pages, in a Blogger-compatible way, I'd welcome some pointers. I can host the videos elsewhere and edit the blog posts in pure html mode, if it helps you make suggestions.

Monday, 2 June 2014

Conference of the week: ARIS2014

[ARIS 2014 logo]

I'm at home this week, but many people I know from the nuclear physics community are at ARIS2014, in Tokyo. In fact, there are enough people there that if you follow the twitter hashtag #ARIS2014 you can keep up with nuclear physics news (so people at ARIS, please tweet news!)
Excited state quantum phase transitions in many-body systems

M. A. Caprio (Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, Connecticut 06520-8120, USA), P. Cejnar (Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 180 00 Praha, Czech Republic), and F. Iachello (European Centre for Theoretical Studies in Nuclear Physics and Related Areas, Strada delle Tabarelle 286, 38050 Villazzano (Trento), Italy)

Journal: Annals of Physics (N.Y.); PACS: 03.65.Fd, 03.65.Sq, 64.60.-i

Abstract: Phenomena analogous to ground state quantum phase transitions have recently been noted to occur among states throughout the excitation spectra of certain many-body models. These excited state phase transitions are manifested as simultaneous singularities in the eigenvalue spectrum (including the gap or level density), order parameters, and wave function properties. In this article, the characteristics of excited state quantum phase transitions are investigated. The finite-size scaling behavior is determined at the mean field level. It is found that excited state quantum phase transitions are universal to two-level bosonic and fermionic models with pairing interactions.

1 Introduction

Quantum phase transitions (QPTs), or singularities in the evolution of the ground state properties of a system as a Hamiltonian parameter is varied, have been extensively studied for various many-body systems (e.g., Refs. [1, 2, 3]). Recently, analogous singular behavior has been noted for states throughout the excitation spectrum of certain many-body models [4, 5, 6, 7, 8, 9, 10], namely the Lipkin model [11] and the interacting boson model (IBM) for nuclei [12]. These singularities have been loosely described as "excited state quantum phase transitions" (ESQPTs) [9]. In this article, we more closely and systematically examine the characteristics of such excited state singularities as phase transitions, to provide a foundation for future investigations. It is found that excited state quantum phase transitions occur in a much broader class of many-body models than previously identified.

Ground state QPTs are characterized by a few distinct but related properties. The QPT occurs as a "control parameter" $\xi$, controlling an interaction strength in the system's Hamiltonian $\hat H(\xi)$, is varied, at some critical value $\xi_c$. For specificity, we take the Hamiltonian to have the conventional one-parameter form interpolating linearly between two limiting Hamiltonians (made explicit below). At the critical value: (1) The ground state energy is nonanalytic as a function of the control parameter at $\xi_c$. (2) The ground state wave function properties, expressed via "order parameters" such as the ground state expectation values of the two limiting Hamiltonians, are nonanalytic at $\xi_c$. These two properties are not independent, since the evolution of the ground state energy and that of the order parameters are directly related by the Feynman-Hellmann theorem [13]. (3) The gap between the ground state and the first excited state vanishes at $\xi_c$. (Here we consider only continuous phase transitions. More specifically, the systems considered in this article undergo second-order phase transitions, in which discontinuity occurs in the second derivative of the ground state energy and the first derivatives of the order parameters.) Singularities strictly only occur for an infinite number of particles in the many-body system, but precursors can be observed even for very modest numbers of particles.
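For definiteness, the "conventional form" referred to above can be written in the notation standard for this class of models (the operator labels $\hat H_1$ and $\hat H_2$ are my assumption, not transcribed from the original):

$$\hat H(\xi) = (1-\xi)\,\hat H_1 + \xi\,\hat H_2,\qquad 0\le\xi\le1,$$

with the Feynman-Hellmann theorem then relating the evolution of the ground state energy to the order parameters through

$$\frac{dE_0}{d\xi} = \left\langle \frac{\partial\hat H}{\partial\xi} \right\rangle = \langle\hat H_2\rangle - \langle\hat H_1\rangle .$$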
For finite particle number $N$, the defining characteristic of the QPT is therefore not the presence of a true singularity but rather well-defined scaling behavior of the relevant quantities towards their singular large-$N$ limits [14].

For the systems which exhibit excited state QPTs, the vanishing gap between the ground state and first excited state at the ground state QPT does not occur in isolation. Rather, there is a bunching of levels near the ground state, that is, a vanishing of the average level spacing or, equivalently, an infinite local level density. The infinite level density, moreover, propagates to higher excitation energy (as illustrated for a two-level fermionic pairing model in Fig. 1) as the control parameter $\xi$ is varied past its critical value, hence the concept of a continuation of the QPT to excited states. The singular level density occurs simultaneously with singularities in other properties of the excited states, such as the level energies and the expectation values of the order-parameter operators.

[Figure 1: Excitation energies for the two-level fermionic pairing model (2.7) at half filling and zero seniority, as a function of the control parameter $\xi$.]

First, we review the essential properties of the two-level pairing many-body models, for both bosonic and fermionic constituents (Sec. 2). We find that ESQPTs are universal to these models, suggesting that the ESQPT phenomena may be broadly relevant, at least to systems dominated by pairing interactions. The semiclassical analysis of a "sombrero" potential provides a basis for understanding many of the properties of the quantum many-body ESQPT [6, 9]. The semiclassical analysis of Refs. [6, 9] is extended in Sec. 3 to address several properties relevant to the definition of phase transitions. In particular, the singularity in the eigenvalue spectrum and the finite-size scaling behavior for the ESQPT are determined at the mean field level. Numerical calculations for the full quantum problem are considered in Sec. 4, where we investigate manifestations of the ESQPT in the excitation spectrum and in the properties of "order parameters" for the excited states. Finally, we consider the ESQPT as a boundary between qualitatively distinct "phases" (Sec. 5). The relationship between the two-level boson models and the two-level pairing model is established for arbitrary dimension in the appendices, where some further mathematical definitions and identities are also provided for reference.

2 Bosonic and fermionic two-level models

Ground state QPTs have been studied extensively (e.g., Refs. [15, 16, 2, 17]) for the two-level boson models, or $s$-$b$ models, defined in terms of a singlet $s$ boson and an $n$-fold degenerate $b$ boson [Fig. 2(a)]. Models in this class include the interacting boson model (IBM) for nuclei [12], which is defined in terms of $s$ and $d$ bosons, and the vibron model for molecules [18]. Also, the Lipkin model [11] has several isomorphic realizations, defined variously in terms of systems of interacting fermions, interacting spins, or interacting bosons (Schwinger realization). This last realization falls into the two-level boson model categorization, as the $n=1$ case. So far, excited state QPTs have been considered in the Lipkin model [5, 6] and the IBM [7, 8, 9], both of which are examples of $s$-$b$ two-level models.
[Figure 2: Single-particle level degeneracies for the various classes of two-level models considered: (a) the $s$-$b$ boson models, (b) the more general two-level bosonic pairing models, and (c) the two-level fermionic pairing models.]

The $s$-$b$ two-level models are described by the algebraic structure (2.1), a unitary algebra in $n+1$ dimensions with two subalgebras sharing a common angular momentum algebra. The generators are given in tensor form in Appendix A, where detailed definitions may be found. If the Hamiltonian is simply taken as the Casimir operator (A.11) of either of the subalgebras, U($n$) or SO($n{+}1$), a dynamical symmetry is obtained. The U($n$) symmetry is geometrically related to the $n$-dimensional harmonic oscillator, the SO($n{+}1$) symmetry to the $n$-dimensional rotator-vibrator (e.g., Ref. [19]). The ground state QPT in the two-level boson models arises as the Hamiltonian is varied linearly between the two dynamical symmetries, for instance, by varying $\xi$ in the Hamiltonian (2.2), built from the $b$-boson occupancy $\hat n_b$ and an interaction connecting the two levels. This Hamiltonian yields the U($n$) symmetry for $\xi=0$ and the SO($n{+}1$) symmetry for $\xi=1$. The Hamiltonian is invariant under the common angular momentum algebra in (2.1) and therefore conserves an $n$-dimensional angular momentum quantum number. As $\xi$ is increased from 0, the increasing strength of the interaction between the $s$ and $b$ levels changes the structure of the ground state from a pure $s$-boson condensate to a condensate involving both types of bosons. For asymptotically large values of the total particle number $N$, the change is abrupt. A second-order ground state QPT is well known to occur at a critical coupling $\xi_c$, with all the properties enumerated in Sec. 1. [The coefficients in (2.2) are scaled by appropriate powers of $N$ to guarantee that the location of the critical point is independent of $N$ in the large-$N$ limit.] With more complex interactions in the Hamiltonian, first-order QPTs, such as the physically important phase transition in the IBM, may also be obtained [15, 2, 20]. The conditions under which such first-order phase transitions occur in an arbitrary model are outlined in Ref. [21]. However, only second-order ground state QPTs will be considered here.

We observe, moreover, that the two-level boson models are special cases of an even larger family of models, the two-level pairing models with quasispin Hamiltonians. Two-level pairing models can be defined for systems of either bosons or fermions. The two-level pairing models undergo a second-order ground state QPT [22]. Therefore, it is natural to consider the possibility that excited state QPTs may occur within the context of this broader family of models as well. The quasispin pairing Hamiltonian is of the form (2.3), consisting of single-particle terms for the two levels and a pairing interaction between time-reversed pairs, where the summation indices run over the single-particle levels and over their substates. The operators may be either bosonic [Fig. 2(b)] or fermionic [Fig. 2(c)], as appropriate. Although the Hamiltonian (2.3) superficially appears quite different from the two-level boson model Hamiltonian (2.2), the two are in fact equivalent [23, 24]. The detailed relationship between the models is established for arbitrary $n$ in Appendix A. It is well known that the pairing Hamiltonian (2.3) can be expressed in terms of the generators of a quasispin algebra (A.2), as in (2.4), where the upper and lower signs apply in the bosonic and fermionic cases, respectively. The quasispin algebra is either an SU(1,1) algebra if the operators are bosonic [25] or an SU(2) algebra if the operators are fermionic [26].
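For reference, a minimal sketch of the standard quasispin definitions for a single fermionic level of pair degeneracy $\Omega$ (textbook conventions, not transcribed from this paper; the phase conventions for the time-reversed partner $\bar m$ are suppressed):

$$\hat S_+ = \sum_{m>0} a^\dagger_m a^\dagger_{\bar m},\qquad \hat S_- = (\hat S_+)^\dagger,\qquad \hat S_z = \tfrac12(\hat n - \Omega),$$

$$[\hat S_+,\hat S_-] = 2\hat S_z,\qquad [\hat S_z,\hat S_\pm] = \pm\hat S_\pm,$$

which close on SU(2). For a bosonic level, the analogous pair operators satisfy the same relations but with the sign of the commutator $[\hat S_+,\hat S_-]$ reversed, closing instead on the noncompact algebra SU(1,1).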
However, the pairing models are also characterized by an overlaid unitary algebraic structure, described further in Ref. [27], in both the bosonic and the fermionic cases, directly generalizing the algebraic structure (2.1) of the $s$-$b$ boson models. The generators are again bilinears in the creation and annihilation operators. The angular momentum subalgebras for the two levels provide conserved angular momentum quantum numbers, which are equal to the seniority quantum numbers defined in the quasispin formulation. The ground state QPT in the general two-level pairing models is again between the dynamical symmetry of the uncoupled levels and the pairing dynamical symmetry. To choose a transitional Hamiltonian for the general pairing models consistent with the Hamiltonian already used for the $s$-$b$ boson models, we observe that the Hamiltonian (2.2) may be reexpressed (see Appendix A) in pairing form, as in (2.7), to within an additive constant; the full relation is given explicitly in (A.17). With this form of Hamiltonian for the pairing models, the ground state QPT again occurs at $\xi=\xi_c$.

Since QPTs occur in the limit of large particle number, an important distinction arises between bosonic and fermionic models. Arbitrarily large particle number can be achieved in the bosonic models, even for fixed level degeneracies, simply by increasing the total occupancy. For a fermionic model, however, the total occupancy is limited by Pauli exclusion to the total degeneracy of the two levels. Therefore, the limit of large particle number can only be achieved if the number of available substates in each level is simultaneously increased. For two fermionic levels of equal degeneracy, half filling is achieved when the particle number equals the degeneracy of a single level.

It will be convenient to make extensive use of the two-dimensional vibron model [28, 29, 30] for illustration in this article. The two-dimensional vibron model is the simplest two-level model which still retains a nontrivial angular momentum or seniority quantum number (unlike the Lipkin model).

[Figure 3: Eigenvalue spectra for the two-dimensional vibron model states with zero angular momentum, for several specific values of the Hamiltonian parameter $\xi$. Eigenvalues are plotted with respect to the scaled excitation quantum number $n/N$.]

First, note that single-particle levels in bosonic pairing models [Fig. 2(a,b)] are only restricted to odd degeneracies (i.e., degeneracy $2l+1$ with $l$ integer) if a physical three-dimensional angular momentum subalgebra is required in (2.1) or (2.5). The pairing interaction only requires the definition of time-reversed pairs. It therefore suffices to have an "$m$" quantum number, with pairs $(m,-m)$, without necessity for an "$l$" quantum number. The pairing interaction can therefore be defined for an even number of boson substates, and the interaction within each level is described by a quasispin algebra with even degeneracy. (Footnote: However, pairing is not well-defined for the converse situation, a fermionic level of odd degeneracy. With an odd number of substates, one ("$m=0$") must necessarily be its own conjugate under time reversal. Creation of a time-reversed pair involving this substate is Pauli forbidden. The corresponding quasispin algebra, for odd degeneracy, is not defined.) Bosonic levels of even degeneracy arise naturally in problems lacking three-dimensional rotational invariance. The two-dimensional vibron model may be obtained by considering the three-dimensional vibron model and eliminating the $m=0$ substate. The geometrical coordinates associated with the three-dimensional model describe three-dimensional dipole motion (as in a linear dipole molecule).
However, elimination of the $m=0$ substate "freezes out" motion in the $z$ direction, so the model instead describes two-dimensional motion in the $x$-$y$ plane. The transitional Hamiltonian, in Casimir form, is the two-dimensional counterpart of (2.2), to within an additive constant [28, 29]. The conserved quantum number is the two-dimensional angular momentum $\ell$. The eigenvalue spectra of $\ell=0$ states, for various values of $\xi$, are shown in Fig. 3. The spectra for the two dynamical symmetries ($\xi=0$ and $\xi=1$) have simple analytic forms [28]. Note also the spectrum for the ground state QPT ($\xi=\xi_c$).

3 Semiclassical dynamics

3.1 Coordinate Hamiltonian

Each of the many-body models considered in Sec. 2 has an associated classical Hamiltonian, defined with respect to classical coordinates and momenta, which is obtained through the use of coherent states [31, 2, 32]. The basic properties of the excited state quantum phase transition follow from the semiclassical analysis of a double-well potential with a parabolic barrier [Fig. 4(c)] or, in higher dimensions, a sombrero potential (also known as the "champagne bottle" potential [33]). The semiclassical dynamics for these potentials has been studied in depth [34, 33, 35, 36, 37], and the connection with ESQPT phenomena in the Lipkin model and higher-dimensional $s$-$b$ boson models has been made in Refs. [4, 6, 8, 9]. In particular, at the energy of the top of the barrier, the classical action undergoes a logarithmic singularity, which leads semiclassically to the prediction of an infinite level density. Here we do not attempt a comprehensive recapitulation of the existing analysis but rather briefly summarize the essential points and derive some results specifically relevant to the observables of interest in phase transitional phenomena.

For the quasispin models of Sec. 2, the two superposed algebraic structures (quasispin and unitary) give rise to two alternative sets of coherent states and therefore to two realizations of the classical dynamics. The SU(1,1) or SU(2) quasispin algebra yields a one-dimensional dynamics (the phase space is a Bloch sphere or hyperboloid [31, Ch. 6]) which is common to all the quasispin models. The dynamics arising from the quasispin algebra therefore highlights aspects universal to these models, yielding the basic double-well potential [Fig. 4(c)] and therefore indicating that all should exhibit an ESQPT at the energy of the top of the barrier. In contrast, the coherent states obtained from the unitary algebra yield a much richer classical dynamics, in higher dimension, associated with a coset space [31, Ch. 9]. This more complete dynamics, so far only fully investigated for the $s$-$b$ models [38, 39], yields a much more detailed description of the system. The dynamics obtained from the quasispin algebra is essentially a one-dimensional projection or "shadow" of the full dynamics arising from the unitary algebra, as described by Feng, Gilmore, and Deans [2] for the IBM. In particular, the presence of angular degrees of freedom and conserved angular momentum quantum numbers has significant consequences for the ESQPT [8, 9].

First, let us summarize the classical Hamiltonian obtained from the coherent states for the $s$-$b$ model. The classical Hamiltonian acts on $n$ coordinates and their conjugate momenta.
However, for the rotationally invariant interaction in (2.2), the Hamiltonian is invariant under rotations in the $n$-dimensional coordinate space and can therefore be expressed solely in terms of a radial coordinate $r$, its conjugate momentum $p_r$, and a conserved angular kinetic energy term, as in (3.1) [17, 39, 9], where the angular momentum operator has eigenvalue $\ell(\ell+n-2)$ and the coordinate is defined only on a bounded radial domain. (Footnote: In obtaining (3.1) from Ref. [39], a scaling transformation has been made, and a constant term of subleading order has been suppressed.) The eigenvalue problem for (3.1) therefore has the form of a radial Schrödinger equation with a quadratic-quartic potential, except for the appearance of a position-dependent kinetic energy term. For the one-dimensional case, i.e., the Lipkin model, the centrifugal term is not present, and the coordinate and momentum are more aptly denoted by $x$ and $p$, as in (3.2), where here both negative and positive values of the coordinate are allowed. The role of $\hbar^2/(2m)$ in the usual Schrödinger equation is taken on by the coefficient of the squared momentum in (3.1) or (3.2); the effective $\hbar$ is of order $1/N$, and the mass is coordinate dependent. The forms assumed by the quadratic-quartic potential in (3.1) or (3.2) are summarized for convenience in Fig. 4(a-c). For the radial problem, of course, only the positive abscissa is relevant. For $\xi<\xi_c$, the potential has a single minimum, at $r=0$, which is locally quadratic. For $\xi=\xi_c$, the critical value for the ground state QPT, the potential is pure quartic. For $\xi>\xi_c$, the familiar double-well potential is obtained (or the sombrero potential in higher dimensions). For the Hamiltonian (3.1) or (3.2), the zero in energy is such that the top of the barrier is always at $E=0$, independent of $\xi$.

3.2 Singular properties of the action

The main semiclassical features of levels at energies near the top of the barrier are obtained by noting that for $E\approx0$ the classical velocity locally vanishes at the top of the barrier ($r=0$). While indeed the classical velocity also vanishes at the ordinary linear turning points of a potential well, the vanishing slope at the top of the barrier presents a qualitatively broader "flat" region over which the classical velocity is small. Thus, the semiclassical motion has a long "dwell time" in the vicinity of $r=0$. This leads to two essential results, namely (1) an infinite period for classical motion across the top of the barrier and (2) strong localization of the semiclassical probability density at the top of the barrier [6].

The first-order semiclassical analysis provides a simple guide map to the properties of the spectrum as a whole and also provides an explanation for the singularity in level density as the top of the barrier is approached. We consider the one-dimensional problem (3.2), but the results apply equally to the radial problem (3.1) with $\ell=0$. For the Hamiltonian (3.2), the usual first-order WKB quantization condition (3.3) [40] relates the quantum number $n$ to the classical action $S(E)$, where the action over a full classical period of motion is given by the integral (3.4) between the classical turning points. (Footnote: Some bookkeeping issues naturally must be taken into account in the one-dimensional double-well problem [Fig. 4(c)]. For $E<0$, i.e., below the barrier, the two wells are classically isolated. Applying the quantization condition with the action evaluated over one of the wells in isolation is equivalent to counting only states of one parity (symmetric or antisymmetric). For $E>0$, applying the quantization condition with the action evaluated over the full well counts states of both parities.
Questions as to the proper transition between the regimes $E<0$ and $E>0$ are somewhat artificial, since the validity conditions for (3.3) break down at $E\approx0$.) The action depends upon $\xi$ variously through the mass, the potential, and the turning points.

[Figure 4: Contour plot showing the global structure of the classical action for the geometric Hamiltonian (3.1) or (3.2), through the different regimes determined by the shape of the quadratic-quartic potential energy function, which is shown for (a) $\xi<\xi_c$, (b) $\xi=\xi_c$, and (c) $\xi>\xi_c$. The individual contours are related semiclassically to the evolution of the level eigenvalues.]

The quantization condition (3.3) implicitly gives the adiabatic evolution of the energy of a given level with respect to the parameter $\xi$. Since (3.3) enforces that the action be constant if $n$ is held fixed, the curve describing $E_n(\xi)$ is simply a contour of the action in the $\xi$-$E$ plane. These contours, calculated numerically for the Hamiltonian (3.2) [or (3.1) with $\ell=0$], are plotted in Fig. 4. A compression of energy levels at $E=0$ is visible qualitatively even here. [In Fig. 1, the eigenvalues are plotted as excitation energies and therefore cannot be compared directly with Fig. 4. More appropriate plots for comparison may be found in the following section, e.g., Fig. 7(a).] The derivative $dE/d\xi$ along a single contour of the action is plotted in Fig. 5(a). Note that it undergoes a singularity, in which the slope approaches zero while the curvature diverges, at a critical value of $\xi$.

[Figure 5: Singularities in derivatives of the classical action (3.4) for the geometric Hamiltonian (3.1) or (3.2). (a) The derivative $dE/d\xi$ along a contour of the action (Fig. 4), related semiclassically to the adiabatic evolution of the level energy. (b) The inverse of the partial derivative $\partial S/\partial E$, proportional to the semiclassical estimate for the gap.]

In semiclassical analysis, the gap or level density is directly related to the classical period. From the quantization condition (3.3), it follows that the semiclassical estimate of the gap between adjacent levels is inversely proportional to $\partial S/\partial E$, which, by differentiation of (3.4), is simply the period of the classical motion. As already noted for the ESQPT [6], the period becomes infinite at $E=0$ and, equivalently, the gap vanishes. An explicit calculation of the gap as a function of $E$ for the classical Hamiltonian (3.2) is shown in Fig. 5(b). Note that the gap undergoes a singularity, vanishing with divergent slope, at the critical energy $E=0$.

For nonzero angular momentum $\ell$, the origin ($r=0$) is classically forbidden due to the centrifugal term in (3.1), which causes the wave function probability near the origin to be suppressed. This mitigates the effects just described, by masking the top of the barrier and precluding the long semiclassical dwell time at the origin [9]. The dependence of the Hamiltonian (3.1) on $\ell$ is through the coefficient of the centrifugal term, which grows with $\ell/N$. Therefore, the phenomena associated with the ESQPT can be expected to be suppressed for sufficiently large $\ell$ at any given value of $N$. On the other hand, the angular momentum effects at any given value of $\ell$ are negligible for sufficiently large $N$. That is, the signatures of the ESQPT persist for small $\ell/N$ and only disappear for $\ell/N$ of order unity (as illustrated quantitatively in Sec. 4.1).

3.3 Asymptotic spectrum

Let us now consider more precisely the form of the singularity in the spectrum in the immediate neighborhood of the ESQPT. As the wave function becomes increasingly well-localized near the top of the barrier for $E\approx0$, it should become an increasingly good approximation to treat the barrier as a pure inverted oscillator potential, $V(x)\approx-\tfrac12 kx^2$.
The position-dependent kinetic energy term also becomes irrelevant. In the action integral (3.4), the inner classical turning point lies at the barrier for $E<0$, while for $E>0$ the integration simply extends to the origin. The distant turning point is a slowly varying function of $E$ which does not contribute to the singularity, so we may take it to be a constant. (In any case, for the actual potential, the approximation of a pure parabolic barrier breaks down well before the distant turning point is reached.) The action integral therefore reduces to that of an inverted oscillator Hamiltonian, for which we define an oscillator constant by analogy with the conventional harmonic oscillator. Expanding this action integral [41, (2.271.3)] for small $|E|$ yields (3.6), whose singular term is proportional to $E\ln|E|$, with the remaining coefficients depending only on the potential parameters. An essentially identical result is obtained for $E>0$ [41, (1.646.2)]. The singular behavior for energies near the top of the barrier therefore arises from the $E\ln|E|$ term. (Footnote: Since the Schrödinger equation for a pure parabolic barrier is exactly solvable in terms of parabolic cylinder functions [42], the $E\ln|E|$ dependence can also be obtained by explicitly matching this solution for the wave function in the vicinity of the barrier to asymptotic WKB wave functions away from the barrier [43].) The quantization condition (3.3) then takes on the form (3.9), where the reference quantum number $n_0$ is obtained for $E=0$. If the energy dependence in (3.6) is truncated at the terms shown, i.e., linear order in $E$ apart from the logarithm, this quantization condition can be solved for $E$ in terms of the Lambert $W$ function, by (B.5), yielding (3.10). The relevant properties of the $W$ function are summarized in Appendix B.

For the Hamiltonian (3.2), the top of the barrier is described by an oscillator constant which may be read off from the coefficients of the squared momentum and squared coordinate terms, giving (3.8). The oscillator constant thus depends upon the coupling $\xi$. The same function, interestingly, also enters into the ground state QPT scaling properties, as obtained by the continuous unitary transform method in Ref. [44]. The semiclassical estimate for the eigenvalue spectrum in the vicinity of the ESQPT is therefore (3.10), where the nonsingular coefficients will contain a dependence on $\xi$ as well. Differentiation with respect to the quantum number, making use of (B.4), yields a semiclassical estimate (3.11) for the energy gap between adjacent excited states. Since the excitation quantum number $n$ and particle number $N$ enter into the quantization condition (3.3) together in the combination $n/N$, the spectrum and finite-size scaling properties are inextricably linked at the semiclassical level. (Footnote: For the ground state QPT, the semiclassical potential is quartic [Fig. 4(b)]. A simple application of the WKB formula gives a power-law dependence of the energy on $n/N$, which simultaneously yields both the spectrum [Fig. 3, $\xi=\xi_c$] and the corresponding power-law scaling (Sec. 4.2).)

The expression (3.11), considered as a function of $N$ at fixed $n-n_0$, provides an estimate for the scaling of the gap at a given eigenvalue above or below the ESQPT. The large-$N$ behavior follows from the known asymptotic form (B.2) of the $W$ function as a sum of logarithms for large argument (see Fig. 12). The asymptotic form (B.2) provides a good approximation to $W$ already for moderate values of its argument. For very large $N$, the scaling behavior is in principle even simpler. The leading logarithm in (B.2) outgrows the doubly logarithmic term as the argument grows. With this logarithmic approximation, an extreme asymptotic estimate (3.12) is obtained, recovering the logarithmic scaling noted by Leyvraz and Heiss [6].
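As a numerical illustration of this inversion step, the sketch below solves a toy quantization condition of the truncated near-barrier form with scipy's Lambert $W$ routine. The constants alpha, beta, and h_eff are placeholders of my own choosing, not the coefficients of the paper's (3.6)-(3.11); the point is only that the closed-form solution reproduces the condition exactly and that the gap collapses logarithmically as the barrier-top energy $E=0$ is approached.

import numpy as np
from scipy.special import lambertw

# Toy near-barrier quantization condition (E > 0, measured from the barrier top):
#     S(E) = alpha * E * ln(1/E) + beta * E = s_k,   s_k = (k + 1/2) * h_eff
# Inverting for E with the Lambert W function (branch -1, valid for small E):
#     E_k = -(s_k / alpha) / W_{-1}( -(s_k / alpha) * exp(-beta / alpha) )

alpha, beta, h_eff = 1.0, 2.0, 0.01   # placeholder values

def level_energy(k):
    s = (k + 0.5) * h_eff
    z = -(s / alpha) * np.exp(-beta / alpha)
    return float(np.real(-(s / alpha) / lambertw(z, -1)))

energies = np.array([level_energy(k) for k in range(200)])
gaps = np.diff(energies)

# Consistency check: the closed form satisfies the original condition exactly.
def action(E):
    return alpha * E * np.log(1.0 / E) + beta * E

assert np.allclose([action(E) for E in energies],
                   [(k + 0.5) * h_eff for k in range(200)])

# The spacing shrinks only logarithmically as E -> 0 (the ESQPT energy):
print(gaps[:3])    # smallest gaps, nearest the barrier top
print(gaps[-3:])   # larger gaps, farther from the barrier top

Run as-is, the first few spacings come out roughly an order of magnitude smaller than the last few, the level-bunching signature discussed above.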
However, even at quite large $N$, the purely logarithmic approximation retains an appreciable error and therefore is of limited quantitative value for systems of typical "mesoscopic" size.

Note that the quantization condition as given in (3.3) is derived under the assumption that the classical turning points are well separated (by several de Broglie wavelengths) and that the potential is locally linear at these turning points [40]. This suffices for the analysis of levels which are not close in energy to the top of the barrier. However, for $E\approx0$, the barrier presents a quadratic classical turning point. (Equivalently, the linear turning points on either side of the barrier approach each other, violating the assumption of sufficient separation.) For accurate quantitative analysis of the levels immediately surrounding $E=0$, the more general phase-integral method must be applied [45]. For a smooth, symmetric double-well potential, the phase-integral method yields an approximate quantization condition (3.13) [45, (3.47.1)], with an integer quantum number, where the various phases appearing on the right-hand side are defined in Ref. [45]. The full derivation involves the evaluation of contour integrals on the complex extension of the coordinate axis and the consideration of complex-valued turning points for energies just above the barrier [45]. Quantitative solution of the problem is considered in detail in Refs. [46, 47, 35, 37]. The effects of these corrections (3.13) relative to (3.3) are explored in Ref. [33]. The corrections are essential to the treatment of the first few eigenvalues above or below the barrier. However, here we are instead interested in extracting the basic nature of the singularity from the dependence of the action on $E$ in the vicinity of $E=0$, for which the simple quantization condition (3.3) suffices.

4 Quantum properties

4.1 Eigenvalue spectrum

In a ground state QPT, the singular behavior of the system is simultaneously reflected in the eigenvalue spectrum (ground state energy and gap) and in the order parameters. From the preceding semiclassical analysis (Sec. 3), it is to be expected that a similar variety of interconnected phenomena occur at the ESQPT, and this is indeed borne out by the quantum calculations. Of course, the analogy between ground state QPT and ESQPT is far from exact, so let us now examine the results for spectra and order parameters obtained numerically from the full quantum calculation, to elucidate both the analogy with the ground state QPT and the applicability of the semiclassical results of Sec. 3.

While the ground state QPT may only be traversed by varying a Hamiltonian parameter, the locus of the ESQPT is a curve in the two-parameter space defined by the Hamiltonian parameter $\xi$ and the excitation energy (as along the dense band in Fig. 1). Therefore, the ESQPT may be crossed either "horizontally", by varying $\xi$, or "vertically", by varying the excitation quantum number $n$ (or, equivalently, the energy $E$) of the level being examined. The energy spectrum consists of the set of eigenvalues $E_n$, which contain dependences on several quantities: the system size $N$, the Hamiltonian parameter $\xi$, the excitation quantum number $n$, and other conserved quantum numbers (angular momenta or seniorities in the present models). For large $N$, however, $n/N$ and $\ell/N$ become essentially continuous variables. In the preceding section, it was seen that semiclassically the energy depends upon the quantum numbers only through these combinations. We are therefore largely interested in the properties of the spectrum given by a function $E(\xi,n/N,\ell/N)$ of three quasi-continuous variables [17].
The dependence of the spectrum on interaction, excitation quantum number, and angular momentum is contained in the dependence of this function on its three arguments. (Footnote: In the Hamiltonians (2.2) and (2.7), the coefficients of the one-body operators are of order unity and the coefficients of the two-body operators are scaled by $1/N$. Often a Hamiltonian normalization differing by an overall factor of $N$ is instead used, e.g., for the $s$-$b$ model. For one normalization it is the scaled energy, and for the other the energy itself, which approaches a limiting value as $N\to\infty$, by (3.3).) Furthermore, note that the dependence on the argument $n/N$ implicitly contains information not only on the excitation spectrum [when the function is considered as a function of $n$ at fixed $N$] but also on the finite-size scaling behavior [when the function is considered as a function of $N$ at fixed $n$]. The properties of the spectrum in the vicinity of the ground state, that is, for $n/N\approx0$, have been studied in detail, at least for the $s$-$b$ models. Here, instead, we are considering the regime of excited states surrounding the ESQPT.

First, let us establish the common ground between the various models under consideration (Sec. 2), by a simple comparison of the energy spectra. Calculations are shown in Fig. 6 for the Lipkin model [Fig. 6(a)], the two-dimensional vibron model [Fig. 6(b)], a bosonic pairing model with equal degeneracies for both levels [Fig. 6(c)], and a fermionic pairing model with equal degeneracies [Fig. 6(d)]. The calculations are all for a fixed, modest particle number, so that individual eigenvalues are clearly distinguishable. In the comparison, we must distinguish the invariant subspaces of states for each model. Each eigenstate of the Lipkin model contains only even or odd excitation-number components and is thus characterized by a grading, or parity, quantum number with two values.

[Figure 6: Eigenvalues for (a) the Lipkin model (Schwinger realization), (b) the two-dimensional vibron model, (c) the bosonic pairing model, and (d) the fermionic pairing model, as functions of the coupling parameter $\xi$, all for the same total particle number. For the Lipkin model, both even-parity (solid curves) and odd-parity (dashed curves) levels are shown. For the other models, only the lowest angular momenta or seniorities are shown. A diagonal contribution has been subtracted from the Hamiltonian (2.7) for the bosonic pairing model [27].]

[Figure 7: Angular momentum dependence of spectral properties for the two-dimensional vibron model. (a,b) Evolution of eigenvalues with $\xi$ for small and for large $\ell/N$. (c) Dependence of the gap on excitation energy, as in Fig. 8(b), for various $\ell$.]

Note the essentially identical evolution, with respect to $\xi$, of the even-parity states of the Lipkin model, the zero angular momentum ($\ell=0$) states of the vibron model, and the zero seniority states of both the bosonic and fermionic pairing models (solid curves in Fig. 6). The ground state energy is near constant for $\xi<\xi_c$ and decreases for $\xi>\xi_c$. The highest eigenvalue decreases approximately linearly with $\xi$. Various qualitative features associated with the ESQPT occur at $E=0$ for $\xi>\xi_c$ for these models. Note especially the inflection points for these levels (solid curves) as well as the change in the pattern of degeneracies between different seniorities (or parities or angular momenta) at $E=0$. The major differences among the models lie in the degeneracy patterns at nonzero seniority, which depend upon the specific algebraic properties of the individual models [27].
At present, we will limit consideration of angular momentum effects to the $s$-$b$ models, since for these only one angular momentum quantum number is involved, and the two-dimensional vibron model (Sec. 2) serves as a natural example for illustration.

The semiclassical analysis of Sec. 3 provided a simple set of predictions (Fig. 5) for the singular behavior as the ESQPT is crossed both "horizontally" [varying $\xi$] and "vertically" [varying $E$]. Namely, the level energy undergoes a singularity in which the slope sharply approaches zero [Fig. 5(a)] but with a curvature which becomes infinite and reverses sign, yielding a special divergent form of inflection point, as the ESQPT is crossed. A similar singularity is expected in the gap [Fig. 5(b)] at the critical energy. The actual diagonalization results at finite $N$ show clear precursors of this form of singularity in the level energies as $\xi$ is varied. Even for the small system size considered in Fig. 6, each eigenvalue undergoes an inflection [Fig. 6 (solid curves)] at an energy close to the expected critical energy, i.e., $E=0$ for the Hamiltonians used. The derivative of the level energy with respect to $\xi$ is shown for larger boson numbers in Fig. 8(a), for the two-dimensional vibron model $\ell=0$ states. The second derivative is also shown (inset). The expected dip and divergent inflection both are present and become gradually sharper with increasing $N$.

[Figure 8: Evolution of excited level energies and the order parameter across the ESQPT, as traversed both by varying $\xi$ (left) and by varying $E$ (right), i.e., "horizontally" and "vertically". Calculations are shown for the two-dimensional vibron model $\ell=0$ states, for a smaller (dashed curves) and a larger (solid curves) boson number $N$. (a) The first and second (inset) derivatives of the level energy with respect to $\xi$, for a specific excited level. (b) The derivative of the energy with respect to the excitation quantum number or, equivalently, the scaled gap, and its derivative (inset), at fixed $\xi$. (c) The order parameter (rescaled by $N$) as a function of $\xi$, for the same level as in panel (a). (d) The order parameter (rescaled by $N$) as a function of excitation energy, for the same $\xi$ value as in panel (b). The discrete eigenstates are resolved at the expanded scale shown in the inset.]

For nonzero $\ell$ in Fig. 6(b), the inflection points in the eigenvalues as functions of $\xi$ are washed out, as expected from the semiclassical analysis (for $\ell\neq0$ the centrifugal term suppresses the probability density near $r=0$, mitigating the effect of the barrier). For this small system size, the inflection points disappear even for the very lowest nonzero $\ell$ values [dashed curves in Fig. 6(b)]. The inflection points are likewise suppressed for the negative parity states of the Lipkin model in Fig. 6(a). Here a similar mechanism applies: negative parity states possess a node at $x=0$, and the effect of the parabolic barrier at $x=0$ is therefore again reduced. (To this extent, the grade in the Lipkin model is a surrogate for the angular momentum in the higher-dimensional boson models. The formal relation is given in Appendix A.) Compare also the curves for nonzero seniorities in Fig. 6(c,d). While the change in behavior of the eigenvalues between $\ell=0$ and nonzero $\ell$ seems to be rather abrupt for the illustration, it must be borne in mind that the relevant parameter for the semiclassical description was noted to be $\ell/N$, which can only be varied very coarsely when $N$ is small. The more gradual evolution of the ESQPT with $\ell/N$, as obtained for larger $N$, is considered further below.

The properties of the spectrum as the ESQPT is traversed "vertically", by varying the excitation quantum number $n$ for a single fixed Hamiltonian parameter value, are explored in Fig. 8(b), again for the two-dimensional vibron model, now at a specific parameter value $\xi>\xi_c$.
The singularity in the energy with respect to $n$ gives rise to the original, defining property of the ESQPT, namely the vanishing gap or infinite level density. The gap is simply the change in energy for a unit change in quantum number, so in the limit where $n$ is taken as a quasi-continuous variable we have $\Delta\approx\partial E/\partial n$. The gap is shown as a function of energy, rather than of $n$, in Fig. 8(b), so that the energy in the spectrum at which the precursors of the singularity occur can be compared with the expected critical energy $E=0$. The second derivative is also shown (inset). The qualitative features expected from the semiclassical analysis (a vanishing gap with divergent slope) are indeed realized, more sharply with increasing $N$.

The inflection point of the energy with respect to $n$ at $E=0$ (though not its singular nature) is also immediately visible simply by inspection of the spectra obtained for various $\xi$ (Fig. 3). The spectra are concave downward with respect to $n$ below $E=0$ and concave upward above this energy. At the $\xi=1$ limit, the entire spectrum falls below $E=0$, and constant downward concavity follows from the exact formula [28] for the eigenvalues, quadratic in the quantum number. Although here we are considering the dip in the gap as a property of the ESQPT in a many-body interacting boson model, it should be noted that the dip arising for the associated two-dimensional Schrödinger equation is well known as the "Dixon dip" [48], with applications to molecular spectroscopy (see also Ref. [30]).

For nonzero $\ell$, as noted above, the relevant parameter governing the disappearance of the ESQPT is expected to be $\ell/N$. The eigenvalue spectrum for the two-dimensional vibron model indeed shows compression of the level density at the critical energy for small $\ell/N$ [Fig. 7(a)] and, conversely, no apparent compression of level density for large $\ell/N$ [Fig. 7(b)]. (See Ref. [7] for analogous plots for the IBM.) However, the gradual nature of the evolution with $\ell/N$ is seen by considering the dip in the gap, which becomes continuously less deep and less sharp as $\ell$ is increased [Fig. 7(c)].

4.2 Finite-size scaling

The spectroscopic hallmark of the critical point of a QPT is not a vanishing gap per se, since the gap never strictly vanishes for finite system size, but rather the nature of its approach to zero as $N$ increases. It is therefore essential to characterize the finite-size scaling behavior of the gap in the vicinity of the ESQPT. With the Hamiltonian normalization of (2.2), the gap everywhere approaches zero with increasing $N$, so we are actually, more precisely, interested in the scaling of the gap at the ESQPT relative to the scaling elsewhere in the spectrum. For states well separated from both the ground state QPT and the ESQPT, the scaling is as $N^{-1}$. For states in the vicinity of the ground state QPT, the gap vanishes more quickly than $N^{-1}$, as a power law. This has been established both numerically and analytically for the various models under consideration [49, 50, 51, 52, 44, 30]. (Footnote: As noted above, different normalization conventions may be encountered for the model Hamiltonians. Overall multiplication of the Hamiltonian by a factor of $N$ gives rise to a superficial difference of unity in the finite-size scaling exponents.) The gap at the excited state QPT also approaches zero more rapidly than $N^{-1}$. This is apparent even from the simple plot Fig. 8(b), where the scaled gap is essentially independent of $N$ away from the critical energy (compare the curves for the two values of $N$) but approaches zero with increasing $N$ at the critical energy.
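This relative scaling is easy to reproduce numerically. The sketch below is my own construction, not code or parameters from the paper: it uses a Lipkin-type two-level boson Hamiltonian in the quasispin basis, with a per-particle normalization and a critical coupling $\xi_c=1/5$ that follow from a standard mean-field estimate for this particular coupling choice, and compares the scaled gap near the ESQPT energy $E=0$ with the scaled gap at the edge of the spectrum.

import numpy as np

# Lipkin-type two-level boson model in the quasispin basis, j = N/2:
#   n_t = J_z + j,   interaction Q = s†t + t†s -> 2 J_x  (Schwinger realization)
#   H(xi) = [ (1 - xi)(J_z + j) - (4 xi / N) J_x^2 ] / N
# The 1/N prefactor keeps the spectrum of order unity; a mean-field estimate
# then places the ground state QPT at xi_c = 1/5 and the ESQPT at E = 0.

def even_parity_spectrum(N, xi):
    j = N / 2.0
    m = np.arange(-j, j + 1.0)                 # J_z eigenvalues, dimension N+1
    Jz = np.diag(m)
    cplus = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    Jp = np.diag(cplus, -1)                    # <m+1| J_+ |m> matrix elements
    Jx = 0.5 * (Jp + Jp.T)
    H = ((1 - xi) * (Jz + j * np.eye(N + 1)) - (4 * xi / N) * (Jx @ Jx)) / N
    even = np.arange(0, N + 1, 2)              # n_t even: the even-parity block
    return np.linalg.eigvalsh(H[np.ix_(even, even)])

for N in (100, 400, 1600):
    E = even_parity_spectrum(N, xi=0.6)
    gaps = np.diff(E)
    k = np.argmin(np.abs(E))                   # level closest to the ESQPT energy
    print(f"N={N:5d}   N*gap at E~0: {N * gaps[k]:.4f}   N*gap at top: {N * gaps[-1]:.4f}")

With these conventions the printed scaled gap at $E\approx0$ drifts slowly downward, roughly like an inverse logarithm of $N$, while the scaled gap at the spectrum edge stays essentially constant, mirroring the behavior of Fig. 8(b) described above.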
Let us now examine finite-size scaling more carefully, in particular, to see the extent to which the semiclassical expression (3.11) reproduces the scaling behavior. It is not a priori obvious that the semiclassical result (3.11) should yield the proper scaling properties for the eigenvalues in the vicinity of the ESQPT. Even in the solution of the ordinary Schrödinger equation, the semiclassical analysis becomes unreliable for the first few eigenvalues in the vicinity of the top of the barrier [34, 35, 33].

First, in Fig. 9(a), the actual form of the spectrum in the vicinity of the ESQPT, obtained by numerical diagonalization, is compared with the semiclassical estimate (3.10). Eigenvalues are shown for two values of $N$. Note that $n_0$ is simply determined as the value of $n$ for which the energy eigenvalues cross zero. This must be interpolated between discrete eigenvalues, so $n_0$ is in general noninteger. The singular logarithmic term in (3.6) has a coefficient which is predicted unambiguously from the value (3.8) of the oscillator constant for the inverted oscillator, but no attempt is made here to directly calculate the coefficient of the nonsingular linear term. Rather, that coefficient is simply chosen to numerically reproduce the linear trend in the eigenvalues in the vicinity of the ESQPT. The value obtained from a limited number of eigenvalues around $n_0$ therefore depends somewhat on both $N$ and the number of eigenvalues considered. The gap, that is, the first difference of the eigenvalues in Fig. 9(a), is plotted in Fig. 9(b), together with the semiclassical estimate (3.11). The form of the singularity is well matched by the semiclassical estimate. (The linear-term parameter essentially determines the normalization of the curve.) The most significant deviation occurs for the first few eigenvalues around $n_0$.

[Figure 9: Quantitative comparison of quantum and semiclassical results for the gap, including finite-size scaling properties, in the vicinity of the ESQPT. Calculations are for the two-dimensional vibron model $\ell=0$ states. (a,b) Eigenvalue spectrum and its first difference, i.e., the gap, shown as functions of $n$ for a smaller $N$ (open circles) and a larger $N$ (solid circles). The semiclassical result (3.10) or (3.11) in terms of the Lambert $W$ function is shown for comparison (solid curve). (c) Scaling of the gap with respect to $N$, evaluated at fixed quantum number $n-n_0$ relative to the ESQPT, for several values of $n-n_0$. The semiclassical results for the scaling are shown for comparison (solid curves). The results of the asymptotic logarithmic expression (3.12) are also indicated (open triangles).]

Some care must be taken in establishing exactly what gap is to be considered in the context of finite-size scaling, since the gap is a function of $n-n_0$, that is, of how far above or below the ESQPT the gap is measured. The phase transition does not fall exactly "on" an eigenvalue ($n_0$ is in general noninteger), the gap is varying singularly with $n$ at $n_0$, and the quantum corrections are fluctuating most strongly for the first few eigenvalues in the vicinity of $n_0$ [33]. Therefore, in considering the finite-size scaling at the mean field level, it is only meaningful to examine the gap some sufficient number of eigenvalues above or below the ESQPT, but nonetheless close enough, relative to $N$, that the scaling appropriate to the ESQPT dominates over the usual scaling. The gap a few eigenvalues above the ESQPT is plotted as a function of $N$ in Fig. 9(c).
(The quantity plotted is essentially the gap between the fifth and sixth eigenvalues above $n_0$, but interpolation is necessary, since $n$ is discrete and $n_0$ noninteger in the actual spectra.) Note foremost that the gaps evaluated at equal distances above and below the ESQPT converge towards each other for large $N$. Since the semiclassical spectrum is symmetric about $n_0$ [see (3.9)], this demonstrates that the asymptotic behavior depends on $n-n_0$ only through its magnitude, as expected if the properties of the ESQPT are dominated by the value (3.8) of the parabolic top of the barrier. The semiclassical estimate (3.11) is shown for comparison (using only one member of each symmetric pair, for simplicity) and appears to reasonably reproduce the finite-size scaling. The results of the simple logarithmic approximation from (3.12) are also shown for reference.

4.3 Order parameters

Let us now consider the singularity in the order parameter, which plays a defining role for the ground state QPT. The evolution of the order parameter is shown as a function of $\xi$ in Fig. 8(c), again for the two-dimensional vibron model, for the same level considered in Fig. 8(a). This quantity is closely related to the energy derivative plotted in Fig. 8(a), since the two are related by the Feynman-Hellmann theorem. It is seen that the order parameter undergoes a dip towards zero, which becomes sharper and deeper with increasing $N$. At the semiclassical level, one of the essential characteristics of the ESQPT was localization of the wave function at $r=0$, together with vanishing classical velocity (hence, a long dwell time) there. In coordinate form, the boson occupancy is determined by the coordinates and momenta [with the coordinate definitions used in (3.2)], so the natural extension to the fully quantum description is localization of probability with respect to occupation number at the value corresponding to the top of the barrier. The order parameter is shown as a function of energy in Fig. 8(d), for the same fixed $\xi$ value considered in Fig. 8(b). The "evolution" of properties with respect to excitation energy is of necessity discrete, since for finite $N$ the eigenvalue spectrum is itself discrete [Fig. 8(d) inset]. It is apparent from Fig. 8(c,d) that, while the order parameter drops towards zero at the ESQPT, and the dip becomes sharper and deeper with increasing $N$, it is far from actually reaching zero at the finite $N$ being considered.

5 Quantum phases

So far we have considered the excited state quantum phase transition as a singularity in the evolution of the excited state properties rather than as a boundary between phases. A central question which arises in connection with the ESQPT phenomenology concerns the meaning of "phases" for excited states, namely, whether or not the excited states on each side of the phase transition can meaningfully be considered to belong to qualitatively distinct phases. Of course, in thermodynamics, it is well known that phase transitions, in the sense of singularities, do not necessarily imply the existence of distinguishable phases, the liquid-vapor transition in the vicinity of a critical point being a classic counterexample. Here we approach identification of phases both through indirect measures of the structural properties of the states on either side of the ESQPT (e.g., order parameters and spectroscopic signatures) and directly through inspection of the wave functions.

For the ground state, the "phase" is simply indicated by the value of the order parameter. In the large-$N$ limit, the value of the order parameter is qualitatively different on either side of $\xi_c$, namely, vanishing for $\xi<\xi_c$ and nonzero, growing with $\xi$, for $\xi>\xi_c$. In contrast, for the excited states, the order parameter does not show such a qualitative difference between the two sides of the ESQPT.
Rather, the order parameter dips towards zero as the level crosses the ESQPT but is nonzero on either side (Sec. 4.3). Therefore, the expectation value by itself does not distinguish two "phases" for the excited states. The reason is fundamentally related to the classical limit of the problem (Sec. 3). Recall the relation between the order parameter and the classical coordinates. For the classical ground state, the kinetic energy vanishes, and the static equilibrium value of the order parameter is simply determined by the location of the minimum in the potential (3.1). For excited states, such a static quantity no longer provides a suitable measure of the phase at the classical level, since excited states (with nonzero kinetic energy) are not described by a single equilibrium position. Instead, one must consider a dynamical definition of phase, taking into account the topology of the classical orbits in the phase space [8, 9]. The classical analogue of the "expectation value" of an observable is its time average over the classical motion. This is also the semiclassical average with respect to the first-order WKB probability density, so the time average carries over naturally to the quantum expectation value. At the quantum level, the consequence of the breakdown of the static definition is that the expectation value does not provide an unambiguous measure of the phase of an excited state.

[Figure 10: Correlation diagram for the two-dimensional vibron model, showing the change in angular momentum degeneracies across the ESQPT.]

[Figure 11: Probability distributions for the entire spectrum of eigenstates, decomposed with respect to the occupation quantum number, for the two-dimensional vibron model. The probability distributions are shown for four values of $\xi$, panels (a)-(d).]
A Physicist's Guide to Machine Learning and Its Opportunities

Kendra Redmond, Editor

[The ATLAS collaboration upgrades parts of its detectors in preparation for the LHC upgrade. Image by Maximilien Brice, copyright CERN.]

As we browse, drive, watch, and order, data pours into the ether. Our behaviors and preferences are collected and analyzed, and in response, the world changes. The combination of advances in semiconductor computation devices and this new and extremely large influx of data has powered rapid growth in machine learning.

[Evgeni Gousev is senior director at Qualcomm Technologies Inc. and chairman of the board of directors of the tinyML Foundation, www.tinyML.org. He has a PhD in solid state physics. Photo courtesy of Gousev.]

"The world is becoming more digitized, whether or not we want it," says Evgeni Gousev, a PhD physicist and senior director at Qualcomm Technologies Inc., a company working toward an internet-of-things reality where billions of devices are intelligently connected. "We are all living in a data-driven world now," he says. Companies like Facebook (now Meta) are paying unfathomable sums of money to acquire technology startups, often for access to their data. And it's not just tech companies buying tech startups. The pharmaceutical company Pfizer gave Israel COVID-19 vaccine priority in 2021, in part because Israel agreed to share health data on its citizens. "Data is an integral component of the digital economy," says Sandeep Giri, a staff project manager at Google and honorary member of Sigma Pi Sigma.

With data―and the ability to interpret it―comes power. That may include economic power and the power to influence public opinion, but it can also include the power to improve access to education and healthcare, the diagnosis and treatment of diseases, car crash survival rates, severe weather predictions, our understanding of the universe, and many other aspects of the human experience. Finding meaning in massive amounts of messy, real-world data is a challenge, but it's one that physicists are uniquely poised to tackle. We're in the midst of "a once-in-a-century opportunity for physicists to play a bigger role in society," says Gousev.

Machine learning

That opportunity lies in the rapidly growing field of machine learning. A subset of artificial intelligence (AI), machine learning is perhaps the most powerful tool we have for making sense of data that isn't neatly organized or for which we don't know all the governing rules. Machine learning describes a system in which an algorithm, or set of algorithms, learns from data and adapts―a process with clear parallels to the way physicists build models of the messy real world. "Machine learning is essentially a system in which rather than building an algorithm or a model from an explicit description of desired behavior, we provide a set of examples that define the desired behavior of the system," says Chris Rowen, vice president for engineering for Collaboration AI at Cisco.

Rowen gives this example: Say you want a program that classifies something as a dog or a cat. You probably don't want to try to describe what makes a cat a cat or what makes a dog a dog, in algorithmic terms. Instead, machine learning allows you to train a generalized system with a bunch of pictures of cats and dogs.
From these inputs, the system extracts the relevant features of dogs and cats and infers an algorithm that distinguishes between these two classes of inputs across a wide variety of kinds of pictures [1].

"Machine learning is really great for cases where you don't have an algorithm with explicit rules on how to accomplish a certain task," says Michelle Kuchera, a computational physicist and assistant physics professor at Davidson College. She says that it's also great for discovery―looking for patterns, outliers, or unexpected behavior in data―and for making fast theoretical predictions. In cases when a prediction would typically take an extremely long time to calculate, you can use machine learning to build a surrogate model that can make much faster calculations.

From toolbox to sandbox

Machine learning has direct applications in physics and astronomy research. As co-PI of the Algorithms for Learning in Physics Applications group at Davidson, Kuchera collaborates with theoretical and experimental physicists to address computational challenges. Machine learning is ideal for overcoming some of these challenges, such as identifying interesting particle interactions among the huge data sets produced at particle accelerators and speeding up time-consuming theoretical predictions.

"If you look at the Large Hadron Collider (LHC), or any of the scientific instruments where there's a lot of fine-tuning that's all happening in real time with the magnets and so forth, and if you want to control them, it's great to be able to do that using machine learning... You're going to infer very complex patterns much more easily," says Vijay Janapa Reddi, associate professor of engineering and applied science at Harvard University.

It's not just the big particle physics collaborations that use machine learning. Scientists are using it to design new materials, find turbulent motion on the sun, uncover anomalies in the US power grid, give robots humanlike sensitivity to touch, and much more.

Machine learning isn't a magic bullet for all situations. "If you have a really solid understanding of the physics and the explicit mathematical rules to accomplish a task that you're interested in, then that's the preferred method, unless there's some challenge with implementing it or it's taking too long to be reasonable," Kuchera says. But it's one more powerful tool in the data analysis toolbox.

Applying machine learning to areas outside of physics and astronomy also constitutes a gratifying and fulfilling career for many physicists and astronomers. For his PhD thesis, Sean Grullon studied neutrino fluxes at the IceCube particle detector at the South Pole. He dabbled with machine learning at times, as one of many data analysis techniques. When he graduated and decided to leave academia, machine learning was starting to take off. Grullon jumped in and has been applying machine learning to healthcare-related challenges ever since. He's now the lead AI scientist at Proscia, a startup that builds tools to help pathologists find better ways to fight cancer. They're using deep learning, a subset of machine learning that utilizes neural networks, to analyze pathology images for melanoma. Deep learning is particularly powerful for natural language processing and computer vision applications, which are notoriously difficult to handle with conventional approaches.
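To make Rowen's cats-and-dogs point concrete, here is a minimal, self-contained sketch of the learn-from-examples idea: a toy nearest-centroid classifier working on made-up numeric features. It is purely illustrative (real image systems learn features from raw pixels with neural networks), and none of the numbers or names come from any product mentioned in this article.

import numpy as np

# Toy "learning from examples": each animal is reduced to two invented
# features, say (ear pointiness, snout length). No explicit cat/dog rules
# are coded; the decision rule is inferred from labeled examples.

rng = np.random.default_rng(0)
cats = rng.normal(loc=[0.8, 0.3], scale=0.1, size=(50, 2))   # synthetic cat examples
dogs = rng.normal(loc=[0.4, 0.7], scale=0.1, size=(50, 2))   # synthetic dog examples

# "Training": summarize each class by the mean of its examples.
centroids = {"cat": cats.mean(axis=0), "dog": dogs.mean(axis=0)}

def classify(features):
    # Predict the class whose centroid is nearest in feature space.
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

print(classify(np.array([0.75, 0.35])))   # -> cat
print(classify(np.array([0.45, 0.65])))   # -> dog

Swap the centroid rule for a neural network, and the hand-built features for raw pixels, and you have, schematically, the deep learning pipeline described above.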
“A physics background is really appropriate for the field of deep learning,” Grullon says, in part because of the math background physics requires―most machine learning algorithms reflect different applications of linear algebra—and in part because physicists understand data. Compared to what you might find in a computer science class, data from the real world is messy. But physicists are comfortable with error bars, uncertainties, and probabilities. Grullon has found his career path to be gratifying. “I’ve found it rewarding, very interesting, and also very impactful,” he says.

Machine learning is “a wonderful, wide-open area,” says Helen Jackson, a PhD nuclear physicist and machine learning researcher. Jackson’s PhD thesis focused on the effects of radiation on high-electron-mobility transistors. Upon graduation she had lots of data analysis and software experience, and while looking for a job, she taught herself machine learning. That opened the door to a position applying deep learning to airport security―using computer vision to detect threats in the cluttered airport environment. Since then she’s worked as a contractor on machine learning applications ranging from position-sensitive detection in computer vision to complex document understanding.

In Jackson’s opinion, physicists are primed to work in machine learning. She has found that some companies “actually prefer to hire someone like a physicist or chemist rather than a straight computer science major, because a computer science major knows the mechanics of the code, but we know the underlying application and what this machine learning [system] is supposed to do.” She says the work is a lot of fun, and the applications are “just fascinating.”

At Cisco, Rowen leads the team charged with improving the audio and video environment of the WebEx collaborative platform with machine learning and AI. With a bachelor’s degree in physics and a PhD in electrical engineering, he finds the mixture of important societal questions, computer architecture, and fundamental physics in machine learning fascinating. Machine learning deals with computationally hard problems, like what makes up speech, but uses physical systems that you can trace all the way down to electrons, Rowen says. “This continuity of understanding from physics on up through the computer architecture questions, the computationally hard algorithm questions, and the application questions surrounding machine learning and neural networks has been so exciting and interesting,” he says.

Opportunities for physicists

[Photo: Sandeep Giri is a staff project manager at Google, cloud.google.com/tpu, and on the AIP Foundation’s board of trustees. He has a BS in physics and an MS in materials science and engineering. Photo courtesy of Giri.]

Gousev earned his PhD in solid state physics and has spent most of his career at IBM and Qualcomm developing new technologies, many involving machine learning. As the AI-based economy comes racing toward us, he sees not just an opportunity but a need for physicists to get involved. “We look at the whole world around us through a different type of lens, through a different type of mindset. We look at connecting dots in the environment, because we’ve been trained to look at the laws of physics and understand how things are connected in the world,” he says.

That holistic picture of machine learning ranges from electrical components to program architecture and even ethics. What is the problem? What are possible solutions? Should we even be solving this problem?
Who else might utilize this solution? What biases and inequities might emerge if this method is used with other data sets, like data on humans? Sorting through these questions requires a well-equipped, critically thinking, and creative workforce.

“We have to prepare students for this new economy, and I strongly believe physics departments have a big opportunity,” Giri says. But taking advantage of that opportunity will require some changes. “There is a disconnect between physics departments and the AI-based economy that is inevitably coming our way,” he says.

After earning bachelor’s degrees in physics and mathematics, Giri was on his way to a PhD in materials science and engineering when he decided to change course and take a job in industry. He has worked at Qualcomm and then at Google on projects ranging from head-mounted displays to supercomputers. He’s also been an advisor for undergraduate physics education efforts through the American Institute of Physics (the parent organization of Sigma Pi Sigma) and the American Physical Society, and is a board member of the AIP Foundation.

Giri says that the tools exist to prepare physics and astronomy students for this new paradigm, but physics departments need to embrace them. Physics departments often leave students feeling intimidated by and unprepared for careers in industry, whether through lack of knowledge or in favor of promoting a more traditional academic degree path. Many young students think the only physics career path is academia, and some choose not to major in physics for this reason. “I believe that a majority of physics majors today don’t only want to learn Newton’s laws or the Schrödinger equation. They want to know ‘What type of skills do I need to solve the problems that bring meaning to me? How can I build a product or service that leaves an impact on the world?’” Giri says.

“Physics students would benefit from an awareness of all the technical and nontechnical career paths that exist in the machine learning and AI space,” says Giri. That ranges from software engineering to hardware design, systems engineering, supply chain, operations, product and project management, sales and business development, and beyond. These are all careers that people with a physics background can and do grow into.

Machine learning is at the intersection of skills, opportunity, and change-the-world capacity, and that’s a huge opportunity for physics and astronomy departments to attract and retain new students―including students from groups that are traditionally underrepresented in physics. For example, in 2020 the TEAM-UP report noted the following key findings during its study of systemic issues that contribute to the underrepresentation of African Americans in physics and astronomy:[2]

• The connection of physics to activities that improve society or benefit one’s community is especially important to African American students.
• Having multiple pathways into and through the major helps to recruit and retain students who may not have initially considered physics or astronomy as an option.

There is a vast set of existing resources that departments, physics students, and professional physicists can utilize to take advantage of machine learning and its opportunities. Many are free or low cost and don’t require anything but curiosity, a willingness to learn and explore, some logical thinking, and a bit of math―all things every physicist and astronomer has in good measure.
1. To read more about classification algorithms in machine learning, see Sidath Asiri, “Machine Learning Classifiers,” Towards Data Science (blog), June 11, 2018, towardsdatascience.com/machine-learning-classifiers-a5cc4e1b0623.

2. The TEAM-UP report was written by the AIP National Task Force to Elevate African American Representation in Undergraduate Physics & Astronomy (TEAM-UP) in 2020. It is the result of a two-year investigation into the long-term systemic issues within physics and astronomy that have contributed to the underrepresentation of African Americans in these fields and includes actionable recommendations for reversing the trend. See TEAM-UP Task Force, The Time Is Now: Systemic Changes to Increase African Americans with Bachelor’s Degrees in Physics and Astronomy (American Institute of Physics, 2020), www.aip.org/diversity-initiatives/team-up-task-force.

Spotlight on TinyML

In its early days, machine learning was done at large-scale data centers, but now the technology has moved into our phones and homes—think Alexa and Siri. There’s so much data that it’s not cost-effective, energy efficient, or at times even practical to move all of it into the cloud for processing. In the cutting-edge research area of TinyML (tiny machine learning), scientists are running machine learning models on ultra-low-power microcontrollers. They aim to keep the data processing as close as possible to the data, thereby enabling always-on sensors or other devices, more secure networks, and the ability to add features like voice recognition to small devices that can’t be recharged frequently. Learn more at www.tinyML.org.

Get Up to Speed on Machine Learning

There are many widely available, internet-based resources on machine learning. This list is compiled from recommendations given by the physicists interviewed for A Physicist’s Guide to Machine Learning and Its Opportunities. In most cases URLs are not listed, but if you’re interested in machine learning you won’t have any trouble finding them.

Learning Python
• Machine learning is commonly done in Python. Google’s Python Class and Microsoft’s Introduction to Python are good, free online classes.

Blogs and background
• To get a sense of machine learning, its vocabulary, and what’s happening in the field, check out blogs like Google AI, Facebook AI, Berkeley AI Research, and Stanford AI Lab. If what they’re writing about excites you, that’s a good indication you should investigate it further.
• Towards Data Science is another great blog if you’re just getting started. It has a lot of introductory articles that explain machine learning and deep learning algorithms and how to get started.

Setting up your system
• Scikit-learn is a Python package for machine learning, with a good overview of common algorithms and how to incorporate them in Python (see the short example after this list).
• Environments like TensorFlow (Google) and PyTorch (Facebook) allow you to quickly build models for whatever kind of data you have.

Online courses
• Platforms like edX, Coursera, Udemy, and Udacity have free or low-cost Python classes and machine learning classes with projects that you can complete and show a prospective employer. Andrew Ng’s machine learning course out of Stanford is very popular, and it’s free on Coursera.
• HarvardX and Google are collaborating on a series of courses focused on Tiny Machine Learning (TinyML).
The courses cover topics from the fundamentals of machine learning to collecting data, designing and optimizing machine learning models, and assessing their outputs. The first three courses are available now on edX, https://tinyml.seas.harvard.edu/courses/.
• The Google Cloud AI Platform has tools, videos, and documentation for data science and machine learning, https://developers.google.com/learn/topics/datascience. The following resources may be especially helpful:
- Google’s codelab “TensorFlow, Keras and deep learning, without a PhD”
- Online learning channel, www.youtube.com/user/googlecloudplatform
- Product documentation, https://cloud.google.com/docs
• fast.ai has courses, tools, and articles for people interested in getting into machine learning.

Getting data
• Don’t have data? There are public domain data repositories with data on almost anything you could want, and most machine learning courses will direct you to them. Kaggle has lots of public datasets.

Resources for teaching
• In support of departments that want to teach their students about machine learning, Harvard has made much of its TinyML content and classroom materials open source licensed and available at https://tinyml.seas.harvard.edu/#courses.
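As promised in the list above, here is a minimal scikit-learn sketch, runnable as-is once scikit-learn is installed (it uses the iris dataset bundled with the library, so no data download is needed):

```python
# Train a classifier on scikit-learn's bundled iris dataset and score it
# on held-out data -- a complete first machine learning experiment.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```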
Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles.[2] It is the foundation of all quantum physics, including quantum chemistry, quantum field theory, quantum technology, and quantum information science.

Classical physics, the description of physics that existed before the theory of relativity and quantum mechanics, describes many aspects of nature at an ordinary (macroscopic) scale, while quantum mechanics explains the aspects of nature at small (atomic and subatomic) scales, for which classical mechanics is insufficient. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.[3]

Quantum mechanics arose gradually, from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, and others. The original interpretation of quantum mechanics is the Copenhagen interpretation, developed by Niels Bohr and Werner Heisenberg in Copenhagen during the 1920s.

The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of energy, momentum, and other physical properties of a particle.

Main article: History of quantum mechanics

In 1896 Wilhelm Wien empirically determined a distribution law of black-body radiation,[8] called Wien's law. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Max Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.

The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Richard Feynman, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld, and others. The Copenhagen interpretation of Niels Bohr became widely accepted.

After Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a quantum-based explanation of the photoelectric effect (1905, reported 1887). Around 1900–1910, the atomic theory (but not the corpuscular theory of light)[9] first came to be widely accepted as scientific fact; these theories can be considered quantum theories of matter and electromagnetic radiation, respectively. However, the photon theory was not widely accepted until about 1915. Even until Einstein's Nobel Prize, Niels Bohr did not believe in the photon.[10]

Among the first to study quantum phenomena were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it.
At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, and Niels Bohr developed a theory of atomic structure, confirmed by the experiments of Henry Moseley. In 1913 Peter Debye extended Bohr's theory by introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[11] This phase is known as old quantum theory.

[Image caption: Max Planck is considered the father of the quantum theory.]

According to Planck's hypothesis, energy is radiated and absorbed in discrete quanta of energy

\( E=h\nu , \)

where h is Planck's constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation.[12] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[13] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Einstein won the 1921 Nobel Prize in Physics for this work.

In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Due to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). In 1926 Erwin Schrödinger suggested a partial differential equation for the wave functions of particles like electrons. When effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states – whose properties turned out to be exactly the same as implied by matrix mechanics.[16]

Einstein's simple postulation spurred a flurry of debate, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.[17]

By 1930 quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac, and John von Neumann,[19] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'.[20] It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies.[18] Its speculative modern developments include string theory and quantum gravity theory. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[21] and superfluids.[22]

The word quantum derives from the Latin, meaning "how great" or "how much".[23] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics.
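As a quick worked example of the Planck relation above (using only standard constants), a single quantum of green light with frequency \( \nu \approx 5.5\times 10^{14} \) Hz carries energy

\( E=h\nu \approx (6.63\times 10^{-34}\ \mathrm{J\,s})\times (5.5\times 10^{14}\ \mathrm{Hz})\approx 3.6\times 10^{-19}\ \mathrm{J}\approx 2.3\ \mathrm{eV}, \)

an amount so small on everyday scales that the granularity of light goes unnoticed macroscopically.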
Quantum mechanics underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[24] Some fundamental aspects of the theory are still actively studied.[25]

Quantum mechanics is essential for understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not orbit the nucleus, since orbiting electrons emit radiation (due to circular motion) and so would quickly lose energy and collide with the nucleus. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave–particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[26]

Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:[20]
• quantization of certain physical properties
• quantum entanglement
• the principle of uncertainty
• wave–particle duality

Mathematical formulations

Main article: Mathematical formulation of quantum mechanics
See also: Quantum logic

Mathematically equivalent formulations

Especially since Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM was overlooked until the 1954 Nobel award. The role is noted in a 2005 biography of Born, which recounts his role in the matrix formulation and the use of probability amplitudes. Heisenberg acknowledges having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[46]

In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[47]

An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Relation to other scientific theories

Relation to classical physics

Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[50] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[51] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers (Ehrenfest theorem).[52][53] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
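The Ehrenfest theorem cited above can be stated compactly: the quantum expectation values of position and momentum obey the classical equations of motion,

\( {\frac {d}{dt}}\langle x\rangle ={\frac {\langle p\rangle }{m}},\qquad {\frac {d}{dt}}\langle p\rangle =-\left\langle {\frac {\partial V}{\partial x}}\right\rangle , \)

which is one precise sense in which classical mechanics emerges as an average over quantum behavior.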
Quantum coherence is an essential difference between classical and quantum theories, as illustrated by the Einstein–Podolsky–Rosen (EPR) paradox – an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[54] Quantum interference involves adding together probability amplitudes, whereas for classical "waves" it is the intensities that are added together. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[55] Quantum coherence is not typically evident at macroscopic scales, except perhaps at temperatures approaching absolute zero, at which quantum behavior may manifest macroscopically.[56]

This is in accordance with the following observations: While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory becomes more apparent for extremely small particles or for velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[58]

Copenhagen interpretation of quantum versus classical kinematics

A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[59] In Niels Bohr's mature view, quantum mechanical phenomena are required to be experiments, with complete descriptions of all the devices for the system: preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[60][61][62][63] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or "state" (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[64][65] In this sense, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous "state" in the classical sense of that word.[66][67]

Thus there are two kinds of processes in quantum mechanics: stationary and transitional. For a stationary process, the initial and final condition are the same. For a transition, they are different. By definition, if only the initial condition is given, the process is not determined.[64] Given its initial condition, prediction of its final condition is possible, causally but only probabilistically, because the Schrödinger equation is deterministic for wave function evolution, but the wave function describes the system only probabilistically.[68][69]

Relation to general relativity

Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE).
Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. Many prominent physicists, including Stephen Hawking, worked for many years to create a theory underlying everything. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature – the strong force, electromagnetism, the weak force, and gravity – from a single force or phenomenon. However, after considering Gödel's incompleteness theorem, Hawking concluded that a theory of everything is not possible, and stated so publicly in his lecture "Gödel and the End of Physics" (2002).[74]

Attempts at a unified field theory

Main article: Grand unified theory

The quest to unify the fundamental forces through quantum mechanics is ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is (at least in the perturbative regime) the most accurately tested physical theory in competition with general relativity,[75][76] has been merged with the weak nuclear force into the electroweak force; work continues to merge it with the strong force into the electrostrong force. Current predictions state that at around \( 10^{14} \) GeV these three forces fuse into a single field.[77] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly \( 10^{19} \) GeV. However – and while special relativity is parsimoniously incorporated into quantum electrodynamics – the expanded general relativity, currently the best theory describing the gravitation force, has not been fully incorporated into quantum theory.

One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated M-theory, an attempt at describing supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is, in reality, an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are – at lower energies – completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing.

Another popular theory is loop quantum gravity (LQG), proposed by Carlo Rovelli, which describes quantum properties of gravity. It is also a theory of quantum spacetime and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as granular, analogous to the granularity of photons in the quantum theory of electromagnetism and the discrete energy levels of atoms. More precisely, space is an extremely fine fabric or network "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, \( l_{P}={\sqrt {\hbar G/c^{3}}}\approx 1.616\times 10^{-35} \) m. According to this theory, there is no meaning to length shorter than this (cf. Planck scale energy).

Philosophical implications

Main article: Interpretations of quantum mechanics

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules about probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists.
Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[78] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[79]

The Copenhagen interpretation – due largely to Niels Bohr and Werner Heisenberg – remains most widely accepted some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". It also states that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the conjugate nature of evidence obtained under different experimental situations.

Albert Einstein, himself one of the founders of quantum theory, did not accept some of the more philosophical or metaphysical interpretations of quantum mechanics, such as rejection of determinism and of causality. He famously said about this, "God does not play with dice".[80] He rejected the concept that the state of a physical system depends on the experimental arrangement for its measurement. He held that a state of nature occurs in its own right, regardless of whether or how it might be observed. That view is supported by the currently accepted definition of a quantum state, which does not depend on the configuration space for its representation, that is to say, manner of observation. Einstein also believed that underlying quantum mechanics must be a theory that thoroughly and directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He considered, but rejected on theoretical grounds, a particular proposal for hidden variables to obviate the indeterminism or acausality of quantum mechanical measurement. He believed that quantum mechanics was a currently valid but not a permanently definitive theory for quantum phenomena. He thought its future replacement would require profound conceptual advances, and would not come quickly or easily.

The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view. In arguing for his views, Einstein produced a series of objections, of which the most famous has become known as the Einstein–Podolsky–Rosen paradox. John Bell showed that this EPR paradox led to experimentally testable differences between quantum mechanics and theories that rely on local hidden variables. Experiments confirmed the accuracy of quantum mechanics, thereby showing that quantum mechanics cannot be improved upon by addition of local hidden variables.[81] Alain Aspect's experiments in 1982 and many later experiments definitively verified quantum entanglement. Entanglement, as demonstrated in Bell-type experiments, does not violate causality, since it does not involve transfer of information.

By the early 1980s, experiments had shown that such inequalities were indeed violated in practice – so that there were in fact correlations of the kind suggested by quantum mechanics. At first these just seemed like isolated esoteric effects, but by the mid-1990s, they were being codified in the field of quantum information theory, and led to constructions with names like quantum cryptography and quantum teleportation.[82] Quantum cryptography is proposed for use in high-security applications in banking and government.
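For reference, the Bell-type relation most often tested in these experiments is the CHSH inequality. For measurement settings a, a′ and b, b′ on the two sides, with correlation functions E, any local hidden-variable theory requires

\( S=E(a,b)-E(a,b')+E(a',b)+E(a',b'),\qquad |S|\leq 2, \)

while quantum mechanics allows \( |S| \) to reach \( 2{\sqrt {2}} \) (Tsirelson's bound) – the violation observed in the experiments just described.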
In light of the Bell tests, Cramer in 1986 formulated his transactional interpretation,[84] which is unique in providing a physical explanation for the Born rule.[85] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen interpretation.

Applications

Main article: Quantum physics

Quantum mechanics has had enormous[18] success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).

Examples

Free particle

Particle in a box

[Figure: 1-dimensional potential energy box (or infinite potential well).]

Main article: Particle in a box

Inside the box the potential is zero, so the time-independent Schrödinger equation reads

\( -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi . \)

With the differential operator defined by

\( {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}, \)

the previous equation is evocative of the classical kinetic energy analogue,

\( {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E, \)

with state \( \psi \) in this case having energy E coincident with the kinetic energy of the particle. The general solutions are

\( \psi (x)=Ae^{ikx}+Be^{-ikx},\qquad E={\frac {\hbar ^{2}k^{2}}{2m}}, \)

or, from Euler's formula,

\( \psi (x)=C\sin(kx)+D\cos(kx). \)

The infinite potential walls of the box determine the values of C, D, and k at x=0 and x=L, where \( \psi \) must be zero. Thus, at x=0,

\( \psi (0)=0=C\sin(0)+D\cos(0)=D, \)

so D=0. At x=L,

\( \psi (L)=0=C\sin(kL), \)

in which C cannot be zero, as this would conflict with the Born interpretation. Therefore, since \( \sin(kL)=0 \), kL must be an integer multiple of \( \pi \):

\( k={\frac {n\pi }{L}},\qquad n=1,2,3,\ldots \)

The quantized energies are then

\( E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}. \)

The ground state energy of the particle is \( E_{1} \) for n=1, and the energy of the particle in the nth state is \( E_{n}=n^{2}E_{1},\;n=2,3,4,\dots \)

For a box centered at the origin, with boundary condition \( V(x)=0,\;-a/2<x<+a/2 \), the wave function is no longer zero at x=0 for all values of n. From the graph of the wave function's variation with x and n:

At \( n=1,3,5,\dots \), the wave function follows a cosine curve with x=0 as the origin.
At \( n=2,4,6,\dots \), the wave function follows a sine curve with x=0 as the origin.

\( \psi _{n}(x)={\begin{cases}A\cos(k_{n}x),&n=1,3,5,\dots \\B\sin(k_{n}x),&n=2,4,6,\dots \end{cases}} \)

[Figure: Variation of the wave function with x and n.]
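As a quick numerical check of the energy formula just derived, here is a minimal Python sketch; the electron confined to a 1 nm box is an illustrative choice:

```python
# Particle-in-a-box levels E_n = n^2 h^2 / (8 m L^2), evaluated for an
# electron confined to a box of width L = 1 nm (illustrative values).
h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
L = 1e-9           # box width, m
eV = 1.602e-19     # joules per electron volt

for n in range(1, 5):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(f"n = {n}: E = {E_n / eV:.3f} eV")   # scales as n^2 * E_1
```

The ground state comes out near 0.38 eV, and the n² spacing of the levels is exactly the discreteness that the boundary conditions above force on the spectrum.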
Finite potential well

Main article: Finite potential well

Rectangular potential barrier

Main article: Rectangular potential barrier

Harmonic oscillator

Main article: Quantum harmonic oscillator

The potential is

\( V(x)={\frac {1}{2}}m\omega ^{2}x^{2}, \)

and the normalized eigenfunctions are

\( \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad n=0,1,2,\ldots , \)

where Hn are the Hermite polynomials

\( H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right), \)

and the corresponding energy levels are

\( E_{n}=\hbar \omega \left(n+{1 \over 2}\right). \)

Step potential

Main article: Solution of Schrödinger equation for a step potential

The potential in this case is given by:

\( V(x)={\begin{cases}0,&x<0,\\V_{0},&x\geq 0.\end{cases}} \)

The solutions are superpositions of left- and right-moving waves:

\( \psi _{1}(x)={\frac {1}{\sqrt {k_{1}}}}\left(A_{\rightarrow }e^{ik_{1}x}+A_{\leftarrow }e^{-ik_{1}x}\right),\qquad x<0, \)
\( \psi _{2}(x)={\frac {1}{\sqrt {k_{2}}}}\left(B_{\rightarrow }e^{ik_{2}x}+B_{\leftarrow }e^{-ik_{2}x}\right),\qquad x>0, \)

with wave vectors

\( k_{1}={\sqrt {2mE/\hbar ^{2}}},\qquad k_{2}={\sqrt {2m(E-V_{0})/\hbar ^{2}}}. \)

See also

Angular momentum diagrams (quantum mechanics)
Einstein's thought experiments
Hamiltonian (quantum mechanics)
Two-state quantum system
Fractional quantum mechanics
List of quantum-mechanical systems with analytical solutions
List of textbooks on classical and quantum mechanics
Macroscopic quantum phenomena
Phase space formulation
Quantum dynamics
Regularization (physics)
Spherical basis
A New Kind of Science

References

Note (a) for quantum phenomena: "ysfine.com". Retrieved 11 September 2015.
D. Hilbert, Lectures on Quantum Theory, 1915–1927.
Ehrenfest, P. (1927). "Bemerkung über die angenäherte Gültigkeit der klassischen Mechanik innerhalb der Quantenmechanik". Zeitschrift für Physik. 45 (7–8): 455–457. Bibcode:1927ZPhy...45..455E. doi:10.1007/BF01329203. S2CID 123011242.
Smith, Henrik (1991). Introduction to Quantum Mechanics. World Scientific Pub Co Inc. pp. 108–109. ISBN 978-9810204754.
Cramer, John G. (1986). "The transactional interpretation of quantum mechanics". Reviews of Modern Physics. 58 (3): 647–687. Bibcode:1986RvMP...58..647C. doi:10.1103/RevModPhys.58.647.
Derivation of particle in a box, chemistry.tidalswan.com.

N.B. on precision: If \( \delta x \) and \( \delta p \) are the precisions of position and momentum obtained in an individual measurement and \( \sigma _{x} \), \( \sigma _{p} \) their standard deviations in an ensemble of individual measurements on similarly prepared systems, then "There are, in principle, no restrictions on the precisions of individual measurements \( \delta x \) and \( \delta p \), but the standard deviations will always satisfy \( \sigma _{x}\sigma _{p}\geq \hbar /2 \)".[4]

Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading.
Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5–8. Includes cosmological and philosophical considerations.
More technical:

Dirac, P.A.M. (1930). The Principles of Quantum Mechanics. ISBN 978-0-19-852011-5. The beginning chapters make up a very clear and comprehensible introduction.
Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1965). The Feynman Lectures on Physics. Vols. 1–3. Addison-Wesley. ISBN 978-0-7382-0008-8.
Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 978-0-13-111892-8. OCLC 40251748. A standard undergraduate text.
Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G.M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III.
Transnational College of Lex (1996). What is Quantum Mechanics? A Physics Adventure. Language Research Foundation, Boston. ISBN 978-0-9643504-1-0. OCLC 34661512.

Further reading

Bernstein, Jeremy (2009). Quantum Leaps. Cambridge, Massachusetts: Belknap Press of Harvard University Press. ISBN 978-0-674-03541-6.
Bohm, David (1989). Quantum Theory. Dover Publications. ISBN 978-0-486-65969-5.
Eisberg, Robert; Resnick, Robert (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). Wiley. ISBN 978-0-471-87373-0.
Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 978-0-8053-8714-8.
Merzbacher, Eugen (1998). Quantum Mechanics. Wiley, John & Sons, Inc. ISBN 978-0-471-88702-7.
Sakurai, J.J. (1994). Modern Quantum Mechanics. Addison Wesley. ISBN 978-0-201-53929-5.
Shankar, R. (1994). Principles of Quantum Mechanics. Springer. ISBN 978-0-306-44790-7.
Veltman, Martinus J.G. (2003). Facts and Mysteries in Elementary Particle Physics.
Zukav, Gary (1979, 2001). The Dancing Wu Li Masters: An Overview of the New Physics (Perennial Classics Edition). HarperCollins.
Where Will Quantum Computers Create Value—and When?

By Matt Langione, Corban Tillemann-Dick, Amit Kumar, and Vikas Taneja

Despite the relentless pace of progress over the last half-century, there are still many problems that today’s computers can’t solve. Some simply await the next generation of semiconductors rounding the bend on the assembly line. Others will likely remain beyond the reach of classical computers forever. It is the prospect of finally finding a solution to these “classically intractable” problems that has CIOs, CTOs, heads of R&D, hedge fund managers, and others abuzz at the dawn of the era of quantum computing.

Their enthusiasm is not misplaced. In the coming decades, we expect productivity gains by end users of quantum computing, in the form of both cost savings and revenue opportunities, to surpass $450 billion annually. Gains will accrue first to firms in industries with complex simulation and optimization requirements. It will be a slow build for the next few years: we anticipate value for end users in these sectors to reach a relatively modest $2 billion to $5 billion by 2024. But value will then increase rapidly as the technology and its commercial viability mature. When they do, the opportunity will not be evenly distributed—far from it. Since quantum computing is a step-change technology with substantial barriers to adoption, early movers will seize a large share of the total value, as laggards struggle with integration, talent, and IP.

Based on interviews and workshops involving more than 100 experts, a review of some 150 peer-reviewed publications, and analysis of more than 35 potential use cases, this report assesses how and where quantum computing will create business value, the likely progression, and what steps executives should take now to put their firms in the best position to capture that value.

Who Benefits?

If quantum computing’s transformative value is at least five to ten years away, why should enterprises consider investing now? The simple answer is that this is a radical technology that presents formidable ramp-up challenges, even for companies with advanced supercomputing capabilities. Both quantum programming and the quantum tech stack bear little resemblance to their classical counterparts (although the two technologies might learn to work together quite closely). Early adopters stand to gain expertise, visibility into knowledge and technological gaps, and even intellectual property that will put them at a structural advantage as quantum computing gains commercial traction.

More important, many experts believe that progress toward maturity in quantum computing will not follow a smooth, continuous curve. Instead, quantum computing is a candidate for a precipitous breakthrough that may come at any time. Companies that have invested to integrate quantum computing into the workflow are far more likely to be in a position to capitalize—and the leads they open will be difficult for others to close. This will confer substantial advantage in industries in which classically intractable computational problems lead to bottlenecks and missed revenue opportunities. We have previously explored the likely development of quantum computing over the next ten years, as well as The Coming Quantum Leap in Computing.
(You can also take our quiz to test your own quantum IQ.)

The assessment of future business value begins with the question of what kinds of problems quantum computers can solve more efficiently than binary machines. It’s far from a simple answer, but two indicators are the size and complexity of the calculations that need to be done.

Take drug discovery, for example. For scientists trying to design a compound that will attach itself to, and modify, a target disease pathway, the critical first step is to determine the electronic structure of the molecule. But modeling the structure of a molecule of an everyday drug such as penicillin, which has 41 atoms at ground state, requires a classical computer with some 10⁸⁶ bits—more transistors than there are atoms in the observable universe. Such a machine is a physical impossibility. But for quantum computers, this type of simulation is well within the realm of possibility, requiring a processor with 286 quantum bits, or qubits.

This radical advantage in information density is why many experts believe that quantum computers will one day demonstrate superiority, or quantum advantage, over classical computers in solving four types of computational problems that typically impede efforts to address numerous business and scientific challenges. (See Exhibit 1.) These four problem types cover a large application landscape in a growing number of industries, which we will explore below.

Three Phases of Progress

Quantum computing is coming. But when? How will this sea change play out? What will the impact look like early on, and how long will it take before quantum computers are delivering on the full promise of quantum advantage? We see applications (and business income) developing over three phases. (See Exhibit 2.)

The NISQ Era

The next three to five years are expected to be characterized by so-called NISQ (Noisy Intermediate-Scale Quantum) devices, which are increasingly capable of performing useful, discrete functions but are characterized by high error rates that limit functionality. One area in which digital computers will retain an advantage for some time is accuracy: they experience fewer than one error in 10²⁴ operations at the bit level, while today’s qubits destabilize much too quickly for the kinds of calculations necessary for quantum-advantaged molecular simulation or portfolio optimization. Experts believe that error correction will remain quantum computing’s biggest challenge for the better part of a decade.

That said, research underway at multiple major companies and startups, among them IBM, Google, and Rigetti, has led to a series of technological breakthroughs in error mitigation techniques that maximize the usefulness of NISQ-era devices. These efforts increase the chances that the near to medium term will see the development of medium-sized, if still error-prone, quantum computers that can be used to produce the first quantum-advantaged experimental discoveries in simulation and combinatorial optimization.

Broad Quantum Advantage

In 10 to 20 years, the period that will witness broad quantum advantage, quantum computers are expected to achieve superior performance in tasks of genuine industrial significance, providing step-change improvements over the speed, cost, or quality of a binary machine. But this will require overcoming significant technical hurdles in error correction and other areas, as well as continuing increases in the power and reliability of quantum processors. Quantum advantage has major implications.
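For a rough sense of the information-density gap behind the penicillin example above, note that the full state of an n-qubit register is described by 2ⁿ complex amplitudes, so the classical memory required to store it grows exponentially. A minimal sketch (16 bytes per amplitude is an illustrative assumption, not the sizing methodology used in this report):

```python
# Classical memory needed to store the full state of an n-qubit register:
# 2**n complex amplitudes, assumed here to take 16 bytes each (illustrative).
for n in [10, 30, 50, 286]:
    bytes_needed = 16 * 2**n
    print(f"{n:>3} qubits -> 2^{n} amplitudes ~ {bytes_needed:.3e} bytes")
```

Whatever constants one assumes, the exponential growth is the point: by a few hundred qubits, directly storing the state classically is out of reach, which is the gap the industry examples that follow trade on.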
Consider the case of chemicals R&D. If quantum simulation enables researchers to model interactions among materials as they grow in size—without the coarse, distorting heuristic techniques used today—companies will be able to reduce, or even eliminate, expensive and lengthy lab processes such as in situ testing. Already, companies such as Zapata Computing are betting that quantum-advantaged molecular simulation will drive not only significant cost savings but the development of better products that reach the market sooner.

The story is similar for automakers, airplane manufacturers, and others whose products are, or could be, designed according to computational fluid dynamics. These simulations are currently hindered by the inability of classical computers to model fluid behavior on large surfaces (or at least to do so in practical amounts of time), necessitating expensive and laborious physical prototyping of components. Airbus, among others, is betting on quantum computing to produce a solution. The company launched a challenge in 2019 “to assess how [quantum computing] could be included or even replace other high-performance computational tools that, today, form the cornerstone of aircraft design.”

Full-Scale Fault Tolerance

The third phase is still decades away. Achieving full-scale fault tolerance will require makers of quantum technology to overcome additional technical constraints, including problems related to scale and stability. But once fault-tolerant quantum computers arrive, we expect them to affect a broad array of industries. They have the potential to vastly reduce trial and error and improve automation in the specialty-chemicals market, enable tail-event defensive trading and risk-driven high-frequency trading strategies in finance, and even promote in silico drug discovery, which has major implications for personalized medicine.

With all this promise, it’s little surprise that the value creation numbers get very big over time. In the industries we analyzed, we foresee quantum computing leading to incremental operating income of $450 billion to $850 billion by 2050 (with a nearly even split between incremental annual revenue streams and recurring cost efficiencies). (See Exhibit 3.) While that’s a big carrot, it comes at the end of a long stick. More important for today’s decision makers is understanding the potential ramifications in their industries: what problems quantum computers will solve, where and how the value will be realized, and how they can put their organizations on the path to value ahead of the competition.

How to Benefit

But what should companies do today to get ready? A good first step is performing a diagnostic assessment to determine the potential impact of quantum computing on the company or industry and then, if appropriate, developing a partnership strategy, ideally with a full-stack technology provider, to start the process of integrating capabilities and solutions.

The first part of the diagnostic is a self-assessment of the company’s technical challenges and use of computing resources, ideally involving people from R&D and other functions, such as operations, finance, and strategy, to push boundaries and bring a full perspective to what will ultimately be highly technical discussions. The key questions to ask are:

• Are you currently spending a lot of money or other resources to tackle problems with a high-performance computer? If so, do these efforts yield low-impact, delayed, or piecemeal results that leave value on the table?
• Does the presumed difficulty of solving simulation or optimization problems prevent you from trying high-performance computing or other computational solutions?
• Are you spending resources on inefficient trial-and-error alternatives, such as wet-lab experiments or physical prototyping?
• Are any of the problems you work on rooted in the quantum-advantaged problem archetypes identified above?

If the answer to any of these questions is yes, the next step is an “impact of quantum” (IQ) diagnostic that has two components. The first is sizing a company’s unsolved technical challenges and the potential quantum computing solutions as they are expected to develop and mature over time. The goal is to visualize the potential value of solutions that address real missed revenue opportunities, delays in time to market, and cost inefficiencies. This analysis requires combining domain-specific knowledge (of molecular simulation, for example) with expertise in quantum computing and then assessing potential future value. (We demonstrate how this is done at the industry level in the next section.)

The second component of the IQ assessment is a vendor assessment. Given the ever-changing nature of the quantum computing ecosystem, it is critical to find the right partner or partners: companies that have expertise across the broadest set of technical challenges that you face. Some form of partnership will likely be the best play for enterprises wishing to get a head start on building a capability in the near term. A low-risk, low-cost strategy, it enables companies to understand how the technology will affect their industry, determine what skills and IT gaps they need to fill, and even play a role in shaping the future of quantum computing by providing technology providers with the industry-specific skills and expertise necessary to produce solutions for critical near-term applications.

Partnerships have already become the model of choice for most of the commercial activity in the field to date. Among the collaborations formed so far are JPMorgan Chase and IBM’s joint development of solutions related to risk assessment and portfolio optimization, Volkswagen and Google’s work to develop batteries for electric vehicles, and the Dubai Electricity and Water Authority’s alliance with Microsoft to develop energy optimization solutions.

High-Impact Applications

One way to assess where quantum computing will have an early or outsized impact is to connect the quantum-advantaged problem types shown in Exhibit 1 with discrete pain points in particular industries. Behind each pain point is a bottleneck for which there may be multiple solutions or a latent pool of income that can be tapped in many ways, so the mapping must account for solutions rooted in other technologies—machine learning, for example—that may arrive on the scene sooner or at lower cost, or that may be integrated more easily into existing workflows.
Establishing a valuation for quantum computing in a given industry (or for a given firm) over time—charting what we call a path to value—therefore requires gathering and synthesizing expertise from a number of sources, including:

• Industry business leaders who can attest to the business value of addressing a given pain point
• Industry technical experts who can assess the limits of current and future nonquantum solutions to the pain point
• Quantum computing experts who can confirm that quantum computers will be able to solve the problem and when

Using this methodology, we sized up the impact of quantum advantage on a number of sectors, with an emphasis on the early opportunities. Here are the results.

Materials Design and Drug Discovery

On the face of things, no two fields of R&D more naturally lend themselves to quantum advantage than materials design and drug discovery. Even if some experts dispute whether quantum computers will have an advantage in modeling the properties of quantum systems, there is no question that the shortcomings of classical computers limit R&D in these areas.

Materials design, in particular, is a slow lab process characterized by trial and error. According to R&D Magazine, for specialty materials alone, global firms spend upwards of $40 billion a year on candidate material selection, material synthesis, and performance testing. Improvements to this workflow will yield not only cost savings through efficiencies in design and reduced time to market, but revenue uplift through net new materials and enhancements to existing materials. The benefits of design improvements yielding optimal synthetic routes also would, in all likelihood, flow downstream, affecting the estimated $460 billion spent annually on industrial synthesis.

The biggest benefit quantum computing offers is the potential for simulation, which for many materials requires computing power that binary machines do not possess. Reducing trial-and-error lab processes and accelerating discovery of new materials are only possible if materials scientists can derive higher-level spectral, thermodynamic, and other properties from ground-state energy levels described by the Schrödinger equation. The problem is that none of today’s approximate solutions—from Hartree-Fock to density functional theory—can account for the quantized nature of the electromagnetic field. Current computational approximations only apply to a subset of materials for which interactions between electrons can effectively be ignored or easily approximated, and there remains a well-defined set of problems in want of simulation-based solutions—as well as outsized rewards for the companies that manage to solve them first. These problems include simulations of strongly correlated electron systems (for high-temperature superconductors), manganites with colossal magnetoresistance (for high-efficiency data storage and transfer), multiferroics (for high-absorbency solar panels), and high-density electrochemical systems (for lithium air batteries).

All of the major players in quantum computing, including IBM, Google, and Microsoft, have established partnerships or offerings in materials science and chemistry in the last year. Google’s partnership with Volkswagen, for example, is aimed at simulations for high-performance batteries and other materials. Microsoft released a new chemical simulation library developed in collaboration with Pacific Northwest National Laboratory.
IBM, having run the largest-ever molecular simulation on a quantum computer in 2017, released an end-to-end stack for quantum chemistry in 2018. Potential end users of the technology are embracing these efforts. One researcher at a leading global materials manufacturer believes that quantum computing "will be able to make a quality improvement on classical simulations in less than five years," during which period value to end users approaching some $500 million is expected to come in the form of design efficiencies (measured in terms of reduced expenditures across the R&D workflow). As error correction enables functional simulations of more complex materials, "you'll start to unlock new materials and it won't just be about efficiency anymore," a professor of chemistry told us. During the period of broad quantum advantage, we estimate that upwards of $5 billion to $15 billion in value (which we measure in terms of increased R&D productivity) will accrue to end users, principally through development of new and enhanced materials. Once full-scale fault-tolerant quantum computers become available, value could reach the range of $30 billion to $60 billion, principally through new materials and extensions of in-market patent life as time-to-market is reduced. As the head of business development at a major materials manufacturer put it, "If unknown chemical relationships are unlocked, the current specialty market [currently $51 billion in operating income annually] could double."

Quantum advantage in drug discovery will arrive later, given the maturity of existing simulation methods for "established" small molecules. Nonetheless, in the long run, as quantum computers unlock simulation capabilities for molecules of increasing size and complexity, experts believe that drug discovery will be among the most valuable of all industry applications. In terms of cost savings, the drug discovery workflow is expected to become more efficient, with in silico modeling increasingly replacing expensive in vitro and in vivo screening. But there is good reason to believe that there will be major top-line implications as well. Experts expect more powerful simulations not only to promote the discovery of new drugs but also to generate replacement value over today's generics as larger molecules produce drugs with fewer off-target effects. Between reducing the $35 billion in annual R&D spending on drug discovery and boosting the $920 billion in yearly branded pharmaceutical revenues, quantum computing is expected to yield $35 billion to $75 billion in annual operating income for end users once companies have access to fault-tolerant machines.

Financial Services

In recent history, few if any industries have been faster to adopt vanguard technologies than financial services. There is good reason to believe that the industry will quickly ramp up investments in quantum computing, which can be expected to address a clearly defined set of simulation and optimization problems—in particular, portfolio optimization in the short term and risk analytics in the long term. Investment money has already started to flow to startups, with Goldman Sachs and Fidelity investing in full-stack companies such as D-Wave, while RBS and Citigroup have invested in software players such as 1QBit and QC Ware. Our discussions with quantitative investors about the pain points in portfolio optimization, arbitrage strategy, and trading costs make it easy to understand why.
While investors use classical computers for all these problems today, the capabilities of these machines are limited—not so much by the number of assets or the number of constraints introduced into the model as by the type of constraints. For example, adding noncontinuous, nonconvex functions such as interest rate yield curves, trading lots, buy-in thresholds, and transaction costs to investment models makes the optimization "surface" so complex that classical optimizers often crash, simply take too long to compute, or, worse yet, mistake a local optimum for the global optimum. To get around this problem, analysts often simplify or exclude such constraints, sacrificing the fidelity of the calculation for reliability and speed.

Such tradeoffs, many experts believe, would be unnecessary with quantum combinatorial optimization. Exploiting the probability amplitudes of quantum states is expected to dramatically accelerate portfolio optimization, enabling a full complement of realistic constraints and reducing portfolio turnover and transaction costs—which one head of portfolio risk at a major US bank estimates to represent as much as 2% to 3% of assets under management. We calculate that income gains from portfolio optimization should reach $200 million to $500 million in the next three to five years and accelerate swiftly with the advent of enhanced error correction during the period of broad quantum advantage. The resulting improvements in risk analytics and forecasting will drive value creation beyond $5 billion. As the brute-force Monte Carlo simulations used for risk assessment today give way to more powerful "quantum walk algorithms," faster simulations will give banks more time to react to negative market risk (with estimated returns of as much as 12 basis points). The expected benefits include better intraday risk analytics for banks and near-real-time risk assessment for quantitative hedge funds. "Brute-force Monte Carlo simulations for economic spikes and disasters took a whole month to run," complained one former quantitative analyst at a leading US hedge fund.

Bankers and hedge fund managers hope that, with the kind of whole-market simulations theoretically possible on full-scale fault-tolerant quantum computers, they will be able to better predict black-swan events and even develop risk-driven high-frequency trading. "Moving risk management from positioning defensively to an offensive trading strategy is a whole new paradigm," noted one former trader at a US hedge fund. Coupled with enhanced model accuracy and positioning against extreme tail events, reductions in capital reserves (by as much as 15% in some estimates) will position quantum computing to deliver $40 billion to $70 billion in operating income to banks and other financial services companies as the technology matures.

Computational Fluid Dynamics

Simulating the precise flow of liquids and gases in changing conditions on a computer, known as computational fluid dynamics, is a critical but costly undertaking for companies in a range of industries. Spending on simulation software by companies using CFD to design airplanes, spacecraft, cars, medical devices, and wind turbines exceeded $4 billion in 2017, but the costs that weigh most heavily on decision makers in these industries are those related to expensive trial-and-error testing such as wind tunnel and wing flex tests.
These direct costs, together with the revenue potential of energy-optimized design, have many experts excited by the prospect of introducing quantum simulation into the workflow. The governing equations behind CFD, known as the Navier-Stokes equations, are nonlinear partial differential equations and thus a natural fit for quantum computing.

The first bottleneck in the CFD workflow is actually an optimization problem in the preprocessing stage that precedes any fluid dynamics algorithms. Because of the computational complexity involved in these algorithms, designers create a mesh to simulate the surface of an object—say, an airplane wing. The mesh is composed of geometric primitives whose vertices form a constellation of nodes. Most classical optimizers cap the number of nodes in a mesh that can be simulated efficiently at about 10⁹. This forces the designer into a tradeoff between how fine-grained and how large a surface can be simulated. Quantum optimization is expected to relieve the designer of that constraint so that bigger pieces of the puzzle can be solved at once and more accurately—from the spoiler, for example, to the entire wing. Improving this preprocessing stage of the design process is expected to lead to operating-income gains of between $1 billion and $2 billion across industries through reduced costs and faster revenue realization.

As quantum computers mature, we expect the benefits of improved mesh optimization to be surpassed by those from accelerated and improved simulations. As with mesh optimization, the tradeoff in fluid simulations is between speed and accuracy. "For large simulations with more than 100 million cells," one of our own experts told us, "run times could be weeks even on very powerful supercomputers." And that is with the use of simplifying heuristics, such as approximate turbulence models. During the period of broad quantum advantage, experts believe that quantum simulation could enable designers to reduce the number of heuristics required to run Navier-Stokes solvers in manageable time periods, resulting in the replacement of expensive physical testing with accurate moving-ground aerodynamic models, unsteady aerodynamics, and turbulent-flow simulations. The benefits to end users in terms of cost reductions are expected to start at $1 billion to $2 billion during this period. With full-scale fault tolerance, value creation could as much as triple, as experts anticipate that quantum linear solvers will unlock predictive simulations that not only obviate physical testing requirements but lead to product improvements (such as improved fuel economy) and manufacturing yield optimization as well. We expect value creation in the phase of full-scale fault tolerance to range from $19 billion to $37 billion in operating income.

Other Industries

During the NISQ era, we expect more than 40% of the value created in quantum computing to come from materials design, drug discovery, financial services, and applications related to CFD. But applications in other industries will show early promise as well. Examples include:
• Transportation and Logistics. Using quantum computers to address inveterate optimization challenges (such as the traveling salesman problem and the minimum spanning tree problem) is expected to lead to efficiencies in route optimization, fleet management, network scheduling, and supply chain optimization.
• Energy.
With the era of easy-to-find oil and gas coming to an end, companies are increasingly reliant on wave-based geophysical processing to locate new drilling sites. Quantum computing could not only accelerate the discovery process but also contribute to drilling optimizations for both greenfield and brownfield operations.
• Meteorology. Many experts believe that quantum simulation will improve large-scale weather and climate forecasting technologies, which would not only enable earlier storm and severe-weather warnings but also bring speed and accuracy gains to industries that depend on weather-sensitive pricing and trading strategies.

Should quantum computing become integrated into machine learning workflows, the list of affected industries would expand dramatically, with salient applications wherever predictive capabilities (supervised learning and deep learning), principal component analysis (dimension reduction), and clustering analysis (for anomaly detection) provide an advantage. While experts are divided on the timing of quantum computing's impact on machine learning, the stakes are so high that many of the leading players are already putting significant resources against it today, with promising early results. For example, in conjunction with researchers from Oxford and MIT, a group from IBM recently proposed a set of methods for optimizing and accelerating support vector machines, which are applicable to a wide range of classification problems but have fallen out of favor in recent years because they quickly become inefficient as the number of predictor variables rises and the feature space expands.

The eventual role of quantum computing in machine learning is still being defined, but early theoretical work, at least for optimizing current methods in linear algebra and support vector machines, shows promise. While it may be years before investments in a quantum strategy begin to pay off, failure to understand the coming impact of quantum computing in one's industry is at best a missed opportunity, at worst an existential mistake. Companies that stay on the sidelines, assuming they can buy their way into the game later on, are likely to find themselves playing catchup—and with a lot of ground to cover.
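As a concrete footnote to the support-vector-machine bottleneck mentioned above, here is a minimal sketch (our own illustration, not from the IBM, Oxford, or MIT work) of how fast an explicit polynomial feature space grows with the number of predictor variables. This combinatorial blow-up is the kind of scaling problem that proposed quantum kernel methods aim to sidestep.

```python
# Illustrative only: dimension of an explicit polynomial feature map.
# poly_feature_dim(n, d) counts monomials of total degree <= d in n
# variables, C(n + d, d) -- the space a classical kernel expansion
# implicitly works in.
from math import comb

def poly_feature_dim(n_vars: int, degree: int) -> int:
    """Dimension of the degree-<=d polynomial feature space on n_vars inputs."""
    return comb(n_vars + degree, degree)

for n_vars in (10, 100, 1000):
    print(n_vars, poly_feature_dim(n_vars, degree=4))
# 10 -> 1,001 features; 100 -> ~4.6 million; 1000 -> ~4.2e10
```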
Introduction to quantum mechanics

[Image: Werner Heisenberg and Erwin Schrödinger, founders of quantum mechanics.]

Quantum mechanics (QM, or quantum theory) is a physical science dealing with the behaviour of matter and energy on the scale of atoms and subatomic particles/waves. QM also forms the basis for the contemporary understanding of how very large objects such as stars and galaxies, and cosmological events such as the Big Bang, can be analyzed and explained. Quantum mechanics is the foundation of several related disciplines, including nanotechnology, condensed matter physics, quantum chemistry, structural biology, particle physics, and electronics.

The term "quantum mechanics" was first coined by Max Born in 1924. The acceptance by the general physics community of quantum mechanics is due to its accurate prediction of the physical behaviour of systems, including systems where Newtonian mechanics fails. Even general relativity is limited—in ways quantum mechanics is not—for describing systems at the atomic scale or smaller, at very low or very high energies, or at the lowest temperatures. Through a century of experimentation and applied science, quantum mechanical theory has proven to be very successful and practical.

The foundations of quantum mechanics date from the early 1800s, but the real beginnings of QM date from the work of Max Planck in 1900. Albert Einstein and Niels Bohr soon made important contributions to what is now called the "old quantum theory." However, it was not until 1924 that a more complete picture emerged with Louis de Broglie's matter-wave hypothesis, and the true importance of quantum mechanics became clear. Some of the most prominent scientists to subsequently contribute in the mid-1920s to what is now called the "new quantum mechanics" or "new physics" were Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, and Erwin Schrödinger. Later, the field was further expanded with work by Julian Schwinger, Sin-Itiro Tomonaga and Richard Feynman on the development of quantum electrodynamics in 1947 and by Murray Gell-Mann in particular on the development of quantum chromodynamics.

[Image: The interference that produces coloured bands on bubbles cannot be explained by a model that depicts light as a particle; it can be explained by a model that depicts it as a wave. The drawing shows sine waves resembling waves on the surface of water being reflected from two surfaces of a film of varying width, but that depiction of the wave nature of light is only a crude analogy.]

Early researchers differed in their explanations of the fundamental nature of what we now call electromagnetic radiation. Some maintained that light and other frequencies of electromagnetic radiation are composed of particles, while others asserted that electromagnetic radiation is a wave phenomenon. In classical physics these ideas are mutually contradictory. Ever since the early days of QM scientists have acknowledged that neither idea by itself can explain electromagnetic radiation.

Later experiments indicated that a packet or quantum model was needed to explain some phenomena. When light strikes an electrical conductor it causes electrons to move away from their original positions. The observed phenomenon could only be explained by assuming that the light delivers energy in definite packets. In a photoelectric device such as the light meter in a camera, light hitting the metallic detector causes electrons to move.
Greater intensities of light at one frequency can cause more electrons to move, but they will not move faster. In contrast, higher frequencies of light can cause electrons to move faster. Ergo, intensity of light controls current, but frequency of light controls voltage. These observations raised a contradiction when compared with sound waves and ocean waves, where only intensity was needed to predict the energy of the wave. In the case of light, frequency appeared to predict energy. Something was needed to explain this phenomenon and to reconcile experiments that had shown light to have particle nature with experiments that had shown it to have wave nature.

Despite the success of quantum mechanics, it does have some controversial elements. For example, the behaviour of microscopic objects described in quantum mechanics is very different from our everyday experience, which may provoke some degree of incredulity. Most of classical physics is now recognized to be composed of special cases of quantum physics theory and/or relativity theory. Dirac brought relativity theory to bear on quantum physics so that it could properly deal with events that occur at a substantial fraction of the speed of light. Classical physics, however, also deals with mass attraction (gravity), and no one has yet been able to bring gravity into a unified theory with the relativized quantum theory.

Spectroscopy and onward

[Image: NASA photo of the bright-line spectrum of hydrogen.]
[Image: Photo of the bright-line spectrum of nitrogen.]

In 1885, Johann Jakob Balmer (1825-1898) figured out how the wavelengths in the spectrum of atomic hydrogen are related to each other. The formula is a simple one:

$\frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{n^2}\right)$

where $\lambda$ is wavelength, $R$ is the Rydberg constant, and $n$ is an integer ($n = 3, 4, \ldots$). This formula can be generalized to apply to atoms that are more complicated than hydrogen, but we will stay with hydrogen for this general exposition. (That is the reason that the denominator in the first fraction is expressed as a square.)

The next development was the discovery of the Zeeman effect, named after Pieter Zeeman (1865-1943). The physical explanation of the Zeeman effect was worked out by Hendrik Antoon Lorentz (1853-1928). Lorentz hypothesized that the light emitted by hydrogen was produced by vibrating electrons. It was possible to get feedback on what goes on within the atom because moving electrons create a magnetic field and so can be influenced by the imposition of an external magnetic field in a manner analogous to the way that one iron magnet will attract or repel another magnet. The Zeeman effect could be interpreted to mean that light waves are originated by electrons vibrating in their orbits, but classical physics could not explain why electrons should not fall out of their orbits and into the nucleus of their atoms, nor could classical physics explain why their orbits would be such as to produce the series of discrete frequencies derived by Balmer's formula and displayed in the line spectra. Why did the electrons not produce a continuous spectrum?

Old quantum theory

Quantum mechanics developed from the study of electromagnetic waves through spectroscopy, which includes visible light seen in the colours of the rainbow, but also other waves, including the more energetic waves like ultraviolet light, x-rays, and gamma rays, and the waves with longer wavelengths, including infrared waves, microwaves and radio waves. Only waves that travel at the speed of light are included in this description.
Also, when the word "particle" is used below, it always refers to elementary or subatomic particles.

Planck's constant

Classical physics predicted that a black-body radiator would produce infinite energy, but that result was not observed in the laboratory. If black-body radiation was dispersed into a spectrum, then the amount of energy radiated at various frequencies rose from zero at one end, peaked at a frequency related to the temperature of the radiating object, and then fell back to zero. In 1900, Max Planck developed an empirical equation that could account for the observed energy curves, but he could not harmonize it with classical theory. He concluded that the classical laws of physics do not apply on the atomic scale as had earlier been assumed.

In this theoretical account, Planck allowed all possible frequencies, all possible wavelengths. However, he restricted the energy that is delivered. "In classical physics,... the energy of a given oscillator depends merely on its amplitude, and this amplitude is subject to no restriction." But, according to Planck's theory, the energy emitted by an oscillator is strictly proportional to its frequency. The higher the frequency, the greater the energy. To reach this theoretical conclusion, he postulated that a radiating body consisted of an enormous number of elementary oscillators, some vibrating at one frequency and some at another, with all frequencies from zero to infinity being represented. The energy E of any one oscillator was not permitted to take on any arbitrary value, but was proportional to some integral multiple of the frequency f of the oscillator. That is,

$E = nhf$, where $n = 1, 2, 3, \ldots$

The proportionality constant h is called Planck's constant. One of the most direct applications is finding the energy of photons. If h is known, and the frequency of the photon is known, then the energy of the photons can be calculated. For instance, if a beam of light illuminated a target, and its frequency was 540 × 10¹² hertz, then the energy of each photon would be h × 540 × 10¹² joules. The value of h itself is exceedingly small, about 6.6260693 × 10⁻³⁴ joule-seconds. This means that the photons in the beam of light have an energy of about 3.58 × 10⁻¹⁹ joules or (in another system of measurement) approximately 2.23 eV.

When the energy of a wave is described in this manner, it seems that the wave is carrying its energy in little packets. This discovery then seemed to remake the wave into a particle. These packets of energy carried along with the wave were called quanta by Planck. Quantum mechanics began with the discovery that energy is delivered in packets whose size is related to the frequencies of all electromagnetic waves (and to the color of visible light since in that case frequency determines colour). Be aware, however, that these descriptions in terms of packet, wave and particle import macro-world concepts into the quantum world, where they have only provisional relevance or appropriateness. In early research on light, there were two competing ways to describe light, either as a wave propagated through empty space, or as small particles traveling in straight lines. Because Planck showed that the energy of the wave is made up of packets, the particle analogy became favored to help understand how light delivers energy in multiples of certain set values designated as quanta of energy. Nevertheless, the wave analogy is also indispensable for helping to understand other light phenomena.
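As a quick check of the arithmetic just quoted, a few lines of Python reproduce both numbers (h and the frequency are as given in the text; the joules-to-eV divisor is the standard elementary charge):

```python
# Photon energy E = h*f for the 540 x 10^12 Hz beam discussed above.
h = 6.6260693e-34            # Planck's constant, joule-seconds
f = 540e12                   # frequency, hertz
E = h * f                    # energy per photon, joules
print(E)                     # ~3.58e-19 J
print(E / 1.602176634e-19)   # ~2.23 eV (1 eV = 1.602...e-19 J)
```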
In 1905, Albert Einstein used Planck's constant to explain the photoelectric effect by postulating that the energy in a beam of light occurs in concentrations that he called light quanta, later on called photons. According to that account, a single photon of a given frequency delivers an invariant amount of energy. In other words, individual photons can deliver more or less energy, but only depending on their frequencies. Although the description that stemmed from Planck's research sounds like Newton's corpuscular account, Einstein's photon was still said to have a frequency, and the energy of the photon was held to be proportional to that frequency. The particle account had been compromised once again.

Both the idea of a wave and the idea of a particle are models derived from our everyday experience. We cannot see individual photons. We can only investigate their properties indirectly. We look at some phenomena, such as the rainbow of colours that we see when a thin film of oil rests on the surface of a puddle of water, and we can explain that phenomenon to ourselves by comparing light with waves. We look at other phenomena, such as the way a photoelectric meter in our camera works, and we explain it by analogy to particles colliding with the detection screen in the meter. In both cases we take concepts from our everyday experience and apply them to a world we have never seen. Neither form of explanation, wave or particle, is entirely satisfactory. In general any model can only approximate that which it models. A model is useful only within the range of conditions where it is able to predict the real thing with accuracy. Newtonian physics is still a good predictor of many of the phenomena in our everyday life. To remind us that both "wave" and "particle" are concepts imported from our macro world to explain the world of atomic-scale phenomena, some physicists such as George Gamow have used the term "wavicle" to refer to whatever it is that is really there. In the following discussion, "wave" and "particle" may both be used depending on which aspect of quantum mechanical phenomena is under discussion.

Reduced Planck's (or Dirac's) constant

[Image: Relation between a cycle and a wave; half of a circle describes half of the cycle of a wave.]

Planck's constant originally represented the energy that a light wave carries as a function of its frequency. A step in the development of this concept appeared in Bohr's work. Bohr was using a "planetary" or particle model of the electron, and could not understand why a 2π factor was essential to his experimentally derived formulae. Later, de Broglie postulated that electrons have frequencies, just as do photons, and that the frequency of an electron must conform to the conditions for a standing wave that can exist in a certain orbit. That is to say, the beginning of one cycle of a wave at some point on the circumference of a circle (since that is what an orbit is) must coincide with the end of some cycle. There can be no gap, no length along the circumference that is not participating in the vibration, and there can be no overlap of cycles. So the circumference of the orbit, C, must equal the wavelength, λ, of the electron multiplied by some positive integer (n = 1, 2, 3, ...). Knowing the circumference one can calculate wavelengths that fit that orbit, and knowing the radius, r, of the orbit one can calculate its circumference. To put all that in mathematical form,

$C = 2\pi r = n\lambda$, and so $\lambda = 2\pi r/n$,
and the appearance of the 2π factor is seen to occur simply because it is needed to calculate possible wavelengths (and therefore possible frequencies) when the radius of an orbit is already known. Again in 1925, when Werner Heisenberg developed his full quantum theory, calculations involving wave analysis called Fourier series were fundamental, and so the "reduced" version of Planck's constant (h/2π) became invaluable because it includes a conversion factor that facilitates calculations involving wave analysis. Finally, when this reduced Planck's constant appeared naturally in Dirac's equation, it was given an alternate designation, "Dirac's constant." Therefore, it is appropriate to begin with an explanation of what this constant is, even though the theories that made its use convenient have yet to be discussed.

As noted above, the energy of any wave is given by its frequency multiplied by Planck's constant. A wave is made up of crests and troughs. In a wave, a cycle is defined by the return from a certain position to the same position, such as from the top of one crest to the next crest. A cycle actually is mathematically related to a circle, and both have 360 degrees. A degree is a unit of measure for the amount of turn needed to produce an arc of a certain length at a given distance. A sine curve is generated by a point on the circumference of a circle as that circle rotates. There are 2π radians per cycle in a wave, which is mathematically related to the way a circle has 360° (equal to 2π radians). (A radian is the angle subtended at the centre of a circle by an arc of the circumference whose length equals the radius of the circle.) Since one cycle is 2π radians, dividing h by 2π describes a constant that, when multiplied by the frequency of a wave, gives the energy in joules per radian rather than per cycle. The reduced Planck's constant is written in mathematical formulas as ħ, and is read as "h-bar":

$\hbar = \frac{h}{2\pi}$.

The reduced Planck's constant allows computation of the energy of a wave in units per radian instead of in units per cycle. These two constants h and ħ are merely conversion factors between energy units and frequency units. The reduced Planck's constant is used more often than h alone in quantum mechanical formulas for many reasons, one of which is that angular velocity or angular frequency is ordinarily measured in radians per second, so using ħ, which also works in radians, saves a conversion between radians and degrees or cycles. Also, when equations relevant to those problems are written in terms of ħ, the frequently occurring 2π factors in numerator and denominator can cancel out, saving a computation. However, in other cases, as in the orbits of the Bohr atom, h/2π was obtained naturally for the angular momentum of the orbits. Another expression for the relation between energy and wavelength is given in electron volts for energy and angstroms for wavelength: E_photon(eV) = 12,400/λ(Å). It appears not to involve h at all, but that is only because a different system of units has been used and now, numerically, the appropriate conversion factor is 12,400.
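A short numerical check (our own, using standard values for the speed of light and the electron charge) confirms both the value of ħ and the 12,400 rule of thumb just quoted:

```python
import math

h = 6.6260693e-34        # Planck's constant, J s
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electron volt

hbar = h / (2 * math.pi)
print(hbar)              # ~1.0546e-34 J s, the reduced Planck's constant

# E(eV) = h*c / (lambda * eV); with lambda in angstroms (1 A = 1e-10 m),
# the conversion factor h*c/eV comes out near the 12,400 quoted above.
print(h * c / eV / 1e-10)   # ~12398 eV * angstrom
```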
Bohr atom

[Image: The Bohr model of the atom, showing an electron quantum jumping to the ground state n = 1.]

In 1897 the particle called the electron was discovered. By means of the gold foil experiment physicists discovered that matter is, volume for volume, largely space. Once that was clear, it was hypothesized that negative charge entities called electrons surround positively charged nuclei. So at first all scientists believed that the atom must be like a miniature solar system. But that simple analogy predicted that electrons would, within about one hundredth of a microsecond, crash into the nucleus of the atom. The great question of the early 20th century was, "Why do electrons normally maintain a stable orbit around the nucleus?"

Bohr explained the orbits that electrons can take by relating the angular momentum of electrons in each "permitted" orbit to the value of h, Planck's constant. He held that an electron in the lowest orbital has a discrete angular momentum equal to h/2π. Each orbit after the initial orbit must provide for an electron's angular momentum being an integer multiple of that lowest value. He depicted electrons in atoms as being analogous to planets in a solar orbit. However, he took Planck's constant to be a fundamental quantity that introduces special requirements at this subatomic level and that explains the spacing of those "planetary" orbits. Bohr considered one revolution in orbit to be equivalent to one cycle in an oscillator (as in Planck's initial measurements to define the constant h), which is in turn similar to one cycle in a wave. The number of revolutions per second is (or defines) what we call the frequency of that electron or that orbital. Specifying that the frequency of each orbit must be an integer multiple of Planck's constant h would only permit certain orbits, and would also fix their size.

Bohr generalized Balmer's formula for hydrogen by replacing the denominator 1/4 (that is, 1/2²) with an explicit squared variable:

$\frac{1}{\lambda} = R_\mathrm{H}\left(\frac{1}{m^2} - \frac{1}{n^2}\right)$, where $m = 1, 2, 3, \ldots$ and $n > m$,

where λ is the wavelength of the light, R_H is the Rydberg constant for hydrogen, and the integers n and m refer to the orbits between which electrons can transit. This generalization predicted many more line spectra than had been previously detected, and experimental confirmation of this prediction followed.

It follows almost immediately that if λ is quantized as the formula above indicates, then the momentum of any photon must be quantized. The frequency of light, ν, at a given wavelength λ is given by the relationship

$\nu = \frac{c}{\lambda}$, and so $\lambda = \frac{c}{\nu}$;

multiplying by h/h = 1 gives

$\lambda = \frac{hc}{h\nu} = \frac{hc}{E}$,

since E = hν. This can be rewritten as

$\lambda = \frac{h}{E/c}$,

and E/c = p (momentum), so

$\lambda = \frac{h}{p}$, or $p = \frac{h}{\lambda}$.

Beginning with line spectra, physicists were able to deduce empirically the rules according to which the orbits of electrons are determined and to discover something vital about the momentums involved — that they are quantized. Bohr next realized how the angular momentum of an electron in its orbit, L, is quantized, i.e., he determined that there is some constant value K such that when it is multiplied by Planck's constant, h, it will yield the angular momentum that pertains to the lowest orbital. When it is multiplied by successive integers it will then give the values of other possible orbitals. He later determined that K = 1/2π.
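To see the formula in action, here is a small Python illustration (ours, not from the source; the value of R_H is the standard tabulated one) generating the visible Balmer series, m = 2:

```python
# Hydrogen lines from the generalized Balmer formula:
# 1/lambda = R_H * (1/m^2 - 1/n^2), here for the visible series (m = 2).
R_H = 1.0967758e7    # Rydberg constant for hydrogen, per metre

def wavelength_nm(m: int, n: int) -> float:
    inv_lambda = R_H * (1.0 / m**2 - 1.0 / n**2)   # per metre
    return 1e9 / inv_lambda                        # convert to nanometres

for n in range(3, 8):
    print(f"n = {n} -> {wavelength_nm(2, n):.1f} nm")
# ~656, 486, 434, 410, 397 nm: the familiar lines of the hydrogen spectrum
```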
Wave-particle duality

[Image: Probability distribution of the Bohr atom.]

Niels Bohr determined that it is impossible to describe light adequately by the sole use of either the wave analogy or of the particle analogy. Therefore he enunciated the principle of complementarity, which is a theory of pairs, such as the pairing of wave and particle or the pairing of position and momentum. Louis de Broglie worked out the mathematical consequences of these findings. In quantum mechanics, it was found that electromagnetic waves could react in certain experiments as though they were particles and in other experiments as though they were waves. It was also discovered that subatomic particles could sometimes be described as particles and sometimes as waves. This discovery led to the theory of wave-particle duality by Louis-Victor de Broglie in 1924, which states that subatomic entities have properties of both waves and particles at the same time.

The Bohr atom model was enlarged upon with the discovery by de Broglie that the electron has wave-like properties. In accord with de Broglie's conclusions, electrons can only appear under conditions that permit a standing wave. A standing wave can be made if a string is fixed on both ends and made to vibrate (as it would in a stringed instrument). That illustration shows that the only standing waves that can occur are those with zero amplitude at the two fixed ends. The waves created by a stringed instrument appear to oscillate in place, simply changing crest for trough in an up-and-down motion. A standing wave can only be formed when the wave's length fits the available vibrating entity. In other words, no partial fragments of wave crests or troughs are allowed. In a round vibrating medium, the wave must be a continuous formation of crests and troughs all around the circle. Each electron must be its own standing wave in its own discrete orbital.

Development of modern quantum mechanics

Full quantum mechanical theory

Werner Heisenberg produced the full quantum mechanical theory in 1925, at the young age of 23. Following his mentor, Niels Bohr, Heisenberg began to work out a theory for the quantum behaviour of electron orbitals. Because electrons could not be observed in their orbits, Heisenberg went about creating a mathematical description of quantum mechanics built on what could be observed, that is, the light emitted from atoms in their characteristic atomic spectra. Heisenberg studied the electron orbital on the model of a charged ball on a spring, an oscillator, whose motion is anharmonic (not quite regular). Heisenberg first explained this kind of observed motion in terms of the laws of classical mechanics known to apply in the macro world, and then applied quantum restrictions, discrete (non-continuous) properties, to the picture. Doing so causes gaps to appear between the predicted orbitals, so that the mathematical description he formulated would then represent only the electron orbitals predicted on the basis of the atomic spectra.

In approaching the problem that Bohr gave him to solve, Heisenberg took the strategic stance that he would not deal with unobservable quantities. He would begin formulating equations using only the quantities that could be observed.
That strategy led him to begin with the actual experimental evidence at hand: measurements had been well established for such data as (1) the frequencies (and the mathematically related energies) emitted or absorbed by electron transitions from one of the Bohr stationary orbits, known to be associated with the bright line spectra, and (2) the "transition amplitude" or likelihood of transition from any given orbit to any given orbit, known from the strength of the various lines in the bright spectrum. From classical formulas that would characterize those phenomena Heisenberg created analogous formulas that took account of quantum conditions. Formulas that followed from the fundamental decisions made at this point gave good results, but results that were sometimes not what one might expect. In the paper wherein he introduced quantum mechanics to the world he cautions, "A significant difficulty arises, however, if we consider two quantities x(t), y(t), and ask after their product.... Whereas in classical [theory] x(t)y(t) is always equal to y(t)x(t), this is not necessarily the case in quantum theory." When the predicted values are exhibited in matrix form and multiplications are performed, the nature of the difficulty appears in a form that is more familiar to mathematicians. More significantly, empirical studies validate the theoretical results and suggest that there is something of deep importance in the fact that the difference between x(t)y(t) and y(t)x(t) is a value related to Planck's constant.

Schema for a table of transition frequencies (produced when electrons change orbitals):

Electron states    S1      S2      S3      S4      S5     ...
S1               f1→1    f2→1    f3→1    f4→1    f5→1     ...
S2               f1→2    f2→2    f3→2    f4→2    f5→2     ...
S3               f1→3    f2→3    f3→3    f4→3    f5→3     ...
S4               f1→4    f2→4    f3→4    f4→4    f5→4     ...
S5               f1→5    f2→5    f3→5    f4→5    f5→5     ...

Schema for a related table showing the transition amplitudes:

Electron states    S1      S2      S3      S4      S5     ...
S1               a1→1    a2→1    a3→1    a4→1    a5→1     ...
S2               a1→2    a2→2    a3→2    a4→2    a5→2     ...
S3               a1→3    a2→3    a3→3    a4→3    a5→3     ...
S4               a1→4    a2→4    a3→4    a4→4    a5→4     ...
S5               a1→5    a2→5    a3→5    a4→5    a5→5     ...

As related above, Heisenberg developed ways of meaningfully relating the information in tables such as these in a mathematical way. Empirically filling in the values for tables involving quantum quantities is not a simple procedure, since any measurement made on a single system gives that one value but has the potential of changing other values. So large numbers of identical copies of the system in question must be prepared, and a single measure made on each system. Multiple experiments to determine the same characteristics are made, and the results are averaged. Even then, precise measurements of all characteristics of the system as they would appear simultaneously cannot be provided, because of quantum uncertainty. A precise determination of one characteristic's value necessarily creates an uncertainty in the value of its correlate. Certain pairs of observables simply cannot be simultaneously measured to an arbitrarily high level of precision: if simultaneous measurements are made of correlated characteristics (such as position and momentum) in multiple identical systems, there will inevitably be differences in the measurements, such that the product of their uncertainties is equal to or greater than ħ/2.

In 1925 Heisenberg published a paper entitled "Quantum-mechanical re-interpretation of kinematic and mechanical relations" relating his discoveries. So ended the old quantum theory and began the age of quantum mechanics.
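The non-commutativity described above can be demonstrated numerically. The following minimal sketch (our own illustration, not Heisenberg's calculation) builds the standard truncated harmonic-oscillator matrices for position and momentum and shows that the two products differ by a quantity proportional to Planck's constant:

```python
# Position and momentum as matrices for a harmonic oscillator,
# truncated to N x N. Their products taken in the two orders differ,
# and XP - PX is (i*hbar) times the identity -- the "value related
# to Planck's constant" mentioned in the text.
import numpy as np

N = 6
hbar = 1.0                                    # work in units where hbar = 1
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # lowering operator
X = np.sqrt(hbar / 2) * (a + a.T)             # position matrix
P = 1j * np.sqrt(hbar / 2) * (a.T - a)        # momentum matrix

comm = X @ P - P @ X
print(np.round(comm, 10))
# i*hbar on the diagonal, except the last entry, an artifact of
# truncating the infinite matrices to finite size
```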
Heisenberg's paper gave few details that might aid readers in determining how he actually contrived to get his results for the one-dimensional models he used to form the hypothesis that proved so useful. In his paper, Heisenberg proposed to "discard all hope of observing hitherto unobservable quantities, such as the position and period of the electron", and restrict himself strictly to actually observable quantities. He needed mathematical rules for predicting the relations actually observed in nature, and the rules he produced worked differently depending on the sequence in which they were applied. "It quickly became clear that the non-commutativity (in general) of kinematical quantities in quantum theory was the really essential new technical idea in the paper."

The special type of multiplication that turned out to be required in his formula was most elegantly described by using special arrays of numbers called matrices. In ordinary situations it does not matter in which order the operations involved in multiplication are performed, but matrix multiplication does not commute. Essentially that means that it matters in which order given operations are performed. Multiplying matrix A by matrix B is not the same as multiplying matrix B by matrix A. In symbols, A×B is in general not equal to B×A. (The important thing in quantum theory is that it turned out to matter whether one experimentally measures velocity first and then immediately measures position, or vice versa.) The matrix convention turned out to be a convenient way of organizing information and making clear the exact sequence in which calculations must be made, and it reflects in a symbolic form the unexpected results obtained in the real world.

Heisenberg approached quantum mechanics from the historical perspective that treated an electron as an oscillating charged particle. Bohr's use of this analogy had already allowed him to explain why the radii of the orbits of electrons could only take on certain values. It followed from this interpretation of the experimental results available, and from the quantum theory that Heisenberg subsequently created, that an electron could not be at any intermediate position between two "permitted" orbits. Therefore electrons were described as "jumping" from orbit to orbit. The idea that an electron might now be in one place and an instant later be in some other place without having travelled between the two points was one of the earliest indications of the "spookiness" of quantum phenomena. Although the scale is smaller, the "jump" from orbit to orbit is as strange and unexpected as would be a case in which someone stepped out of a doorway in London onto the streets of Los Angeles. Quantum tunneling is one instance in which electrons seem to be able to move in the "spooky" way that Heisenberg ascribed to their actions within atoms.

Amplitudes of position and momentum that have a period of 2π, like a cycle in a wave, are called Fourier series variables. Heisenberg described the particle-like properties of the electron in a wave as having position and momentum in his matrix mechanics. When these amplitudes of position and momentum are measured and multiplied together, they give intensity. However, he found that when the position and momentum were multiplied together in one order, and then multiplied together in the reverse order, there was a difference or deviation in intensity between the two products of h/2π.
Heisenberg would not understand the reason for this deviation until two more years had passed, but for the time being he satisfied himself with the idea that the math worked and provided an exact description of the quantum behaviour of the electron. Matrix mechanics was the first complete definition of quantum mechanics, its laws, and properties that described fully the behaviour of the electron. It was later extended to apply to all subatomic particles. Very soon after matrix mechanics was introduced to the world, Schrödinger, acting independently, produced a quantum wave theory that appeared to have no similarities whatsoever to Heisenberg's theory. It was computationally easier and avoided some of the odd-sounding ideas like "quantum leaps" of an electron from one orbit to another. But within a short time Schrödinger himself had shown that the two theories produced essentially the same results in all situations. Finally, Dirac made the idea of non-commutativity central to his own theory and proved the formulations of Heisenberg and of Schrödinger to be special cases of his own theory.

Schrödinger wave equation

[Image: Model of the Schrödinger atom, showing the nucleus with two protons (blue) and two neutrons (red), orbited by two electrons (waves).]

Because particles could be described as waves, later in 1925 Erwin Schrödinger analyzed what an electron would look like as a wave around the nucleus of the atom. Using this model, he formulated his equation for particle waves. Rather than explaining the atom by analogy to satellites in planetary orbits, he treated everything as waves whereby each electron has its own unique wavefunction. A wavefunction is described in Schrödinger's equation by three properties (later Wolfgang Pauli added a fourth). The three properties were (1) an "orbital" designation, indicating whether the particle wave is one that is closer to the nucleus with less energy or one that is farther from the nucleus with more energy; (2) the shape of the orbital, i.e., an indication that orbitals were not just spherical but other shapes; and (3) the magnetic moment of the orbital, which is a manifestation of force exerted by the charge of the electron as it rotates around the nucleus.

These three properties were called collectively the wavefunction of the electron and are said to describe the quantum state of the electron. "Quantum state" means the collective properties of the electron describing what we can say about its condition at a given time. For the electron, the quantum state is described by its wavefunction, which is designated in physics by the Greek letter ψ (psi, pronounced "sigh").

The three properties of Schrödinger's equation that describe the wavefunction of the electron, and therefore also describe the quantum state of the electron, are each called quantum numbers. The first property, which describes the orbital, is numbered according to Bohr's model, where n is the letter used to describe the energy of each orbital. This is called the principal quantum number. The next quantum number, which describes the shape of the orbital, is called the azimuthal quantum number, and it is represented by the letter l (lower case L). The shape is caused by the angular momentum of the orbital. The rate of change of the angular momentum of any system is equal to the resultant external torque acting on that system.
In other words, angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number "l" represents the orbital angular momentum of an electron around its nucleus. However, the shape of each orbital has its own letter as well. The first shape is spherical and is described by the letter s. The next shape is like a dumbbell and is described by the letter p. The other shapes of orbitals become more complicated (see Atomic Orbitals) and are described by the letters d, f, and g. For the shape of a carbon atom, see Carbon atom. The third quantum number of Schrödinger's equation describes the magnetic moment of the electron and is designated by the letter m, and sometimes as the letter m with a subscript l, because the magnetic moment depends upon the second quantum number l.

In May 1926 Schrödinger published a proof that Heisenberg's matrix mechanics and his own wave mechanics gave equivalent results: mathematically they were the same theory. Yet the two men disagreed on the interpretation of this theory. Heisenberg saw no problem in the existence of discontinuous quantum jumps, while Schrödinger hoped that a theory based on continuous wave-like properties could avoid this "nonsense about quantum jumps" (in the words of Wilhelm Wien).

Uncertainty principle

In 1927, Heisenberg made a new discovery on the basis of his quantum theory that had further practical consequences of this new way of looking at matter and energy on the atomic scale. In Heisenberg's matrix mechanics formula, the product of position and momentum taken in one order differs from the product taken in the reverse order by h/2π. The more precisely the position of a particle is determined, the less precisely the momentum can be known, with ħ/2 the lower limit on the product of the two uncertainties. This conclusion came to be called "Heisenberg's indeterminacy principle," or Heisenberg's uncertainty principle. For moving particles in quantum mechanics, there is simply a certain degree of exactness and precision that is missing. The observer can be precise when taking a measurement of position or can be precise when taking a measurement of momentum, but there is an inverse imprecision when measuring both at the same time, as in the case of a moving particle like the electron. In the most extreme case, absolute precision of one variable would entail absolute imprecision regarding the other.

Heisenberg, in a voice recording of an early lecture on the uncertainty principle, pointing to a Bohr model of the atom, said: "You can say, well, this orbit is really not a complete orbit. Actually at every moment the electron has only an inaccurate position and an inaccurate velocity and between these two inaccuracies there is this uncertainty relation. And only by this idea it was possible to say what such an orbit was."

One consequence of the uncertainty principle was that the electron could no longer be considered as occupying an exact location in its orbital. Rather, the electron had to be described in terms of every point it could possibly inhabit. Calculating points of probable location for the electron in its known orbital created the picture of a cloud of points in a spherical shape for the orbital of a hydrogen atom, which points gradually faded out nearer to the nucleus and farther from the nucleus. This picture may be termed a probability distribution.
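Quantitatively, the principle states that the product of the two uncertainties obeys $\Delta x\,\Delta p \geq \hbar/2$. The textbook worked example, included here for the record, is the Gaussian wavepacket, which saturates the bound exactly:

$\psi(x) = (2\pi\sigma^2)^{-1/4}\, e^{-x^2/(4\sigma^2)} \;\Rightarrow\; \Delta x = \sigma, \quad \Delta p = \frac{\hbar}{2\sigma}, \quad \Delta x\,\Delta p = \frac{\hbar}{2}.$

Narrowing the packet in position (smaller σ) necessarily broadens its spread in momentum, and vice versa.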
Therefore, the Bohr atom number n for each orbital became known as an n-sphere in the three-dimensional atom and was pictured as a probability cloud where the electron surrounded the atom all at once. This led to the further description by Heisenberg that if a measurement of the electron was not being taken, it could not be described in one particular location but was everywhere in the electron cloud at once. In other words, quantum mechanics cannot give exact results, but only the probabilities for the occurrence of a variety of possible results. Heisenberg went further and said that the path of a moving particle only comes into existence once we observe it. However strange and counter-intuitive this assertion may seem, quantum mechanics does still tell us the location of the electron's orbital, its probability cloud. Heisenberg was speaking of the particle itself, not its orbital, which is in a known probability distribution.

It is important to note that although Heisenberg used infinite sets of positions for the electron in his matrices, this does not mean that the electron could be anywhere in the universe. Rather there are several laws that show the electron must be in one localized probability distribution. An electron is described by its energy in Bohr's atom, which was carried over to matrix mechanics. Therefore, an electron in a certain n-sphere had to be within a certain range from the nucleus depending upon its energy. This restricts its location. The number of places an electron can be is also called "the number of cells in its phase space". The uncertainty principle set a lower limit to how finely one can chop up classical phase space, so the number of places that an electron can be in its orbital becomes finite. An electron's location in an atom is defined to be in its orbital, but stops at the nucleus and before the next n-sphere orbital begins.

Classical physics had shown since Newton that if the positions of stars and planets and details about their motions were known, then where they will be in the future can be predicted. For subatomic particles, Heisenberg denied this notion, showing that due to the uncertainty principle one cannot know the precise position and momentum of a particle at a given instant, so its future motion cannot be determined; only a range of possibilities for the future motion of the particle can be described.

Wavefunction collapse

Schrödinger's wave equation, with its unique wavefunction for a single electron, is also spread out in a probability distribution like Heisenberg's quantized particle-like electron. This is because a wave is naturally a widespread disturbance and not a point particle. Therefore, Schrödinger's wave equation has the same predictions made by the uncertainty principle, because uncertainty of location is built into the definition of a widespread disturbance like a wave. Uncertainty only needed to be defined from Heisenberg's matrix mechanics because the treatment was from the particle-like aspects of the electron. Schrödinger's wave equation shows that the electron is in the probability cloud at all times in its probability distribution as a wave that is spread out. Max Born discovered in 1926 that when the square of the magnitude of Schrödinger's wavefunction, |ψ|², is computed, the electron's location as a probability distribution is obtained.
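Born's rule can be illustrated in a few lines (a minimal sketch of our own, with a Gaussian standing in for a real wavefunction):

```python
# Born's rule: |psi|^2, normalized, is the probability distribution
# for where the electron will be found on measurement.
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)           # a one-dimensional grid of positions
psi = np.exp(-x**2 / 2.0)                  # illustrative (unnormalized) wavefunction
prob = np.abs(psi)**2                      # Born's rule: probability ~ |psi|^2
prob /= prob.sum()                         # normalize over the grid
measurements = np.random.choice(x, size=5, p=prob)   # simulated position measurements
print(measurements)                        # clustered near x = 0, where |psi|^2 peaks
```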
Therefore, to the extent that a measurement of the position of an electron can be made at an exact location instead of as a probability distribution, the electron appears to momentarily cease to have wave-like properties. Without wave-like properties, none of Schrödinger's definitions of the electron being wave-like makes sense. The measurement of the position of the particle nullifies the simple wave-like properties, and the one-body form of Schrödinger's equation then fails. Because the electron can no longer be described by its separate wavefunction when measured, due to its wavelength becoming much shorter and its becoming entangled with the particles of the measuring apparatus, this is called wavefunction collapse.

Eigenstates and eigenvalues

The term eigenstate is derived from the German/Dutch word "eigen," which means "inherent" or "characteristic." The word eigenstate is descriptive of the measured state of some object that possesses quantifiable characteristics such as position, momentum, etc. The state being measured and described must be an "observable" (i.e., something that can be experimentally measured either directly or indirectly, like position or momentum), and must have a definite value.

In the everyday world, it is natural and intuitive to think of everything being in its own eigenstate. Everything appears to have a definite position, a definite momentum, a definite value of measure, and a definite time of occurrence. However, quantum mechanics affirms that it is impossible to pinpoint exact values for the momentum of a certain particle like an electron in a given location at a particular moment in time, or, alternatively, that it is impossible to give an exact location for such an object when the momentum has been measured. Due to the uncertainty principle, statements regarding both the position and momentum of particles can only be given in terms of a range of probabilities, a "probability distribution". Eliminating uncertainty in one term maximizes uncertainty in regard to the second parameter. Therefore it became necessary to have a way to clearly formulate the difference between the state of something that is uncertain in the way just described, such as an electron in a probability cloud, and to effectively contrast it with the state of something that is not uncertain, something that has a definite value. When something is in the condition of being definitely "pinned down" in some regard, it is said to possess an eigenstate. For example, if the position of an electron has been made definite, it is said to have an eigenstate of position.

The Pauli exclusion principle

The Pauli exclusion principle states that no electron (or other fermion) can be in the same quantum state as another within an atom. Wolfgang Pauli developed the exclusion principle from what he called a "two-valued quantum degree of freedom" to account for the observation of a doublet, meaning a pair of lines, in the spectrum of the hydrogen atom. The observation meant that there was more energy in the electron orbital from magnetic moment than had previously been described. In early 1925, the young physicists Uhlenbeck and Goudsmit introduced a theory that the electron rotates in space in the same way that the earth rotates on its axis. This would account for the missing magnetic moment and allow for two electrons in the same orbital to be different if their spins were in opposite directions, thus satisfying the exclusion principle.
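Combining the three Schrödinger quantum numbers described earlier with this two-valued spin, the fourth quantum number discussed in the next paragraphs, reproduces the familiar electron capacities of the atomic shells. A minimal counting sketch (our own illustration):

```python
# Electron states per shell: for principal quantum number n, the
# azimuthal number l runs 0..n-1, the magnetic number m runs -l..l,
# and spin s takes two values. The total is 2*n^2.
for n in range(1, 5):
    states = [(n, l, m, s)
              for l in range(n)
              for m in range(-l, l + 1)
              for s in (+0.5, -0.5)]
    print(n, len(states))   # prints 2, 8, 18, 32: the shell capacities
```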
According to Schrödinger's equation, there are three quantum numbers of the electron, but if two electrons can be in the same orbital, there has to be another quantum number (the two-valued quantum degree of freedom) to distinguish the two electrons from each other. No electron can have the same four quantum numbers as another electron in the same atomic orbital. Where two electrons are in the same n-sphere and therefore share the same principal quantum number, they must differ in some other quantum number: shape l, magnetic moment m, or spin s. Where electrons are not in an orbital around the nucleus of an atom, such as in the formation of degenerate gases, they must still follow the Pauli exclusion principle when in a confined space.

Dirac wave equation

In 1928, Paul Dirac worked out a relativistic variation of the Schrödinger equation that accounted for a fourth property of the electron in its orbital: he introduced the fourth quantum number, the spin quantum number designated by the letter s, into the new Dirac equation for the wavefunction of the electron. In 1930, Dirac combined Heisenberg's matrix mechanics with Schrödinger's wave mechanics into a single quantum mechanical representation in his Principles of Quantum Mechanics. The quantum picture of the electron was now complete.

Quantum entanglement

Albert Einstein rejected Heisenberg's uncertainty principle insofar as it seemed to imply more than a necessary limitation on the human ability to know what actually occurs in the quantum realm. In a letter to Max Born in 1926, Einstein famously declared that "God does not play dice". The bare, surface-level prescription for making predictions from quantum mechanics, based on Born's rule for computing probabilities, became known as the Copenhagen interpretation of quantum mechanics. Bohr spent many years developing and refining this interpretation in light of Einstein's objections. After the 1930 Solvay conference, Einstein never again challenged the Copenhagen interpretation on technical points, but he did not cease his philosophical attack on the interpretation, on the grounds of realism and locality.

Einstein, in trying to show that quantum theory was not a complete theory, recognized that the theory predicted that two or more particles that have interacted in the past can exhibit strong correlations when various measurements are made on them. He wanted this to be explained in a classical way through their common past, and preferably not by some "spooky action at a distance". The argument is worked out in a famous paper devoted to what is now called the EPR paradox (Einstein-Podolsky-Rosen, 1935). Assuming what is now usually called "local realism", the EPR paper attempts to show from quantum theory that particles simultaneously possess both position and momentum, while according to the Copenhagen interpretation only one of these two properties exists, and only briefly, in the moment that it is being measured. Einstein considered this conclusion a proof that quantum theory was incomplete, since it refuses to discuss physical properties which objectively exist in nature. The feature of quantum theory leading to these paradoxes is called quantum entanglement: the properties of several separate objects cannot be described by considering them separately, even after taking account of the history of their past interaction. The 1935 paper of Einstein, Podolsky and Rosen is currently Einstein's most cited publication in physics journals.
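The correlations at issue are easy to mimic numerically for the simplest case, measurements of both particles in the same basis. A sketch in plain Python (the 50/50 collapse rule below is just the textbook account of measuring one member of an anticorrelated pair, not a model of any particular experiment):

import random

def measure_entangled_pair():
    # Neither outcome is determined beforehand: the first measurement
    # gives "up" or "down" with probability 1/2, and the second member
    # of the pair must then give the opposite result.
    first = random.choice(["up", "down"])
    second = "down" if first == "up" else "up"
    return first, second

pairs = [measure_entangled_pair() for _ in range(10000)]
frac_up = sum(a == "up" for a, b in pairs) / len(pairs)
print("first particle 'up' fraction:", frac_up)            # ~0.5, random
print("always opposite:", all(a != b for a, b in pairs))   # True, correlated

Of course, a classical script like this secretly decides both outcomes in one place. The content of Bell's later work, which comes up again at the end of this page, is that no local mechanism of this kind can reproduce the full set of quantum correlations once measurements in different bases are compared.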
Bohr's original response to Einstein was that the particles were part of one indivisible system. Einstein's challenge led to decades of substantial research into quantum entanglement. The research would seem to confirm Bohr's objection that the two entangled particles must be viewed together as one whole, and moreover, that difficulties only arise by insisting on the reality of outcomes of measurements that are not made anyway. Moreover, God does throw dice, though rather peculiar ones. A real dice throw can be completely understood with classical mechanics, its outcome merely a function of the initial conditions. The outcome of tossing quantum dice, however, has no antecedent: no cause or explanation at all.

According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive (action ≫ h), the classical dynamics tends to emerge, with some exceptions such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects; the classical description will suffice. Even so, trying to make sense of quantum theory is an ongoing process which has spawned a number of interpretations of quantum theory, ranging from the conventional Copenhagen interpretation to hidden variables and many worlds. There seems to be no end in sight to the philosophical musings on the subject; however, the empirical and technical success of the theory is unrivalled: all modern fundamental physical theories are quantum theories, with special relativity subsumed within quantum field theory.
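The criterion action ≫ h can be given rough numbers. A crude sketch (Python; the pendulum figures are invented illustrative values, and "action" is estimated simply as momentum times characteristic length):

h = 6.626e-34  # Planck's constant, J*s

# Crude action estimate: momentum * characteristic length scale.
pendulum = 0.1 * 1.0 * 0.1               # 0.1 kg at 1 m/s over 0.1 m
electron = 9.11e-31 * 2.2e6 * 5.3e-11    # electron in a hydrogen atom

print("pendulum: action/h ~", pendulum / h)  # ~1.5e31, utterly classical
print("electron: action/h ~", electron / h)  # ~0.16, fully quantum

With over thirty orders of magnitude separating the two cases, it is no surprise that the classical description suffices for everyday objects.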
Pauli, “armchair physicists”, and “not even wrong”

Ah, controversy!  Physics is of course not immune from it, and sometimes the participants in an argument can let anger get the better of them. An example of this began last week, when the following video clip appeared, featuring Professor Brian Cox explaining to a lay audience the Pauli exclusion principle:

For reasons that I will try and elaborate on in this post, this short video was, to say the least, eyebrow-raising to me.  Tom over at Swans on Tea picked up on the same video, and wrote a critique of it with the not quite politic title, “Brian Cox is Full of **it“, in which he explained his initial critique of the video based on his own knowledge.  I piped in with a comment,

Well put. I just saw this clip the other day and it was an eyebrow-raiser, to say the least. I thought I’d mull over the broader implications a bit before writing my own post on the subject, but you’ve addressed it well.

A more technical way to put it, if I were to try, is that the Pauli principle applies to the *entire* quantum state of the wavefunction, not just the energy, as Cox seems to imply. This is why we can, to first approximation, have two electrons in the same energy level in an atom: they can have different “up/down” spin states. Since the position of the particle is part of the wavefunction as well, electrons whose spatial wavefunctions are widely separated are also different.

Well, apparently being criticized was a bit upsetting for Professor Cox, because he fired off the following angry comment to both me and Tom:

“Since the position of the particle is part of the wavefunction as well, electrons whose spatial wavefunctions are widely separated are also different.” What on earth does this mean? What does a wave packet look like for a particle of definite momentum? Come on, this is first year undergraduate stuff. I’m glad that you, Tom, don’t need to know about the fundamentals of quantum theory in order to maintain atomic clocks, otherwise we’d have problems with our global timekeeping!

So, he basically insults both Tom and me in the course of several paragraphs, without really addressing the comments at all.  It gets worse.  In addition to me later being referred to as “sensitive” by the obviously sensitive Dr. Cox (cough cough projection cough), he doubles down on his anger by referring on Twitter to the lot of those criticizing him (including Professor Sean Carroll of Cosmic Variance) as “armchair physicists”.

Well, there have been a number of responses to Cox’s angry rant, including a response on the physics from Sean Carroll and a further elaboration by Tom on his own case at Swans on Tea.  I felt that I should respond myself, at the very least because I’ve been accused of not understanding “undergraduate physics” myself, but also because the “everything is connected” lecture in my opinion represents a really dangerous path for a physicist to go down.

We’ll take a look at this from two points of view; first, I’d like to comment on the style of Cox’s response to criticism, and then on the more important substance of the discussion.

First, on the style.  When your response to criticism from research physicists is that they don’t understand undergraduate physics and that they are “armchair physicists”, you’ve basically admitted that you’ve lost the argument*.  Though scientists certainly get into petty spats far too often, typically sparked by research disagreements, it is not considered a good thing.
It is especially bad form for someone who is representing the field in a very public way to whine and name-call: it is a very poor showing of what science is supposed to be all about.

Okay, let’s get to the substance!  In order to get into the meat of the issue, I should say a few words about the quantum theory, since I don’t discuss it very often on this blog.  Dr. Francis talks a bit about one of the issues — entanglement — over at Galileo’s Pendulum, also in reaction to this “controversy”.

Up through the late 19th century, “classical” physics served very well in describing the universe.  As researchers started to investigate the behavior of matter on a smaller scale, they began to encounter phenomena that couldn’t be explained by the existing laws, such as the structure of the atom (more on this in my old post here). Many of these issues were spectacularly resolved by the hypothesis that subatomic particles such as the electron and proton are not in fact point-like objects but possess wave-like properties.  This idea was introduced by the French physicist Louis de Broglie in his 1924 doctoral thesis, and it naturally explained such phenomena as the discrete energy levels that electrons in atoms possess.  The wave properties of matter can be demonstrated dramatically by using electrons in a Young’s double slit experiment; the electrons exiting the pair of slits produce a wave-like interference pattern of bright and dark bands, just like light.

But this explanation raised a natural and difficult question: what, exactly, is the nature of this electron wave?  An example of the difficulties is provided by the electron double slit experiment.  Individual electrons passing through the slits don’t produce waves; they arrive at discrete and seemingly random points on the detector, like particles.  However, if many, many electrons are sent through the same experiment, one finds that the collection of them forms the interference pattern.  This was shown quite spectacularly in 1976 by an Italian research group**:

How do we explain that individual electrons act like particles but many electrons act like waves?  The conventional interpretation is known as the Copenhagen interpretation, and was developed in the mid-1920s.  In short: the wavefunction of the electron represents the probability of the electron being “measured” with certain properties.  When a property of the electron is measured, such as its position, this wavefunction “collapses” to one of the possible outcomes contained within it.  In the double slit experiment, for instance, a single electron (or, more accurately, its wavefunction) passes through both slits and has a high probability of being detected at one of the “bright” spots of the interference pattern and a low probability of being detected at one of the “dark” spots.  It only takes on a definite position in space when we actually try to measure it.

This interpretation is amazingly successful; coupled with the mathematics developed for the quantum theory (the Schrödinger equation, and so forth) it can reproduce and explain the behavior of most atomic and subatomic systems.  However, the wave hypothesis raises many more deep questions!  What, exactly, is a “measurement”?  How does a wavefunction “collapse” on measurement? If all particles are waves, why don’t we see their wave-like (or quantum) properties in our daily lives? Are the properties of a particle truly undetermined before measurement, or are they well-defined but somehow “hidden” from view?
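The particle-by-particle build-up of the fringes is easy to imitate numerically. In the sketch below (Python with NumPy; the fringe spacing and bin counts are arbitrary illustrative choices), each simulated electron lands at a single random point drawn from the wave-computed probability density, and the pattern emerges only in the aggregate:

import numpy as np

rng = np.random.default_rng(0)

# Idealized two-slit intensity on the screen: I(x) ~ cos^2(pi x), with x
# in units of the fringe spacing (the envelope from the finite slit
# width is ignored for simplicity).
x = np.linspace(-3, 3, 1200)
prob = np.cos(np.pi * x) ** 2
prob /= prob.sum()

for n_electrons in (10, 100, 100000):
    hits = rng.choice(x, size=n_electrons, p=prob)   # one dot per electron
    counts, _ = np.histogram(hits, bins=12, range=(-3, 3))
    print(n_electrons, "electrons:", counts)

# With 10 hits the dots look random; with 100000 the cos^2 fringes
# (alternating full and nearly empty bins) are unmistakable.

This is the logic of the 1976 experiment mentioned above: the wave supplies only the distribution, while each individual electron still arrives as a localized dot.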
This latter question, whether particle properties are well-defined but somehow hidden, formed the basis of a famous counterargument to the quantum theory called the Einstein-Podolsky-Rosen paradox, published in 1935.  The paradox may be formulated in a number of ways; what follows is a simple model from optics.  By the use of a nonlinear optical material, a photon (light particle) of a given energy can be divided into two photons, each with half the energy of the original, propagating in different directions, through the process of spontaneous parametric down-conversion.

There is an important additional property of these half-energy photons, however; due to the physics of their creation, they have orthogonal polarizations.  That is, if the electric field of one photon is oscillating horizontally, the other must be oscillating vertically, and vice-versa.  However — and this is the important part — nothing distinguishes between the two photons on creation, and nothing chooses the polarization of one or the other.  Just like the position of the electron in Young’s double slit experiment is genuinely undetermined until we measure it, the polarization of the photons is undetermined until we make a measurement.  Nevertheless, there is a connection between the two photons: we don’t know which one has which polarization, but we know for certain that the polarizations are perpendicular.  If we were to look at the photon polarization head-on, we might see something of the form shown below:

The photons are said to be entangled; though their specific behavior is undetermined, the physics of their creation still forced a relationship between the two.

Here’s where E, P & R felt there was a paradox: suppose we point our photons to opposite ends of the galaxy.  If undisturbed, they remain in this entangled state and can in principle travel arbitrarily far away from one another.  Now suppose we measure the polarization of one of the photons, and find the result is vertical; we’ve collapsed the wavefunction, and we now know with certainty that the other photon, at the other end of the galaxy, must be horizontally polarized.  By measuring the polarization of one photon, we’ve automatically determined the state of the other one; apparently this wavefunction collapse must happen instantaneously, faster than even the speed of light!

This idea of entanglement and its “spooky action at a distance” was intended to demonstrate the ridiculousness of the Copenhagen interpretation of the quantum theory, but in fact it has been verified in countless laboratory experiments.  Furthermore, E, P & R’s counter-explanation — that the polarizations of the photons are well-defined on creation, just “hidden” — has been demonstrated to not be true (though intriguing loopholes remain).  It has also been shown that entanglement is consistent with Einstein’s special relativity.  Although the collapse of the wavefunction can occur instantaneously, it is not possible to transmit any information this way, due (in short) to the random nature of the process.

We’ll get to the relevance of entanglement in a moment; we still need one more piece of the puzzle before we can discuss the “everything is connected” video, namely Pauli’s exclusion principle.  As we have noted, the introduction of the quantum theory answered many questions, but raised many more.  Among other things, the quantum theory predicts that electrons exist only in particular special and discrete “orbits” around the nuclei of atoms.
This idea was first introduced in the Bohr model of the atom, as illustrated below: An electron in a hydrogen atom can only exist in certain discrete stable orbits, labeled in this picture by the index n.  Light is emitted from an atom when it drops from a higher energy (outer) orbit to a lower (inner) orbit.  The existence and nature of these discrete orbits is explained by the wave properties of matter: electrons form a “cloud” around the nucleus, rather than orbiting in a well-defined manner. But the wave nature of matter also raises a new problem: electrons are now somewhat “squishy”!  In larger atoms with multiple electrons orbiting the nucleus, it was readily found that only a finite number of electrons can fill each orbital position/energy level.  One is naturally led to wonder why all the electrons don’t just fill the lowest energy state of the atom, the “n=1” state; because the electrons are wavelike and “squishy”, there doesn’t seem to be anything prohibiting this. This was one problem that Wolfgang Pauli (1900-1958) concerned himself with.  The answer he developed became known as the Pauli exclusion principle: no two identical fermions can occupy the same quantum state.  “Fermions” include electrons, protons and neutrons: the constituent parts of ordinary matter.  Under the Pauli principle, electrons cannot all pile into the ground state of an atom.  Because electrons possess intrinsic angular momentum (“spin”) which can either be “up” or “down”, and this is part of the electron’s quantum state, two electrons can fit in the ground state with the same energy but with different spins. Keep in mind that the Pauli principle applies to the complete state of an electron; this potentially includes its energy, its momentum, its spin, and its position in space.  Any property of a pair of electrons that can be used to distinguish them counts against the exclusion principle. Now we’ve hopefully got enough information to understand what Cox is trying to say in the video linked above.  Let’s dissect it one step at a time: For example, in this diamond, there are 3 million billion billion carbon atoms, so this is a diamond-sized box of carbon atoms. And here’s the thing, the pauli exclusion principle still applies, so all the energy levels in all the 3 million billion billion atoms have to be slightly different in order to ensure that none of the electrons sit in precisely the same energy level; Pauli’s principle holds fast. This is a well-known and accepted property of matter.  The electrons in a piece of bulk material are all “squashed together”, just like the multiple electrons in a complex atom are all squashed together.  In an individual atom, the electrons must stack up into the different quantum states (different energies, different spins) that are permitted by the electron/nucleus interaction.  In a bulk piece of crystal, a similar argument applies: there are a large number of permissible quantum states allowed, in which electrons are “spread out” over the size of the crystal; Pauli’s principle indicates that each electron must be in a different state, and they end up filling a “band” of energies. But it doesn’t stop with the diamond, see, you can think that the whole universe is a vast box of atoms, that countless numbers of energy levels all filled by countless numbers of electrons. Here’s where things start to go off the rails for me, and it seems like a dirty trick is being pulled.  
In a crystal, there are a large number of strongly-interacting electrons packed together, and it is natural — and demonstrable — that the wavefunctions of the electrons spread out over the entire bulk of the crystal, with the sides of the crystal forming a natural boundary.  But jumping to the cosmological scale, we don’t “see” electrons whose wavefunctions stretch over the extent of the universe — our experiments show electrons localized to relatively small regions.  Even if we treat the universe as a big box — and it’s unclear that this is even a reasonable argument to make — the behavior of electrons in the “universe box” is really, fundamentally different from the behavior of electrons in a “crystal box”.  I think that Sean Carroll over at Cosmic Variance was saying something very similar when he notes, “but in the real universe there are vastly more unoccupied states than occupied ones.”  That is: in a crystal, the electrons are “fighting” to find an unoccupied energy level to occupy, like a quantum-mechanical game of musical chairs.  Over the entire extent of the universe, however, there are plenty of open energy levels — much like finding chairs at a Mitt Romney event in Michigan.

Now we’re really getting into trouble.  In a crystal, where the electrons are all essentially “smeared out” over the volume, the energy levels must necessarily split.  But electrons in the universe don’t seem to be smeared out in the same way.  It would seem to me that electrons separated widely in space — around different hydrogen atoms on opposite ends of the universe, for instance — would be perfectly well distinguished by their relative positions, and not need to have energy level splits.  More on this in a moment.

Now the explanation has actually made the leap into being simply wrong!  We have noted that, with entangled quantum mechanical particles, it is possible to instantaneously modify the wavefunction of one of the entangled pair by manipulating (measuring) the properties of the other.  But, as we noted, nothing physical can be transmitted via this faster-than-light wavefunction collapse.  Cox specifically says in this lecture that heating the electrons in his piece of diamond instantly changes the energy levels, i.e. the energy, of the electrons across the universe!  A change in energy is a physical change of a particle, and this is specifically forbidden by the laws of physics as we know them.

Another thought came to me as I was reading this, and I found that it was already stated by Tom over at Swans on Tea.  If all the electrons in the universe necessarily have different energies, then they are always in different quantum states — the Pauli exclusion principle would become irrelevant!  It would seem to imply that we could pile an arbitrary number of electrons into the ground state of a hydrogen atom, although they would have slightly different but experimentally indistinguishable energies.  Obviously, we don’t see this.  There may be a problem with this argument, as well, but it illustrates (as Tom says) that broad-reaching statements about atomic energy levels end up having potentially more implications than one would at first think.

As stated, Cox’s argument is really incorrect, and violates relativistic principles.  One can argue, in his defense, that this is a consequence of trying to simplify things for a popular audience, and that he really meant something a little more subtle.  However, on an undergraduate physics page he makes a similar argument, and linked to it in defense of his lecture.
Imagine two electrons bound inside two hydrogen atoms that are far apart. The Pauli exclusion principle says that the two electrons cannot be in the same quantum state because electrons are indistinguishable particles. But the exclusion principle doesn’t seem at all relevant when we discuss the electron in a hydrogen atom, i.e. we don’t usually worry about any other electrons in the Universe: it is as if the electrons are distinguishable. Our intuition says they behave as if they are distinguishable if they are bound in different atoms but as we shall see this is a slippery road to follow. The complete system of two protons and two electrons is made up of indistinguishable particles so it isn’t really clear what it means to talk about two different atoms. For example, imagine bringing the atoms closer together – at some point there aren’t two atoms anymore. You might say that if the atoms are far apart, the two electrons are obviously in very different quantum states. But this is not as obvious as it looks. Imagine putting electron number 1 in atom number 1 and electron number 2 in atom number 2. Well after waiting a while it doesn’t anymore make sense to say that “electron number 1 is still in atom number 1”. It might be in atom number 2 now because the only way to truly confine particles is to make sure their wavefunction is always zero outside the region you want to confine them in and this is never attainable. We can try and explain this more elaborate and detailed argument pictorially for two electrons.  What Cox seems to be espousing here is essentially that two electrons naturally evolve into an entangled state after some period of time.  How this would work: we start with two electrons (labeled “red” and “blue” for clarity, though they are in reality completely indistinguishable particles) surrounding two different hydrogen nuclei.  Let us suppose they start spatially separated, as shown below: Because the red electron’s wavefunction stretches into the domain of the blue atom, and the blue electron’s wavefunction stretches into the domain of the red atom, as time goes on it becomes increasingly likely that the red and blue electrons have switched places.  The wavefunctions may evolve to something like this: Eventually, after sufficient time has passed, each electron is equally likely to be in either atom, which we crudely sketch as: That is, we expect the wavefunctions to be identical for the two electrons.  But they can’t be identical, according to the Pauli principle!  Therefore something else must shift in the wavefunctions to make them distinguishable — one ends up with a slightly higher energy, one ends up with a slightly lower energy. What we’ve got here is what I imagine would be considered a form of entanglement: we know with certainty that there is only one electron around each nucleus, but the specific location of either is undetermined. This idea isn’t particularly controversial: this is essentially what happens in crystals, as we have discussed, and what happens to the electrons in molecules or otherwise interacting atoms.  But this is just a description for two atoms — can we make the same sort of arguments on a universal scale?  Here I have two problems.  The first is that there are too many questions that get raised when trying to extend this to this degree, among them: • Time.  How long does it take to get such an entanglement between two electrons?  
For two electrons next to one another, I imagine it would be nearly instantaneous, but for two electrons separated by light-years?  I’m guessing the period of time is very, very large, which brings me to my next point…

• Stability.  It is not easy to produce significantly entangled photons in the laboratory, and it is hard to maintain that entanglement.  Keeping two particles entangled for long periods of time is experimentally nontrivial, due to external interactions with other particles: in essence, our quantum system is being continually “measured” by outside influences.  Do widely separated electrons ever form an appreciable degree of entanglement?  Completely unclear, and rather doubtful.

• Infinity.  Part of the argument for this universal entanglement is built on the idea that the spatial wavefunctions of electrons are of infinite extent, i.e. they are spread out throughout all of space.  Indeed, stationary (definite energy) solutions of the Schrödinger equation are infinitely spread out, but I would use a lot of caution in drawing concrete conclusions from that observation.  In optics, classical states of “definite energy” are monochromatic waves, which are used all the time to make optics calculations convenient.  It follows from the mathematics that monochromatic waves are always of infinite extent, just like wavefunctions, but here’s the thing: nobody with any sense in optics assumes that this infinite extent is a physical behavior that one should derive concrete physical conclusions from.  A monochromatic wave is just a convenient idealization of the real physics***.

A natural question to ask at this point: isn’t physics all about deriving general conclusions from simple physical laws?  Why are you being more cautious with Pauli, and quantum mechanics, than you are with, say, gravity and electromagnetism?  Part of the difference is, as we have noted above, that we simply do not understand the quantum theory well enough to boldly derive such universe-wide conclusions.  An even more important difference, though, is that I can see the universal consequences of gravitation and electromagnetism experimentally, whereas it is not clear what consequences, if any, this “universal Pauli principle” provides.  Which brings me to my final observation; returning to Cox’s lecture notes:

The initial wavefunction for one electron might be peaked in the region of one proton but after waiting for long enough the wavefunction will evolve to a wavefunction which is not localized at all. In short, the quantum state is completely specified by giving just the electron energies and then it is a puzzle why two electrons can have the same energy (we’re also ignoring things like electron spin here but again that is a detail which doesn’t affect the main line of the argument). A little thought and you may be able to convince yourself that the only way out of the problem is for there to be two energy levels whose energy difference is too small for us to have ever measured in an experiment.

Emphasis mine.  HOLY FUCKING PHYSICS FAIL.  Here Cox explicitly acknowledges that his “universal Pauli principle” consequences are something that not only cannot be measured today, but in principle can never be measured, by anyone.

At its core, physics is all about experiment.  Experimental tests of scientific hypotheses are what distinguish physics (and all science, really) from general philosophy and, worse, mysticism and pseudoscience.
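For what it is worth, one can put a number on how small the energy splitting Cox invokes would be. The sketch below (Python; it assumes, purely for illustration, that the splitting scales like the exponential tail e^(-d/a0) of a hydrogen 1s state, a crude overlap argument rather than a real calculation):

import math

a0 = 5.29177e-11   # Bohr radius, m
E0 = 13.6          # hydrogen energy scale, eV

for d in (1e-9, 1e-6, 1.0):   # separations: 1 nm, 1 micron, 1 m
    # splitting ~ E0 * exp(-d/a0); report log10 since the numbers underflow
    log10_split = math.log10(E0) - (d / a0) / math.log(10)
    print("d =", d, "m: splitting ~ 10^%.3g eV" % log10_split)

Already at a micron the nominal splitting is smaller, by thousands of orders of magnitude, than any energy resolution physics can contemplate, which is exactly the “not even wrong” territory discussed below.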
Consider the application of Cox’s conclusion to a few other situations:

• Astrology is the influence of the stars upon human beings via quantum mechanical influences whose energy difference is too small for us to have ever measured in an experiment.

• Homeopathy is the lingering effect of chemical forces on water via quantum mechanical changes to the water whose energy difference is too small for us to have ever measured in an experiment.

• The human soul exists materially in the quantum wavefunction of a human being, manifesting itself in changes whose energy difference is too small for us to have ever measured in an experiment.

In my eyes, there really is not much difference between the various pseudoscientific shams being propagated in the world today and the logical argument of a “universal Pauli principle”. (When I mentioned this argument to a colleague, he said, “Ask him how many angels dance on the head of a pin.“) In a sense, the whole discussion of this blog post has been a waste of time: my theoretical counterarguments may be reasonable or they may not; we can never draw any conclusion about the reality of this universal principle because it lies outside our ability to ever detect it.

I tend to be rather forgiving of using simple, arguably misleading, models to introduce physical principles.  For instance, I’m a defender of the use of the Bohr model as a good tool to expose students to quantum ideas in a simple and historical way.  My criterion, however, is this: a model or explanation must, as a whole, guide students in the right direction towards the greater “truth”, such as it is in science.  The “universal Pauli principle” fails this on two counts: it gives a false impression of the importance of completely unexperimental conclusions, and it opens the door to pseudoscientific nonsense.  Nevertheless, Cox doubled down on his statements in a Wall Street Journal article, somehow arguing that his original argument is a necessary evil in a world where the public needs to be excited about science.

In a sense, though, we have ironically come full circle on Pauli.  It was none other than Wolfgang Pauli who coined the phrase “not even wrong” to describe theories that cannot be falsified or cannot be used to make predictions about the natural world.  It has been most recently used to describe string theory, with the argument that the predictions of string theory cannot be tested with any experimental apparatus that exists.  However, string theory can at least in principle be tested, albeit not today, whereas it seems that the “universal Pauli principle” described by Cox has no measurable consequences, in principle, and is immune to any test imaginable.  It serves no useful purpose in the world of physics, and as we have noted there are many objections to it actually working the way it is advertised to work.

I was recently thinking of the many advantages of the explosion of science communicators on the internet, and one that struck me is that we no longer have to rely on a single or a small number of “authority figures” to tell us what is right and wrong in the scientific world.  This entire fiasco emphasizes how important this new abundance of voices will be in an ever more complex universe.

With no hypothesis to test, and no measurable consequences for science, I conclude my thoughts on the “universal Pauli principle”.
Requiescat in pace, “omnia conjuncta est”¹

* If someone wants to get in a pointless pissing match of who is more of an “armchair physicist” based on CVs, though, I’m your huckleberry.

*** Curiously, in arguing against the use of the spatial distribution of a quantum wavefunction in providing “distinguishability” of electrons, Cox uses a “momentum eigenstate” — a particle of perfectly specified momentum and infinitely uncertain position.  This is pretty much the equivalent of a monochromatic plane wave in optics, which again nobody would use as a realistic example of how the world works.

¹Thanks to Twitter folks for suggesting the translation of the latter phrase.  Alternate translation: “omnes continent”.

Postscript: A couple of friends (including @minutephysics) have pointed out that none of the discussions so far have included quantum field theory, which makes things even more complicated (non-conservation of particle number, for instance).

55 Responses to Pauli, “armchair physicists”, and “not even wrong”

1. Also, I suppose that sign off makes you the Ezio Auditore of physics blogging?

2. This post rocks on so many levels…though, I remain skeptical as to the degree with which it rocks.

3. Phil says: Hmmmm. This is interesting stuff. I don’t buy Prof Cox’s argument, but I don’t buy your entire rebuttal above. It’s hard to argue against the infinite extent of the wavefunction of a particle. Forget about Pauli for a moment, and forget about crystals. Let’s talk about a massive bunch of Hydrogen atoms a billion light years away undergoing fusion and releasing a massive bunch of high-energy photons. Is the probability that one of these photons will interact with the wavefunction of the detector elements in one of our super duper telescopes (a billion years later) vanishingly small? No, we can see them with a big enough telescope. What about if there were half as many? A tenth? A hundredth? A millionth? A 1/6.0221415 × 10^23th? What if there were just two Hydrogen atoms? We are left with the conclusion that the wavefunction never truly deteriorates to zero. There is no magic cut-off point. Strap enough similar events together, and they will have a measurable effect (eventually) on the other side of the universe, proving the non-vanishing probability of the constituent events. As for Pauli’s exclusion principle, it’s my understanding (I’m no expert) that this applies to electrons only after they have become bound to the conglomeration of orbitals making up Prof Cox’s crystal. So an electron with a peak probability of being found on the other side of the Universe is not going to concern itself with taking on a unique energy level unless it becomes bound. I would maintain that it DOES have a finite chance of becoming bound though, no matter how far away it is. Of course, its ‘probability wave’ would first need to have travelled at the speed of light to reach the crystal in the first place.

• Phil: thanks for the comment! Admittedly I’m a little unsure about the wavefunction extent, though it seems that, in a very simple sense, it must still be subject to relativistic effects. Otherwise, there is a non-zero probability that an electron in a definite position (after measurement) here is instantaneously on the other side of the universe. There are a lot of other arguments that could potentially be made, though, especially once one brings in the full relativistic field theory, so I can’t be sure that some other complications involving many-particle interactions muck things up.
Your hydrogen atom argument has a problem, in that when you talk about the photons being emitted by a hydrogen atom, you’re talking about the “wavefunction” of the photon, not the wavefunction of the hydrogen atom! There is certainly the possibility of long-range interaction via photons and presumably gravitons, and presumably there is a long-range correlation between the atom and the photon, but this doesn’t imply the atom’s wavefunction itself has “gone the distance”. That seems to be part of the problem, provided you assume that the electrons were measured in a definite state at some point of their existence, as noted above!

• Phil says: The point was to illustrate that current QFT does not impose any limit on the spatial extent of a free particle’s wavefunction. I am not sure whether being bound to atomic matter is supposed to make the wavefunction actually go to zero (as opposed to just really small) outside a certain zone… but if not, I still can’t fault that aspect of Prof Cox’s argument (I still don’t buy the non-local part though).

4. J Thomas says: You provided a general explanation about a whole lot of things, and I want to use my imagination from that. As I see it, classical physics stopped making sense around the turn of the 20th century, and over time it was replaced by newer stuff, notably quantum mechanics. The new view was a statistical one which was never intended to make sense. Statistical ideas are notoriously confusing, and people get confused about causality, the extent that statistical results apply to individual cases, etc. I’m curious whether an alternative classical view can make sense. QM would still apply to statistical experiments — probably all of them — but with a model that made sense it might be easier to apply QM. This looks like a good place to start. People have argued since Newton whether light was made of particles or waves. Then we got the same results with electrons. Ideally we could get a model which explained everything which looks like particles as waves, and everything which looks like waves as particles. Then we could choose either approach and make it work. The electron detectors detect quanta. Either there is a detection or there is not. So even if the electrons were behaving exactly like waves, they would be detected as particles. Is there a way that particles could diffract like waves? I will describe one way that could happen. I don’t claim it could happen this way with electrons, but if there is one way that particles can do it, there might be another way that actually fits the data. So any way that gets particles to diffract is a start. First, I need particles that spin in unison. The experiment is set up so they all spin clockwise around their up axis. And they are somehow asymmetrical so it matters how far along they are in their rotation. A particle turned around to 90 degrees isn’t the same as one that has rotated to 270 degrees. And then we have limited detectors. I imagine a detector which can’t detect a single particle but only the sum of say 6 particles. If it gets 6 particles in a row with spin between 0 and 180 degrees, it registers a hit. But if any one of the 6 is between 180 and 360 degrees, the hit is lost and it then will register a hit if there are 5 more between 180 and 360 degrees. The particles go through a slit and on to the detectors. When they go through the slit their directions are randomized, but their spins are still synchronized, each one is at 0 degrees then.
At some distance to the left side, particles that come near the right side of the slit will travel farther than particles that come near the left side of the slit, and they will rotate 180 degrees out of phase. All of them will be between 0 and 180 degrees. So there will be lots of detections. A bit farther to the left, half of them will be one kind and half the other. There will be very few detections. Etc. With the right kind of particles and the right kind of detectors, you can get diffraction. It does not matter that only one particle goes through the slit at a time, provided the detector state changes whenever the next particle arrives. Particles can appear to diffract. There are probably multiple ways to get diffracting particles. Perhaps one of them might fit the data for light or for electrons. • Thanks for the comment! Actually, the wave nature of particles seems pretty airtight at this point, especially after the theoretical work of John Stewart Bell on Bell’s inequalities and the experimental verification of this, which strongly demonstrates that no “local” theory of particle behavior can reproduce the observed properties of quantum mechanics. It’s interesting to note, however, that in the history of optics, physicists were even able to explain interference effects and polarization effects to their satisfaction using a particle theory. It was only after diffraction was successfully explained using waves, and led to verifiable predictions, that the wave theory really took off. • J Thomas says: I can explain diffraction myself using particles, given a fistful of assumptions. I’m not sure I have the distribution right but I can get the distribution right. If there are two hypotheses that both explain the phenomenon, how do you decide which to accept? I say, as long as both explain the phenomenon there is no need to decide which to accept. I have not studied the details of Bell’s theorem carefully enough to judge it, but I rather doubt it. I’ve seen this sort of thing a lot in probability theory. Start with a collection of assumptions, one of which is wrong. Reason from the assumptions to a conclusion that looks astounding. Argue that the astounding conclusion must be true. But usually one of the original assumptions was wrong instead. Typically people assume that their sampling is not biased, for example, and it almost always is. 5. csrster says: Peierls also discusses this in his “Surprises in Theoretical Physics” in the context of two electrons at the opposite end of a metre-long metal bar. His conclusion is the same as yours – that if the electron-states are spatially localised to opposite ends of the bar then, by definition, they are in distinct quantum states and the Pauli principle is automatically satisfied regardless of their energy. Iirc he takes the argument further to make some quantitative estimates, but I forget the details. I like the Peierls example better than the “whole universe” discussion because i) one can imagine scaling smoothly from a small metallic crystal to a macroscopic metal bar and ii) a one metre bar is effectively as big as the universe anyway, seen on the quantum scale. 6. J Thomas says: OK, I’ve seen the video, the blog response, and the comments on that blog. I have some conclusions. 1. If you want somebody to dispassionately discuss science and ways to improve his presentation of science, do not title your criticism “Brian Cox is full of **it”. That does not promote careful dispassionate thought, at least among primates. 
A primate that sees this will tend to interpret it as poo-flinging, and will tend to fling poo back. 2. Quantum Mechanics (QM) is not intuitive. It is similar to statistics and probability theory that way. There is a mixup between what’s true about the reality, versus what you know about it. We describe our knowledge and our ignorance and then look at what we still know when things change. The transformation rules are complex and unintuitive. It’s hard. Ideally we design our language to make it easy to think about stuff. The language does some of the thinking for you. Stupid things sound wrong. We’re far from that with QM. The truth sounds wrong. Stuff that sounds plausible pretty much has to be wrong — if it sounds right then it can’t be right. We try to design our mathematics so that the right answers just fall out easily. This has mostly not been done for QM yet. It’s hard to do the math right. Given the problems, doesn’t it make sense to display a whole lot of tolerance? 3. Various people say that the discussion is all first-year stuff. But they disagree. It looked like they chose sides quick enough, and they agree with other people on the same side. But I strongly suspect that for many simple first-year problems stated in simple english, 10 physicists would give 5 or more answers. This stuff is *hard*. Figuring out what the question is when it’s stated in English is even harder. More later. 7. J Thomas says: This is not really off topic. One time a friend told me that the Monty Hall solution was wrong. He argued it out. He said, suppose you come look at the Monty Hall problem at the last minute. There are two doors that aren’t open and one that is open. The right answer is one of the two doors. The probability is .5 that it’s either one. How is that different from the guy who saw the door get opened? For him it was 1/3 for each door, and then he saw that it wasn’t door A. He knows the same thing you know, it’s one of the two that are left. So it’s 50:50. It took me weeks to persuade him. I wrote a short computer program to model the problem, and showed him the answer came out 2:1. He said I must have done it wrong. I tried to tell him that the guy who saw the door opened does know something the other guy doesn’t know. That different people can come up with different statistics, and both be right as far as they know. “No, there’s a real probability and if somebody guesses wrong what it is that just means they’re wrong.” I guess he was right on that one, but I was right too. I finally showed him that Monty was not behaving at random. If Monty instead opened any of the three doors no matter where the prize was, and the game was over if he opened a door that had the prize, then it did come out 50:50 between the two that were left. But it wasn’t easy to show him how that mattered. Probability theory is *hard*. Professionals get it wrong sometimes. Tiny details can change the whole problem. And QM is inherently probabilistic. If you have to argue about it, for gods sake don’t do it in English. You haven’t got a chance unless you show the math. And yet, that’s so very tedious…. Indeed, that’s perhaps the major issue with all of this discussion — there’s no way to mathematically model the wavefunction properties of the entire universe to draw the conclusions being made! Even worse, it was conceded in the original talk *and* in the undergraduate lecture that the conclusions being drawn have absolutely no observable physical consequences! 
I can almost — almost — understand making such assertions in a popular physics lecture, provided they’re couched in appropriate caveats (“it is possible to view this result as having the surprising consequences of…”), but telling undergraduate students that unmeasurable, unprovable effects of no consequence are important is really, really doing a disservice to people who want to be physicists. If I gave a talk at a physics meeting where I said, “the following hypothesis has no consequences for physics, explains no unanswered questions and cannot be detected ever”, I would rightly be tarred & feathered, at least metaphorically.

• J Thomas says: I think I’ve seen this before, though I have no idea where to find links. A long time ago people believed that electric fields and gravitational fields were instantaneous. And somebody in a lecture to laymen said that this meant that everything you did would have some instantaneous effect, no matter how tiny, out to the farthest star. But then they decided that those fields take time to act. So somebody making a similar lecture said that everything you did would eventually have some effect, no matter how tiny, out to the farthest star. And now here’s somebody saying the same thing from QM. It doesn’t sound like it means anything beyond feel-good talk to laymen. Oh. Physics undergraduates. Hmm. Are they physics undergraduates who do the math? If so, it probably won’t hurt them much at all. Are they physics undergraduates who don’t do the math? Then they’re laymen who aren’t really learning much, and it won’t matter until they learn the math. I swear, when I squint a little and let the details fuzz out, this seems a whole lot like arguments I used to hear Baptists make. “Did you hear him? He said it was OK to try to follow Jesus Christ’s example!” “What’s wrong with that? Jesus said to follow him, didn’t he?” “But nobody can be like Jesus, Christ was God and sinful human beings can’t be God.” “Well, what’s wrong with trying?” “Wash your mouth out with soap! If you try to be like Christ you’re guilty of the sin of pride! You don’t understand the least little thing about theology and you have the nerve to ask questions! This idiot was telling people it was OK to be sinful! He told them to try to follow Jesus Christ’s example, and you don’t even see what’s wrong with that! You’re just ignorant, but he’s preaching evil and he hasn’t the right!”

8. yoron says: If you debate you will argue 🙂 No news there, the problem comes when one gets a little too enthusiastic in one’s arguments. I like your blog for several reasons: open-mindedness, an urge to try to present it right, and very good knowledge of what you discuss. And I don’t mind either of you guys getting it ‘wrong’ now and then. That’s what open minds do: they speculate and find connections and insights, sometimes wrong, but even when right they are forced to present it in painstakingly sound mathematical notation, as stringently and clearly as possible. Einstein took a lot of ‘wrong turns’ in his hunt for relativity, but he got it right in the end, and also had to invent/learn new ways of presenting it mathematically as I remember it. So ‘fight’, with a smile; life is too short to take it seriously. Isn’t it so that in a Big Bang it’s possible to assume an ‘entanglement’ of it all? If I combine this with a later ‘instant’ inflation creating ‘distances’, then all particles ‘untouched’ by their isolation still could be entangled?
Eh, maybe 🙂 Also it seems to me as if both a wave function and relativity question ‘distance’, although from different points of view naturally? Not that I doubt the concept, ‘distance’ is here to stay, but what does it mean? And now I’m weirder than any of you 🙂 I found it quite nice reading and I hope that you, as well as all the other guys involved, get something fruitful out of your discussion in the end.

• I agree that there will always be some arguing! My PhD advisor is one who essentially told me, “When we discuss physics we will get into terrible fights. That’s okay, though, because we’ll all be friends in the end.” In fact, I had some knock-down, drag-out fights with him over physical principles (metaphorically speaking, of course), and we’re still good friends! There’s no doubt that Tom’s original post was rather tactlessly titled (as J Thomas also noted below); however, Tom’s post raised some valid questions, and certainly my comment was about as polite as one could be (“eyebrow-raiser” doesn’t seem to me to be a horrific insult). The petty sniping that Cox responded with to me and Tom (not understanding undergraduate physics) did not forward the discussion at all, and was really just mean-spirited. And that is the sort of thing that pisses me off. >:)

9. yoron says: Yeah, things happen. I looked at Swans on Tea’s blog too 🙂 What’s good with a discussion like this one is that people present their ideas, and interpretations, and so put a lot of otherwise strange concepts into different ‘lights’, making me see how it is thought to work in new ways. Sort of a ‘holistic perspective’ reading the comment section there, at least for me. And both you and Swans belong to my favorite bloggers. Just keep on 🙂

10. yoron says: Thinking of it. The definition of entanglement is truly confusing, maybe you have discussed it? Probably you have, and I seem to think I get what it should be at times, and then, some year later, I find myself wondering if I’ve understood the definition of an entanglement at all? You have the simple way, by down-converting a photon into two. That one is easy to understand. But then you have thingies ‘bumping’ into each other for example, sending momentum into each other, and of course the ‘indistinguishable electrons’ etc. And to make my headache even worse you can also find those defining it as if you have an entangled ‘pair’ there can be no ‘wave function’ breaking down until both are measured, if I now got that one right? Been some time since I discussed that. It’s worth going through, if you haven’t?

• I need to write a more detailed post on quantum stuff, perhaps as a “basics” post or two. In entanglement, though, the idea is that the wavefunction breaks down as soon as one of the particles is measured. This results in the “instantaneous collapse of the wavefunction” that so upset Einstein, Podolsky and Rosen. I’ll go into it in more detail soon; in the meantime, you can check out the post on entanglement at Galileo’s Pendulum.

11. yoron says: This is my view, and I’m trying to keep it simple. “As I said, a description I like was the one of ‘one particle’.
I can go with a ‘wave function’ describing it too though, as long as we then assume it to be in a pristine ‘superposition’ prior to the measurement, with ‘both sides’ falling out in the interaction/measurement, no matter whether the side not making that initial measurement will measure it later, or not.”

And here is DrChinese’s view: “Nope, generally this is not the case (although there are some complex exceptions that are really not relevant to this discussion). Once there is a measurement on an entangled particle, it ceases to act entangled! (At the very least, on that basis.) So you might potentially get a new entangled pair [A plus its measuring apparatus] but that does not make [A plus its measuring apparatus plus B] become entangled. Instead, you terminate the entangled connection between A and B. You cannot EVER say specifically that you can do something to entangled A that changes B in any specific way. For all the evidence, you can just as easily say B changed A in EVERY case! This is regardless of the ordering, as I keep pointing out. There is NO sense in QM entanglement that ordering changes anything in the results of measurements. Again, this has been demonstrated experimentally. My last paragraph, if you accept it, should convince you that your hypothesis is untenable. Because you are thinking measuring A can impart momentum to the A+B system, when I say it is just as likely that it would be B’s later measurement doing the same thing. (Of course neither happens in this sense.) Because time ordering is irrelevant in QM but would need to matter to make your idea be feasible.”

And if I get the idea right here? You might say that it’s a consequence of SR, and the possibility of different observers getting different ‘time stamps’ for ‘A’ relative to ‘B’, so there might be no ‘universal order’. Instead it will be defined locally. Which is a very interesting thought, if correct. At least it’s the way I interpret it for now 🙂 I will follow that link.

12. J Thomas says: Imagine that you and your girlfriend make an agreement. You take the king of hearts and the queen of diamonds out of a deck of cards. You shuffle them around so nobody knows which is which, and you seal them into two envelopes. You each keep one of them, and you agree that in 30 years you’ll open the envelopes and look at them. It’s a romantic gesture. But 5 years later she dies, and she asks that her envelope be buried with her. After 30 years you open your envelope and see the queen of diamonds. You immediately know that her envelope has the king of hearts. But how can you know that? You haven’t dug up her grave and opened her envelope. The difference between this and Bell’s theorem is that Bell’s theorem says that in the QM case, the decision which card was which could not have been made when the envelopes were sealed. That decision was made when one of the envelopes was opened. And at that time two things changed, two things that might be light years apart or buried in separate graves. Probability theory doesn’t distinguish between things that have been decided — but are unknown — versus things that have not been decided yet. Somebody flipped a coin yesterday and you don’t know which face came up. Somebody will flip a coin tomorrow. Either way, assuming a fair coin, your best guess is 50% either way. If you have reason to think it’s a false coin that comes up heads 55% of the time, then your best guess is 55% heads either way. Once you find out the truth, then it isn’t 50% or 55%, you know. It’s either heads or tails.
The easy interpretation is that it doesn’t matter whether something is real but unknown versus not-decided-yet. Either way, you have your best guess now, and when you find out the truth you’ll know. There’s no point arguing whether the cards really are separated and in their envelopes, or whether a ghost magically paints the cards just before you open the envelopes, because there’s no way to tell which it is. Just use what you do know, which is first the guess and then the reality. But Bell’s theorem proves that it cannot be true that the truth is real but unknown. It has to be true that the state does not exist until two random but correlated states are created when one of them is observed. Without that proof, there is nothing special going on. Somehow, quantum mechanics is arranged so that it is impossible for the truth to exist but be unknown. That’s the part that’s hard to understand. What we need is a good simple explanation why, for example, two photons that are created to have complementary polarization, but whose polarization we don’t know, are not actually polarized any way in particular until we measure the polarization of one of them. QM says it can’t be true that their polarization state was set when they were created, and we only discover it later. QM says that in reality they have only a probability of polarization until it becomes real when one of them is measured. Why does QM make it impossible for the truth to be real but unknown?

13. yoron says: A nice description J 🙂 And one that I agree with too, and actually can understand. Your definition has been proven a lot of times: even though you know that there will be a complementary ‘polarization’ for ‘B’, you can’t define it until measuring ‘A’, or vice versa. As you can’t know what the polarization will be for either of the space-separated objects until measuring one of them, only that they will be opposite. That is, if I got you right? And it’s there my headache begins, although physics is on the whole a headache, mostly nice though. If you have a wave function describing a ‘particle’ or an entanglement, it must be your observation that ‘sets’ it. And if what you observe is separated in space, then what you observe of it should set it all. But I’m getting the impression that a ‘space-like’ separation, as in this entanglement case, allows me to state that no matter what I observe, ‘A’ or ‘B’, I could define it such that the observation I make has nothing to do with what ‘sets’ what. That’s why I like the ‘one particle’ definition better, because in that one it becomes meaningless to discuss ‘causality chains’; in such a definition that ‘wave function’ is a whole object, in which you ‘instantaneously’ set a state for ‘both’. But then I have SR of course, though it shouldn’t really matter there, should it? As no matter what ‘time stamp’ different observers will give, ‘A’ or ‘B’ first, it still has to be ‘one particle’s wave function’ getting set? Then again, I really need to look at it from first principles, and see if I really get it…

14. J Thomas says: Yoron, the following link (which Dr. Skull provided) is extremely unclear because, as a Wikipedia page, it incorporates lots of different ideas which disagree. It does include a quote from Jaynes. He was very good at statistics and probability theory, and had a lot to say about QM as a result. So, some random things have to happen late, near the time they are measured. But maybe others can happen early, at the time of the entanglement.
Which is true in the cases people are interested in? How can we find out?

15. yoron says: What I mean is that in some ways the question from SR becomes slightly metaphysical. Because if ‘the arrow of time’ is a local definition, which is how I see it, then that also will mean that any observation you make must be valid from your frame of reference, just as a Lorentz contraction should be for that speeding muon impacting on Earth. That another frame of reference will define it differently doesn’t invalidate the muon’s frame. But that is from the assumption of ‘time’ always being a ‘local phenomenon’, so invalidating the assumption of a ‘same SpaceTime’ for us all, time- and space-wise. But the ‘arrow’ is always a local definition as far as I can see. Even though you can join any other frame of reference to find the time dilation you observed earlier ‘gone’, from your new ‘local perspective’ in SpaceTime, that only states that relative to your life span, your clock never changes. The thing joining SpaceTime is a constant: light’s speed in a vacuum. And that is also what gives us ‘time dilations’ and the complementary Lorentz-FitzGerald contractions, well, as I see it 🙂

16. yoron says: Hmm, sorry… that was me explaining myself, not answering your post. I liked your citation on ‘deduction’. SpaceTime as a ‘whole experience’ I see as conceptual, described through diverse transformations between ‘frames of reference’, joining them into a ‘whole’, and so also becoming the exact same ‘deductions’ he described. If a time dilation is ‘true’ from your frame of reference, and differs from my frame’s observations, which we know to be true through experiments, then locality defines your ‘reality’. And it has to be real if one accepts Relativity. And that should mean that radiation is what joins us.

17. yoron says: Would you have an example, or a link for that? On the other hand you write “Somehow, quantum mechanics is arranged so that it is impossible for the truth to exist but be unknown.” I like ‘indeterminacy’; as a principle I find it rather comforting, reminding me of ‘free will’, in/from some circumstances. But I also have faith that we will find a way to fit it into a model where that indeterminacy becomes a natural effect of something else. Wasn’t that what Einstein meant too? Or did he expect ‘linear’ causality chains to rule everything? I’m not sure how he thought of it there. I know he found entanglements uncomfortable in that they contained this mysterious ‘action at a distance’. But Relativity splits into local definitions of reality as I think of it, brought together by Lorentz transformations, describing the ‘whole universe’ we observe through radiation’s constant. So from my point of view, ‘reality’ and the arrow both follow one constant, and that one will always be a local phenomenon. That simplifies a lot of things for me at last, although it makes ‘constants’ into something defining the rules of the game, and the question of a whole unified SpaceTime into something of a misnomer.

• Regarding hidden variable theories, the major distinction is between local and nonlocal ones. As I understand it (and I am admittedly not an expert on these controversies, so take this explanation with a grain of salt), EPR concerned itself with the idea of a local hidden variable theory (LHVT): that a particle’s properties such as momentum and spin are well-determined, and spatially localized to the particle. This is the concept that would be consistent with classical mechanics: localized and definite particle properties.
Bell’s theorem suggests that physical experiments *can* tell the difference between a local hidden variable theory and conventional quantum mechanics. Experimental results are inconsistent with the LHVT and consistent with conventional quantum theory, suggesting that an LHVT cannot be correct. There is still a possibility of a *nonlocal* HVT, however, in which the properties of a particle are definite but “spread out” in space-time in some manner. This is typically done by imagining that a definite particle exists coupled to a “guiding wave” that controls its motion. No experiments have been done to conclusively rule out a nonlocal HVT. This puts physicists in a bit of a philosophical conundrum: they can either accept QM, which requires throwing away determinism, or they can accept NLHVT, which requires throwing away causality (the theory requires faster-than-light influences between particles). As I understand it, most physicists at this point act under the assumption that conventional QM is the better interpretation, though the question is by no means solved. It is these sorts of controversies, BTW, that make me dubious of any attempt to extend simple quantum postulates to a universal scale without qualification.

• J Thomas says: “Regarding hidden variable theories, the major distinction is between local and nonlocal ones.” The big distinction is about causation that happens instantaneously at large distances. That’s spooky. A hidden variable theory that gives you instantaneous causation at large distances is no improvement, and presumably some of those can easily be tuned to give the same result as QM. They may be taking that too far. What if some local variables are set, and others are not? Then you could have some variables set locally, whose information travels at lightspeed or slower and is later revealed. But other things could be strictly probabilistic and could be set later. Then it might turn out that nothing spooky happens, and at the same time it could definitely be shown that some events cannot be determined by local variables.

Smith and Jones are physicists at the same university. They both own red Ferraris and it is impossible to tell the cars apart by satellite imagery. Smith lives to the north and Jones lives to the south. So it is predictable that whenever the satellite photos show a red Ferrari going south from the university, the other will go north. By satellite studies we cannot tell whether these Ferraris have hidden variables (namely Smith and Jones) or whether they are merely entangled. Maybe the choice of which of them will go south is never made until one of them actually turns, and that information is then instantaneously transmitted to the other one so it knows to turn the opposite direction. (But *we* know that it’s really Smith and Jones, the hidden variables, and they don’t decide completely at random while on the road which will go home to Mrs. Smith and which to Mrs. Jones.) But as it turns out, there are nine red Ferraris at the university. Two homes are more or less to the northeast, two to the northwest, three to the southwest, and two to the southeast. So when one red Ferrari goes south you can’t be sure the next one will go north, though the physicists are somewhat likely to leave around the same time because of departmental meetings etc. In reality, it occasionally happens that Smith and Jones both go to Smith’s house. Sometimes a paper they are working on together is approaching a deadline and they work late into the night.
Occasionally they go bowling together. Just possibly they might occasionally swap wives — but only after near-instantaneous phone communication with both wives to get their approval. Then the red Ferraris go in opposite directions, but by satellite imagery you’ll never know…

These various links do not at all make it obvious why local hidden variables are impossible, or even that there is an example where the result has to be spooky because local hidden variables cannot apply to that example. They give some of the details of the arguments, but do not show how those arguments fit together to forbid the existence of any local hidden variables, though they claim there can never be any local hidden variables.

• Phil says: Non-locality itself is widely accepted, non?

18. yoron says: Interesting, I will have to relearn this again. Actually entanglements are the ‘spookiest’ thing(s) I know of, and give me one of the biggest headaches too 🙂 I will need to reread it all. But if we take the simple definition, when you down-convert one ‘photon’ into two ‘entangled’ ones, we already have proof that they ‘know’ each other’s spin, instantaneously. Assume they are ‘the exact same’. How does that fit with ‘locality’? Both ‘the arrow’ and radiation are local phenomena to me, always the same locally. Maybe the arrow is another name for ‘motion’, meaning that if it is local it has to spring from something ‘jiggling’. But it doesn’t answer ‘time’, as a notion from where that ‘jiggling’ can come to exist, if you see what I mean? To have a ‘motion’ you need an arrow, as I see it, as it is through the arrow that ‘motion’ finds its definition. Maybe entanglements are what the universe really is? ‘Motion’ becoming the way we observe it through? Hmm, and now I’m getting mystical again. Sorry, I will have to blame it on it being ‘the day after’, after Friday I mean 🙂

19. yoron says: Another thing that’s confusing is this statement: “The violation of Bell’s inequalities in Nature implies that either locality fails, or realism in the sense of classical physics fails in Nature, or both. When one looks at other types of data, it becomes totally unequivocal that locality holds while classical realism fails in Nature.” by Luboš Motl. I can see that locality holds, if I by that mean what we measure directly; it’s sort of obvious. But isn’t an entanglement a ‘space-like’ separation, although an ‘instantaneous’ correlation, in an observation? And by that also a ‘non-local’ phenomenon? The point being that you can’t know what the polarization will be for either of the space-separated objects until measuring one of them, only that they will be opposite. That is, there is no ‘standard’ to any entanglement’s polarization other than this ‘oppositeness’ we expect. That you can’t say which state/polarization ‘A’ has until measured? And as you can’t specify that, you also can prove that two separated ‘particles’ in space then must ‘know’ each other, or is it something more I’m missing there?

• J Thomas says: Yoron, we cover the same material repeatedly. When two entangled photons are created, we know that their polarization is related but we don’t know what the polarization will be. There are two obvious ways to look at that. Maybe the polarization is set when they are created, and we don’t know what it is — it is a “hidden variable”. Or maybe the polarization is not set until one of them is measured, and then the other one instantaneously gets its polarization set too, in violation of lightspeed etc.
At first sight there’s nothing that says one of these ideas is better than the other. But somehow physicists know that the first is impossible and the second must be true. I have not yet seen any explanation of the argument for why the first is impossible, but a whole lot of physicists say they know it’s impossible according to quantum theory, and also there are experiments which show it. I don’t understand it yet.

• Long term, I’m going to try and go through and understand in more detail the whole “hidden variable” argument and blog about it, so hold tight! I should note that, as said earlier, “hidden variables” are not completely excluded, but Bell’s theorem and the battery of experiments more or less confirming it strongly suggest that local hidden variable theories are inconsistent with observations. Physicists who really study this stuff carefully haven’t excluded the possibility of nonlocal hidden variable effects.

20. yoron says: Yes, I know. Wasn’t it John Bell that first proved statistically that there could be no classical ‘hidden variables’? Bell’s theorem. I don’t believe in any FTL communication myself, not macroscopically anyway. But neither am I sure what a ‘distance’ is, and that goes for both QM and ‘SpaceTime’.

21. yoron says: Yes, I’m afraid we do. It’s me trying to see it from the start, and keeping it as simple as possible. Bell’s theorem is what proves it statistically. It states that a classical solution isn’t possible, assuming local ‘hidden variable(s)’, although it still leaves open the possibility of non-local variables, as FTL ‘communication’. But FTL would be a violation of causality, in which we would get improbable effects from some frames of reference, aka you answering me before I’ve even asked, according to relativity. So, macroscopically FTL is a strict ‘no-no’ as far as I understand it. That leaves us the question how the geometry of the universe can change with relative motion and mass, relative to the observer? And that’s where I wonder, as I don’t expect FTL to be allowed macroscopically. But then again, I may all too easily be wrong 🙂

22. yoron says: Sorry, my first reply didn’t show up until after I had posted the second?

23. yoron says: Eh, by ‘macroscopically’ I just meant ‘SpaceTime’ here, and relativity, nothing more. We have two views: one is QM, the other is Einstein’s relativity. Some physicists try to join them, most maybe? I’m not sure there. One discusses ‘superpositions’ etc., and statistics creating probabilities. The other discusses linear functions mostly, involving macroscopic as well as microscopic causality chains, following an ‘arrow of time’. You can use radiation’s speed and ‘motion’ as a microscopic example of our ‘classic’ causality chains, and planets’ orbits as an example of macroscopic causality chains.

24. yoron says: Oh, thanks, wish there was a way to edit and also remove a double post though. It looks silly with two posts stating the same 🙂 And, thinking of it: both QM and Relativity assume an arrow of time existing. Otherwise you can’t have statistics, as there would be no order on which you could base your expectations in quantum mechanics.

25. “Imagine putting electron number 1 in atom number 1 and electron number 2 in atom number 2. Well after waiting a while it doesn’t anymore make sense to say that “electron number 1 is still in atom number 1”.
It might be in atom number 2 now […]” Seems like this guy needs to reread his Hartree-Fock method for solving Schrödinger’s equation for a multielectron system, not to mention how to build a Slater determinant for a two-electron system. Then again, let’s just hope some kids listen to his lectures, become interested in science and eventually realise he is just (way) overextending some metaphors. Pseudoscience propagates very fast through media like the internet, basically because it doesn’t require you to think/test/prove anything; it just requires you to believe in the premises and flow along the dodgy logic with which the conclusions are weaved. Efforts like yours and those of other bloggers, such as the ones you mentioned in your post, will counteract this propagation and proliferation, but only in time. I don’t know what is worse: a country like mine (Mexico), where science is disregarded and neglected, or the USA, where science is even outlawed (well, not exactly, but you know what I mean) like in the infamous Kansas School Board case! I mean, nowadays it seems like any politician’s stand on evolution should be part of his campaign platform! Ridiculous. Congratulations on yet another wonderful post.

• Thank you for the comment, and the compliment! Indeed, pseudoscience flows far too quickly through the media and the political systems. It’s not even a new problem, really; the use of wordplay to justify a “scientific” conclusion reminds me of this comment from an article criticizing perpetual motion back in the late 1800s:

26. Jason Buckley says: Thanks for this patient and layman-friendly rebuttal. Just seen the lecture for the first time in a 2013 repeat. Not a physicist, but thought the final claims were rather extravagant. Annoying that the BBC just repeats the programme as if it’s not controversial.

• You’re very welcome! Yeah, it is rather irritating. Part of the point of science is admitting when an argument isn’t quite right and revising it accordingly, something that hasn’t been done.

27. Yoron says: It’s nice rereading this one again. Reminds me of all the things I don’t understand 🙂 The reason why this simple transparent mirror effect can’t be simply explained as a ‘set variable’ constructed by the mirror is that by probability that wave function can’t be set until measured, if I remember right? And it is experimentally proved (as I remember it) that there is no way to know, until measured, which polarization the measured particle will have. You can argue that for identical experiments, but with no way to determine how that mirror will ‘influence/polarize’ the photon you measure, there must be a hidden variable? Or you can define it such that the two photons are ‘linked’ in a ‘spooky action at a distance’. But to define that ‘hidden variable’ requires a clearer mind than mine, not that it ever is that clear 🙂 How would a hidden variable exist? Assuming ‘identical experiments’ giving you different polarizations from identical photons? If I now remember this correctly.

28. Yoron says: A crazy thought: how does a photon see the universe? Does it see it, or is it just us seeing? Then it comes down to the arrow.

29. Phil says: A photon “sees” the Universe as a flat 2D plane. Since it takes no time to travel anywhere, everything is at the same “depth”.

• Yoron says: 1. Photons are timeless (as far as physics knows experimentally). 2. Lorentz contraction as observed in the direction of motion should, in the case of a photon, reach? Infinite contraction, or is there a limit where you can assume a point-like existence? 3.
What would happen to a signal from a relativistically moving object, sent in the opposite direction from its motion? It would redshift (waves), and in the case of a photon? Would it warp? And the redshift itself then: is there a limit to how redshifted something can become relative to the moving observer? I could assume that there must be a limit, as I can imagine a stationary (inertial) observer able to watch that ship’s signal, but I’m not sure, although it seems a contradiction in terms. If there are no limits to a redshift, what would that imply in the case of those two observers?

30. Yoron says: The redshift produced by the motion, seen from the moving object, will still be at ‘c’. And the stationary observer should see it redshifted too, at ‘c’ from his point of view. This is assuming a reciprocal effect relative to ‘c’, different coordinate systems, and energy. But then you have the light quantum itself, which shouldn’t change intrinsically?

31. Yoron says: I know. Sometimes one just has to let go. But, it is confusing 🙂

32. This post just earned a “follow”, though I’ll take issue that physics in general rests on a number of theoretical assumptions, from standard cosmology to the “realness” of the wave function. To wit, “…the wave nature of particles seems pretty airtight at this point, especially after the theoretical work of John Stewart Bell on Bell’s inequalities and the experimental verification of this, which strongly demonstrates that no ‘local’ theory of particle behavior can reproduce the observed properties of quantum mechanics.” I’ll just note here that when pressed, Bell himself admitted that a deterministic universe negated this assumption. The press of academic compliance is a powerful influence, especially when it accounts for funding. But even back in 1956, when Chien-Shiung Wu experimentally demonstrated the asymmetry of nuclear electron emissions with regard to spin, the established scientific consensus of that time compelled Wolfgang Pauli (who had first proposed the idea of an electron’s “spin”) to call her work “…total nonsense.” It took two more years of experimental replications for the discovery to be accepted, resulting in a Nobel Prize (though not for Wu). Compounding the problem nowadays are the pop-media personalities who seem to have no problem publicly conflating philosophy and metaphysics with actual science… Cox, Carroll, Neil DeGrasse Tyson, and a seemingly endless stream of “discoveries” that threaten to bring down the established scientific paradigm by attempting to again resuscitate some long-dead theory-of-everything (Fermilab). It’s become profoundly refreshing to hear a genuine scientist have the courage to say, “That’s a good question. I don’t know the answer.”
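Since several of the comments above circle around what, concretely, Bell’s theorem forbids, here is a small self-contained numerical sketch (added for illustration; it comes from none of the commenters). It compares the CHSH combination of correlations S = E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′) for (a) one simple local-hidden-variable model, in which each photon pair carries a definite ‘polarization angle’ fixed at the source, and (b) the quantum singlet-state prediction E(x,y) = −cos(x−y). All function and variable names are ours, and the hidden-variable model is just one illustrative choice; Bell’s theorem bounds every such local model by |S| ≤ 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Measurement angles (radians) that maximize the quantum CHSH value.
a, a2 = 0.0, np.pi / 2         # Alice's two settings
b, b2 = np.pi / 4, -np.pi / 4  # Bob's two settings

def qm_corr(x, y):
    """Quantum singlet-state correlation E(x, y) = -cos(x - y)."""
    return -np.cos(x - y)

def lhv_corr(x, y, n=500_000):
    """An illustrative local-hidden-variable model: each pair carries a
    random angle lam fixed at the source; each side outputs +/-1 from
    the sign of cos(setting - lam), using only its own local setting."""
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))  # minus sign mimics the anti-correlation
    return float(np.mean(A * B))

def chsh(corr):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')."""
    return corr(a, b) + corr(a, b2) + corr(a2, b) - corr(a2, b2)

print(f"local hidden variables: S = {chsh(lhv_corr):+.3f} (|S| <= 2 always)")
print(f"quantum prediction:     S = {chsh(qm_corr):+.3f} (|S| = 2*sqrt(2))")
```

Running this prints |S| ≈ 2.0 for the local model (it happens to saturate the bound at these angles) versus |S| = 2√2 ≈ 2.83 for quantum mechanics; that gap is exactly what the Bell-test experiments mentioned above measure.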
2.1: Free Electron Model of Polyenes

The particle-in-a-box type problems provide important models for several relevant chemical situations

The particle-in-a-box model for motion in one or two dimensions discussed earlier can obviously be extended to three dimensions. For two and three dimensions, it provides a crude but useful picture for electronic states on surfaces (i.e., when the electron can move freely on the surface but cannot escape to the vacuum or penetrate deeply into the solid) or in metallic crystals, respectively. I say metallic crystals because it is in such systems that the outermost valence electrons are reasonably well treated as moving freely rather than being tightly bound to a valence orbital on one of the constituent atoms or within chemical bonds localized to neighboring atoms.

Free motion within a spherical volume such as we discussed in Chapter 1 gives rise to eigenfunctions that are also used in nuclear physics to describe the motions of neutrons and protons in nuclei. In the so-called shell model of nuclei, the neutrons and protons fill separate \(s\), \(p\), \(d\), etc. orbitals (refer back to Chapter 1 to recall how these orbitals are expressed in terms of spherical Bessel functions and what their energies are) with each type of nucleon forced to obey the Pauli exclusion principle (i.e., to have no more than two nucleons in each orbital because protons and neutrons are Fermions). For example, \(^4He\) has two protons in \(1s\) orbitals and 2 neutrons in \(1s\) orbitals, whereas \(^3He\) has two \(1s\) protons and one \(1s\) neutron. To remind you, I display in Figure 2.1 the angular shapes that characterize \(s\), \(p\), and \(d\) orbitals.

Figure 2.1. The angular shapes of \(s\), \(p\), and \(d\) functions

This same spherical box model has also been used to describe the valence electrons in quasi-spherical nano-clusters of metal atoms such as \(Cs_n\), \(Cu_n\), \(Na_n\), \(Au_n\), \(Ag_n\), and their positive and negative ions. Because of the metallic nature of these species, their valence electrons are essentially free to roam over the entire spherical volume of the cluster, which renders this simple model rather effective. In this model, one thinks of each valence electron being free to roam within a sphere of radius \(R\) (i.e., having a potential that is uniform within the sphere and infinite outside the sphere).

The orbitals that solve the Schrödinger equation inside such a spherical box are not the same in their radial shapes as the \(s\), \(p\), \(d\), etc. orbitals of atoms because, in atoms, there is an additional attractive Coulomb radial potential \(V(r) = -Ze^2/r\) present. In Chapter 1, we showed how the particle-in-a-sphere radial functions can be expressed in terms of spherical Bessel functions. In addition, the pattern of energy levels, which was shown in Chapter 1 to be related to the values of \(x\) at which the spherical Bessel functions \(j_L(x)\) vanish, is not the same as in atoms, again because the radial potentials differ. However, the angular shapes of the spherical box problem are the same as in atomic structure because, in both cases, the potential is independent of \(\theta\) and \(\phi\). As the orbital plots shown above indicate, the angular shapes of \(s\), \(p\), and \(d\) orbitals display a varying number of nodal surfaces. The \(s\) orbitals have none, \(p\) orbitals have one, and \(d\) orbitals have two.
Analogous to how the number of nodes relates to the total energy of the particle constrained to the \(xy\) plane, the number of nodes in the angular wave functions indicates the amount of angular or orbital rotational energy. Orbitals of \(s\) shape have no angular energy, those of \(p\) shape have less than do \(d\) orbitals, etc.

It turns out that the pattern of energy levels derived from this particle-in-a-spherical-box model can offer reasonably accurate descriptions of what is observed experimentally. In particular, when a cluster (or cluster ion) has a closed-shell electronic configuration in which, for a given radial quantum number \(n\), all of the \(s\), \(p\), \(d\) orbitals associated with that \(n\) are doubly occupied, nanoscopic metal clusters are observed to display special stability (e.g., lack of chemical reactivity, large electron detachment energy). Clusters that produce such closed-shell electronic configurations are sometimes said to have magic-number sizes. The energy level expression given in Chapter 1

\[E_{L,n} = V_0 + (z_{L,n})^2 \dfrac{\hbar^2}{2mR^2} \tag{2.1}\]

for an electron moving inside a sphere of radius \(R\) (and having a potential relative to the vacuum of \(V_0\)) can be used to model the energies of electrons within metallic nano-clusters. Each electron occupies an orbital having quantum numbers \(n\), \(L\), and \(M\), with the energies of the orbitals given above in terms of the zeros \(\{z_{L,n}\}\) of the spherical Bessel functions. Spectral features of the nano-clusters are then determined by the energy gap between the highest occupied and lowest unoccupied orbital and can be tuned by changing the radius (\(R\)) of the cluster or the charge (i.e., number of electrons) of the cluster.

Another very useful application of the model problems treated in Chapter 1 is the one-dimensional particle-in-a-box, which provides a qualitatively correct picture for \(\pi\)-electron motion along the \(p_{\pi}\) orbitals of delocalized polyenes. The one Cartesian dimension corresponds to motion along the delocalized chain. In such a model, the box length \(L\) is related to the carbon-carbon bond length \(R\) and the number \(N\) of carbon centers involved in the delocalized network by \(L=(N-1)R\). In Figure 2.2, such a conjugated network involving nine centers is depicted. In this example, the box length would be eight times the C-C bond length.

Figure 2.2. The \(\pi\) atomic orbitals of a conjugated chain of nine carbon atoms, so the box length \(L\) is eight times the C-C bond length.

The eigenstates \(\psi_n(x)\) and their energies \(E_n\) represent orbitals into which electrons are placed. In the example case, if nine \(\pi\) electrons are present (e.g., as in the 1,3,5,7-nonatetraene radical), the ground electronic state would be represented by a total wave function consisting of a product in which the lowest four \(\psi\)'s are doubly occupied and the fifth \(\psi\) is singly occupied:

\[\Psi = \psi_1 \alpha\psi_1\beta \psi_2 \alpha \psi_2 \beta \psi_3 \alpha \psi_3\beta \psi_4 \alpha \psi_4 \beta \psi_5 \alpha. \tag{2.2}\]

The \(z\)-component spin angular momentum states of the electrons are labeled \(\alpha\) and \(\beta\) as discussed earlier. We write the total wave function above as a product wave function because the total Hamiltonian involves the kinetic plus potential energies of nine electrons.
To the extent that this total energy can be represented as the sum of nine separate energies, one for each electron, the Hamiltonian allows a separation of variables

\[H \cong \sum_{j=1}^9 H(j) \tag{2.3}\]

in which each \(H(j)\) describes the kinetic and potential energy of an individual electron. Of course, the full Hamiltonian contains electron-electron Coulomb interaction potentials \(e^2/r_{i,j}\) that cannot be written in this additive form. However, as we will treat in detail in Chapter 6, it is often possible to approximate these electron-electron interactions in a form that is additive.

Recall that when a partial differential equation has no operators that couple its different independent variables (i.e., when it is separable), one can use separation of variables methods to decompose its solutions into products. Thus, the (approximate) additivity of \(H\) implies that solutions of \(H \psi = E \psi\) are products of solutions to

\[H(j) \psi (\textbf{r}_j) = E_j \psi(\textbf{r}_j). \tag{2.4}\]

The two lowest \(\pi\pi^*\) excited states would correspond to states of the form

\[\psi^* = \psi_1\alpha \psi_1\beta \psi_2\alpha \psi_2\beta \psi_3\alpha \psi_3\beta \psi_4\alpha \psi_5\beta \psi_5\alpha, \tag{2.5a}\]

\[\psi'^* = \psi_1\alpha \psi_1\beta \psi_2\alpha \psi_2\beta \psi_3\alpha \psi_3\beta \psi_4\alpha \psi_4\beta \psi_6\alpha,\tag{2.5b}\]

where the spin-orbitals (orbitals multiplied by \(\alpha\) or \(\beta\)) appearing in the above products depend on the coordinates of the various electrons. For example,

\[\psi_1\alpha \psi_1\beta \psi_2\alpha \psi_2\beta \psi_3\alpha \psi_3\beta \psi_4\alpha \psi_5\beta \psi_5\alpha \tag{2.6a}\]

denotes the product

\[ \psi_1\alpha(\textbf{r}_1) \psi_1\beta (\textbf{r}_2) \psi_2\alpha (\textbf{r}_3) \psi_2\beta (\textbf{r}_4) \psi_3\alpha (\textbf{r}_5) \psi_3\beta (\textbf{r}_6) \psi_4\alpha (\textbf{r}_7)\psi_5\beta (\textbf{r}_8) \psi_5\alpha (\textbf{r}_9). \tag{2.6b}\]

The electronic excitation energies from the ground state to each of the above excited states within this model would be

\[\Delta{E^*} = \dfrac{ \pi^2 \hbar^2}{2m} \left[ \dfrac{5^2}{L^2} - \dfrac{4^2}{L^2}\right] \tag{2.7a}\]

\[\Delta{E'^*} = \dfrac{ \pi^2 \hbar^2}{2m} \left[ \dfrac{6^2}{L^2} - \dfrac{5^2}{L^2}\right]. \tag{2.7b}\]

It turns out that this simple model of \(\pi\)-electron energies provides a qualitatively correct picture of such excitation energies. Its simplicity allows one, for example, to easily suggest how a molecule’s color (as reflected in the complementary color of the light the molecule absorbs) varies as the conjugation length \(L\) of the molecule varies. That is, longer conjugated molecules have lower-energy orbitals because \(L^2\) appears in the denominator of the energy expression. As a result, longer conjugated molecules absorb light of lower energy than do shorter molecules.

This simple particle-in-a-box model does not yield orbital energies that relate to ionization energies unless the potential inside the box is specified. Choosing the value of this potential \(V_0\) that exists within the box such that \(V_0 + \dfrac{\pi^2 \hbar^2}{2m} \dfrac{5^2}{L^2}\) is equal to minus the lowest ionization energy of the 1,3,5,7-nonatetraene radical gives energy levels (as \(E = V_0 + \dfrac{\pi^2 \hbar^2}{2m} \dfrac{n^2}{L^2}\)), which can then be used as approximations to ionization energies.
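As a quick numerical check of equations (2.7a) and (2.7b), the short script below (not part of the original text) evaluates the two excitation energies for the nine-carbon chain, taking an assumed typical C-C bond length of 1.40 Å so that \(L = 8 \times 1.40\) Å; the function and constant names are ours.

```python
import numpy as np

HBAR = 1.054571817e-34     # J s
M_E = 9.1093837015e-31     # electron mass, kg
H_PLANCK = 6.62607015e-34  # J s
C_LIGHT = 2.99792458e8     # m/s
EV = 1.602176634e-19       # J per eV

def box_energy(n, L):
    """Particle-in-a-box level: E_n = n^2 pi^2 hbar^2 / (2 m L^2)."""
    return (n * np.pi * HBAR) ** 2 / (2.0 * M_E * L ** 2)

R_CC = 1.40e-10  # assumed C-C bond length, m
L = 8 * R_CC     # nine carbon centers: L = (N - 1) R

# Eq. (2.7a): psi_4 -> psi_5, and Eq. (2.7b): psi_5 -> psi_6
for n_from, n_to in [(4, 5), (5, 6)]:
    dE = box_energy(n_to, L) - box_energy(n_from, L)
    lam_nm = H_PLANCK * C_LIGHT / dE * 1e9
    print(f"psi_{n_from} -> psi_{n_to}: dE = {dE / EV:.2f} eV, "
          f"lambda = {lam_nm:.0f} nm")
```

With this assumed bond length the two gaps come out at roughly 2.7 eV (about 460 nm) and 3.3 eV (about 380 nm), i.e. in or near the visible, and increasing \(L\) pushes both absorptions to lower energy, exactly the qualitative trend described above.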
The individual \(\pi\)-molecular orbitals

\[\psi_n = \sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi x}{L}\Big) \tag{2.8}\]

are depicted in Figure 2.3 for a model of the 1,3,5-hexatriene \(\pi\)-orbital system for which the box length \(L\) is five times the distance \(R_{CC}\) between neighboring pairs of carbon atoms. The magnitude of the \(k^{th}\) C-atom centered atomic orbital in the \(n^{th}\) \(\pi\)-molecular orbital is given by

\[\sqrt{\dfrac{2}{L}} \sin\Big(\dfrac{n\pi(k-1)R_{CC}}{L}\Big).\]

Figure 2.3. The phases of the six molecular orbitals of a chain containing six atoms. In this figure, positive amplitude is denoted by the clear spheres, and negative amplitude is shown by the darkened spheres. Where two spheres of like shading overlap, the wave function has enhanced amplitude (i.e., there is a bonding interaction); where two spheres of different shading overlap, a node occurs (i.e., there is an antibonding interaction).

Once again, we note that the number of nodes increases as one ranges from the lowest-energy orbital to higher-energy orbitals. The reader is once again encouraged to keep in mind this ubiquitous characteristic of quantum mechanical wave functions.

This simple model allows one to estimate spin densities at each carbon center and provides insight into which centers should be most amenable to electrophilic or nucleophilic attack. For example, radical attack at the \(C_5\) carbon of the nine-atom nonatetraene system described earlier would be more facile for the ground state \(\psi\) than for either \(\psi^*\) or \(\psi'^*\). In the former, the unpaired spin density resides in \(\psi_5\) (which varies as \(\sin(5\pi x/8R_{CC})\) and so is non-zero at \(x = L/2\)), which has non-zero amplitude at the \(C_5\) site \(x = L/2 = 4R_{CC}\). In \(\psi^*\) and \(\psi'^*\), the unpaired density is in \(\psi_4\) and \(\psi_6\), respectively, both of which have zero density at \(C_5\) (because \(\sin(n\pi x/8R_{CC})\) vanishes for \(n = 4\) or \(6\) at \(x = 4R_{CC}\)). Plots of the wave functions for \(n\) ranging from 1 to 7 are shown in another format in Figure 2.4, where the nodal pattern is emphasized.

Figure 2.4. The nodal pattern for a chain containing seven atoms

I hope that by now the student is not tempted to ask how the electron gets from one region of high amplitude, through a node, to another high-amplitude region. Remember, such questions are cast in classical Newtonian language and are not appropriate when addressing the wave-like properties of quantum mechanics.

Contributors and Attributions: Jack Simons (Henry Eyring Scientist and Professor of Chemistry, U. Utah), Telluride Schools on Theoretical Chemistry; integrated by Tomoyuki Hayashi (UC Davis).
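The spin-density argument in the preceding paragraph can be verified directly from the amplitude formula above; the following sketch (ours, with hypothetical helper names) evaluates \(\sin(n\pi(k-1)R_{CC}/L)\) at the \(C_5\) carbon of the nine-carbon chain.

```python
import numpy as np

def mo_amplitude(n, k, n_carbons=9):
    """Relative amplitude of pi-MO psi_n at carbon k (k = 1..n_carbons),
    with box length L = (n_carbons - 1) * R_CC; the common sqrt(2/L)
    normalization factor cancels in the comparison and is omitted."""
    return np.sin(n * np.pi * (k - 1) / (n_carbons - 1))

for n in (4, 5, 6):
    print(f"psi_{n} at C5: {mo_amplitude(n, k=5):+.3f}")
# psi_4 and psi_6 vanish at C5 while psi_5 is maximal there, matching
# the statement that radical attack at C5 is most facile in the ground state.
```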
A technician explains measurements whilst a scientist explains observations.

Quantum mechanics and New Empiricism

Quantum physical interpretations of the universe are not essential or in any way fundamental to the empirical approach to mind. The empirical approach is about stressing that observation is more important than theory; it does not depend on any particular physical theory. However, along with Relativity, the mere existence of quantum theory shows that a simple model of the world where discrete lumps of matter mediate interactions is incomplete. If simple materialism is not a universal physical theory then the regress arguments in the philosophy of mind are not incontrovertible, and mind cannot be dismissed simply because early materialism implies a homunculus (the little man within a little man within a...). Indeed the opposite is true: the homunculus that is implied by materialism means that the observation of mind should be respected and nineteenth-century materialism rejected.

Although quantum physics neither validates nor invalidates New Empiricism it does, however, have interesting consequences for the analysis of mind that may or may not be supported by Empiricism. Modern advances in quantum physics, such as decoherence theory, show that there are several problems that have a direct impact on our idea of mind.

Before addressing the role of quantum theory in the philosophy of mind it is essential to distinguish between two different problems: the first is whether or not the brain could contain a superposition of quantum states adjacent to our normal environment, and the second is the nature of the environment itself. The possibility that the brain may exist in a superposition of states like a quantum computer is interesting but fraught with difficulties (see Tegmark 2000). Electromagnetic fields may be able to sustain a superposition of states (Anglin & Zurek 1996) but I will not discuss superpositions of this type any further here. The second problem, the problem of the nature of the environment, is far more interesting. Decoherence theory has illuminated this problem so that in the 21st century we can get a clearer idea of it than ever before.

The problem of the nature of the environment arises because when a measurement is made on an isolated system that is in a superposition of two states there are two possible outcomes of the measurement: one being the measuring instrument plus system in one state, and the other the measuring instrument plus system in the other state. The measuring instrument and the system are said to be "entangled" and form a new conjoint system with its own superpositions of state. But we only ever see the measuring instrument in one state. Zurek analysed this problem and realised that if the measuring instrument and system combination could be isolated it would form a superposition of states, and this superposition would be extended if we added more measurements to the conjoint system. For instance, if a beam of light struck the measuring instrument it would create a triple entanglement, and if a scientist looked at light photons coming from the instrument it would create a conjoint, four-component entanglement. If this four-part system were isolated it would exist in a superposition of quantum states, each with its own probability. This predicts that there would be as many copies of the scientist as there were possible states.
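The entanglement chain just described can be made quantitative with a standard spin-environment toy model. The sketch below is our illustration, not from the text (the coupling angles and names are arbitrary assumptions): it entangles one system qubit with a growing number of 'measuring' degrees of freedom and prints the magnitude of the off-diagonal element of the system's reduced density matrix, which is what shrinks as the conjoint state grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# System qubit starts in the superposition a|0> + b|1>.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)

def branch_overlap(n_env, max_angle=0.5):
    """|<E_0|E_1>| after n_env environment qubits have each been
    conditionally rotated by a random angle theta_k: the two
    environment branches then overlap by prod_k cos(theta_k)."""
    thetas = rng.uniform(0.0, max_angle, n_env)
    return float(np.prod(np.cos(thetas)))

for n_env in (0, 1, 5, 20, 100):
    off_diag = abs(a * b) * branch_overlap(n_env)
    print(f"{n_env:3d} entangled degrees of freedom: |rho_01| = {off_diag:.3e}")
```

Each extra entangled degree of freedom multiplies the interference term by a factor of magnitude less than one, which is the sense in which the measuring chain, and eventually the whole environment, renders the superposition unobservable to any single copy of the observer.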
Zurek took this analysis yet further and demonstrated that the entire environment becomes a conjoint state because of the interactions between its components. This "environment" has the same properties as the world of classical (non-quantum) physics. Although decoherence theory predicts that there would be as many copies of the scientist as there are states of the system, it also predicts that an individual scientist, i.e. an individual copy, would only observe one state of the system. This scientist is surrounded by a classical world without any significant superpositions of state. Decoherence theory is essentially a "many worlds" interpretation of quantum theory, with many copies of an individual being possible in the multiverse but each individual observing a classical universe.

So far so good, but what about the observer, you and I? If I consider you or your brain there is some, but little, scope within decoherence theory for a superposition of states. If your brain has a synapse that is briefly in a superposition of firing and not-firing, I will discover that it will rapidly adopt one of the two states. So I probably see your brain as a classical device. But what of my own observation? It could be claimed that if my own observation is the same as a single outcome, if it is not a superposition of states, then it is localised to a single branch, a single, entangled, conjoint state, and hence dependent upon the classical state of the brain. However, there is a "sleight of hand" in this argument because there is no way within the argument to distinguish between an environment that is created by the possibility of conscious observation and one that creates conscious observation. According to decoherence theory it would be possible to introduce an observer-system into a non-entangled universe and after a second or two an "environment" would form around the observer-system. A few minutes later the environment that is due to the presence of the observer would be indistinguishable from any other classical environment. So how can we distinguish between an environment that is created by the possibility of conscious observation and an environment in which this is not the case?

If we are to believe the theory that the environment creates our observation then our experience must be fully encompassed by the theory - as scientists we are not entitled to reject observation on the basis of incomplete or flimsy theories. If we are to believe the theory that the environment creates our experience then our experience cannot contain phenomena that are not encompassed by decoherence theory. In fact there is a vast difference between my experience and decoherence theory because my observation contains time as an observable. My own observation is not just like synapses firing; it is composed of objects that are connected in time as well as space (see Time and conscious experience). It has time as a preeminent "observable" and hence does not conform to the Schrödinger equation used in the development of decoherence theory (see Horwitz (2005)). Furthermore, my conscious experience appears to be passive and, as Zeh (2000) pointed out, if our conscious experience is passive the rules of decoherence may not apply and it could well be the spacetime point of our observation that selects the universe where we find ourselves. On this model the environment would consist of those events that are compatible with the spacetime form of conscious observation.
This would be consistent both with decoherence theory as a limited quantum description of the classical environment and with modern cosmology (see for instance Hawking's reflections on spacetime and the existence of humans in: Quantum Cosmology, M-theory and the Anthropic Principle). Our environment would be that part of the multiverse that is consistent with conscious experience, i.e. that part that has 3 spatial and 1 or more temporal extensive dimensions.

This is a different view from the conventional idea of quantum physics and mind. For instance, Zurek (2003) explicitly assumes from the outset that the conscious observer is like a computer, to avoid much of the problem of observation. This assumption that we are like a computer leads to the tautology that a Newtonian environment created by decoherence creates a conscious experience that has previously been defined as something created by a Newtonian environment. This "built-in" assumption has misled many commentators into thinking that decoherence proves that the conscious observer is a product of decoherence, like a digital computer, and hence immaterial to quantum physics. If Zurek is right and experience is classical then it cannot contain time-extended events. If observation is right then our experience is non-classical from the start, embedding the energy-time form of the Heisenberg Uncertainty Principle within it, and the classical world originates in observers.

Proving that Zurek is wrong will require experiments. I would suggest analysing delayed-choice experiments; these seem to show that decoherence originates at the position of the observer rather than being a property of the average state of the environment. Kent (2005) points out that if wavefunction collapse were to occur some time after events occur (i.e. wavefunction collapse occurs in the brain) then there are ways of testing this using entangled particles. He notes that the QM experiments performed to date cannot distinguish between non-local and local collapse of the wavefunction in the brain but could be amended to do so. Unfortunately Kent presumes that conscious experience occurs 0.1 sec after sensory events whereas neuroscientists know that the gap is more like 0.5 secs; let us hope that no physicist wastes a year doing this experiment only to be told that the time gap should have been 0.4 secs greater.

Some further reading

Anglin, J.R. & Zurek, W.H. (1996). Decoherence of quantum fields: decoherence and predictability. Phys. Rev. D53 (1996) 7327-7335. http://arxiv.org/abs/quant-ph/9510021

Baker, D. (2006). Measurement Outcomes and Probability in Everettian Quantum Mechanics. http://philsci-archive.pitt.edu/archive/00002717/

Bachtold, M. (2008). Five Formulations of the Quantum Measurement Problem in the Frame of the Standard Interpretation. J Gen Philos Sci (2008) 39:17-33

Bitbol, M. (2008). Consciousness, Situations and the measurement problem of quantum mechanics. NeuroQuantology, 6, 203-213, 2008

Horwitz, L.P. (2005). On the Significance of a Recent Experiment Demonstrating Quantum Interference in Time. http://www.arxiv.org/pdf/quant-ph/0507044

Kent, A. (2005). Causal Quantum Theory and the Collapse Locality Loophole. Physical Review A 72, 012107 (2005). http://www.citebase.org/abstract?id=oai%3AarXiv.org%3Aquant-ph%2F0204104

Saunders, S. (1996). Time, Quantum Mechanics and Probability. http://philsci-archive.pitt.edu/archive/00000465/00/Part3uj(S).pdf

Tegmark, M. (2000). The importance of quantum decoherence in brain processes. Phys. Rev. E61 (2000) 4194-4206
Wallace, D. (2007). The Quantum Measurement Problem: State of Play. http://arxiv.org/abs/0712.0149

Zeh, H.D. (2000). The Problem of Conscious Observation in Quantum Mechanical Description. http://arxiv.org/abs/quant-ph/9908084

Zurek, W.H. (2003). Decoherence, einselection and the quantum origins of the classical. Rev. Mod. Phys. 75, 715 (2003). http://arxiv.org/abs/quant-ph/0105127

1 comment:

1. Incidentally, given that you have some criticisms of Zurek, I wonder what you'd think of a quantum physicist approaching the question from another angle. Specifically, Richard Conn Henry, who apparently concludes from QM that the universe is mental in nature? His "The Mental Universe" is worth reading if you haven't encountered it yet, available on that site. I have a feeling you may be critical of him as well, but as I said, he comes at the issue from a different angle - and he certainly isn't beholden to materialism. Same for Henry Stapp.
Turmoil in sluggish electrons' existence

22 May 2017

Electrons in non-conducting materials could be called 'sluggish'. Typically, they remain fixed in a location, deep inside an atomic composite. Life is hence relatively calm in a dielectric crystal lattice. This idyll has now been heavily shaken up by a team of physicists from various research institutions, including the Laboratory of Attosecond Physics (LAP) at the Ludwig-Maximilians-Universität Munich (LMU) and the Max Planck Institute of Quantum Optics (MPQ), the Institute of Photonics and Nanotechnologies (IFN-CNR) in Milan, the Institute of Physics at the University of Rostock, the Max Born Institute (MBI), the Center for Free-Electron Laser Science (CFEL) and the University of Hamburg. For the first time, these researchers managed to directly observe the interaction of light and electrons in a dielectric, a non-conducting material, on timescales of attoseconds (billionths of a billionth of a second).

The scientists beamed light flashes lasting only a few hundred attoseconds onto 50-nanometer-thick glass particles, which released electrons inside the material. Simultaneously, they irradiated the glass particles with an intense light field, which interacted with the electrons for a few femtoseconds (millionths of a billionth of a second), causing them to oscillate. This resulted, generally, in two different reactions by the electrons. First they started to move, then they collided with atoms within the particle, either elastically or inelastically. Because of the dense crystal lattice, the electrons could move freely between each of the interactions for only a few ångström (10⁻¹⁰ meter).

"Analogous to billiards, the energy of electrons is conserved in an elastic collision, while their direction can change. For inelastic collisions, atoms are excited and part of the kinetic energy is lost. In our experiments, this energy loss leads to a depletion of the electron signal that we can measure," explains Prof. Francesca Calegari (CNR-IFN Milan and CFEL/University of Hamburg). Since chance decides whether a collision occurs elastically or inelastically, with time inelastic collisions will eventually take place, reducing the number of electrons that scattered only elastically.

Employing precise measurements of the electrons' oscillations within the intense light field, the researchers managed to find out that it takes about 150 attoseconds on average until elastically colliding electrons leave the nanoparticle. "Based on our newly developed theoretical model we could extract an inelastic collision time of 370 attoseconds from the measured time delay. This enabled us to clock this process for the first time," describes Prof. Thomas Fennel from the University of Rostock and Berlin's Max Born Institute in his analysis of the data.

The researchers' findings could benefit medical applications. With these first-of-their-kind ultrafast measurements of electron motions inside non-conducting materials, they have obtained important insight into the interaction of radiation with matter, which shares similarities with human tissue. The energy of released electrons is controlled via the incident light, such that the process can be investigated for a broad range of energies and for various dielectrics. "Every interaction of high-energy radiation with tissue results in the generation of electrons.
These in turn transfer their energy via inelastic collisions onto atoms and molecules of the tissue, which can destroy it. Detailed insight about electron scattering is therefore relevant for the treatment of tumors. It can be used in computer simulations to optimize the destruction of tumors in radiotherapy while sparing healthy tissue," says Prof. Matthias Kling, highlighting the impact of the work. As a next step, the scientists plan to replace the glass nanoparticles with water droplets to study the interaction of electrons with the very substance which makes up the largest part of living tissue. (Text: Thorsten Naeser)

Figure: A team of physicists clocked the time it takes electrons to leave a dielectric after their generation with extreme ultraviolet light. The measurement (false color plot) was the first of its kind in a dielectric material and yielded a time of 150 attoseconds (as), from which the physicists determined that inelastic scattering in the dielectric takes about 370 as.

Original publication: Nature Physics (2017) doi:10.1038/nphys4129, Attosecond Chronoscopy of Electron Scattering in Dielectric Nanoparticles. L. Seiffert, Q. Liu, S. Zherebtsov, A. Trabattoni, P. Rupp, M. C. Castrovilli, M. Galli, F. Süßmann, K. Wintersperger, J. Stierle, G. Sansone, L. Poletto, F. Frassetto, I. Halfpap, V. Mondes, C. Graf, E. Rühl, F. Krausz, M. Nisoli, T. Fennel, F. Calegari, M. F. Kling.

Prof. Dr. Thomas Fennel, Tel. 030 6392 1245

Thomas Fennel started as a Heisenberg fellow at the MBI

12 April 2017

Prof. Thomas Fennel, group leader at the Institute of Physics at the University of Rostock, has been awarded a prestigious Heisenberg Fellowship funded by the Deutsche Forschungsgemeinschaft (DFG).

Prof. Dr. Thomas Fennel - Photo: Julia Tetzke, Uni Rostock

With the Heisenberg fellowship, which officially started on January 1st 2017, the DFG is supporting a research project to explore new routes for imaging and controlling ultrafast electronic motion in nanostructures. The underlying research will be carried out in a joint effort between Prof. Fennel's team at the University of Rostock and researchers in division A of the Max Born Institute, which is led by Prof. Marc Vrakking and to which Prof. Fennel is affiliated as an associated researcher.

The research activities are devoted to the active manipulation and visualization of ultrafast correlated and collective electron motion in finite systems. On the one hand, routes to the control of electronic processes in clusters, nanoparticles, and jets on the timescale of a single optical cycle of light, via its detailed electric waveform or with multi-color fields, will be explored theoretically and experimentally. On the other hand, the technology for characterizing the attosecond electron motion in nanostructures via coherent diffractive imaging experiments, using ultrashort intense XUV and x-ray laser pulses from free-electron lasers and lab-based high-harmonic sources, will be developed. Finally, both approaches should be combined to trace light-induced electron dynamics with unprecedented spatial and temporal resolution and to reveal its classical and quantum aspects. Prof. Fennel is an expert in numerical many-particle physics and nanophotonics.
He aims at the further development of atomistic electromagnetic plasma simulations and the efficient inclusion of the relevant quantum dynamics to tackle the challenging scientific questions of the project. The Max Born Institute is happy to welcome Prof. Fennel and is looking forward to a fruitful collaboration with the local experimental and theoretical groups.

Prof. Dr. Marc Vrakking, Tel. (030) 6392 1200
Prof. Dr. Thomas Fennel, Tel. (030) 6392 1295

Nanostructures give directions to efficient laser-proton accelerators

14 March 2017

Nanostructured surfaces have manifold applications. Among others, they are used to selectively increase the absorption of light. You can find them wherever light harvesting is the key point, e.g. in photovoltaics. In laser proton acceleration, too, this approach attracts a lot of attention, as nanostructured targets hold the promise of significantly increasing maximum proton energies and proton numbers at a given laser energy. As for any other new technology, high efficiency is key for potential future use. Scientists at the Max Born Institute (MBI) in Berlin have now investigated under which conditions the use of nanostructures in laser ion acceleration is beneficial.

If an ultrashort laser pulse (~30 fs, >1 J) is focused onto a solid target foil such that relativistic intensities (>10¹⁸ W/cm²) are reached, matter is transformed immediately into a plasma by field ionization. Electrons are accelerated to relativistic energies in the laser field. While the fastest electrons can leave the target, those with less (but still relativistic) energy are trapped in the Coulomb field of the (now) positively charged target and start to oscillate in this field. They form a dynamic sheath that, together with the target surface, generates an electric field of several megavolts per micrometer, in which positive ions (e.g. protons and carbon ions from the surface contamination layer) experience extreme acceleration. This process is called target normal sheath acceleration (TNSA). Fig. 1 shows an image of such a proton bunch.

The idea behind using nanostructured surfaces is now straightforward: nanostructures increase laser absorption, i.e. more, and more energetic, electrons are generated, which, in turn, can accelerate protons to higher energies. But there are also alternatives for optimizing the TNSA mechanism - particularly important is the optimization of the plasma gradient, i.e. the density profile of the target. The laser intensities applied are so huge that ionization of the target does not only happen when the peak of the laser pulse interacts with the target, but already starts during the rising edge of the pulse. The pre-ionized plasma expands, and the plasma density decreases. The plasma gradient is therefore essentially determined by the exact temporal pulse structure.

The team of Dr. Matthias Schnuerer from the Max Born Institute in Berlin has investigated under which conditions the use of nanostructured targets is beneficial. For this purpose, the physicists have laser-structured their targets in situ. This method of generating periodic surface structures via a laser (LIPSS) is particularly simple and in principle allows the development of a high-repetition-rate target system. In a first step, the target surface is nanostructured by applying about 20 strongly attenuated laser pulses. A representative scanning electron microscopy image of such a surface is shown in fig. 2. The structural parameters are similar to those that maximize laser absorption.
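For orientation, the 'relativistic intensity' threshold quoted above can be checked against the standard normalized-vector-potential estimate a0 ≈ 0.85 λ[µm] √(I / 10¹⁸ W cm⁻²), where a0 ≳ 1 marks the regime of relativistic electron quiver motion. The snippet below is our illustrative sketch, not from the article; in particular, the 0.8 µm Ti:sapphire wavelength is an assumption.

```python
import math

def a0(intensity_w_cm2, wavelength_um=0.8):
    """Normalized vector potential a0 = e E / (m_e c omega), via the
    common engineering formula a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18)."""
    return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

for intensity in (5e17, 1e18, 1e19, 1e20):
    print(f"I = {intensity:.0e} W/cm^2  ->  a0 = {a0(intensity):.2f}")
```

At the 5×10¹⁷ W/cm² point discussed below, a0 is still below one, while at 10¹⁹-10²⁰ W/cm² the quiver motion is strongly relativistic, which offers one way to see why the benefit of nanostructuring can differ between the two regimes.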
Structural analysis and simulations show that these structures possess nearly optimal parameters for maximum laser absorption. In the following step, a single fully amplified pulse is focused onto this nanostructured area. Dr. Andrea Lübcke and her co-workers have investigated the influence of those nanostructures on the proton spectrum for different laser intensities. They chose a laser contrast that is optimal at the highest intensities. First of all, the scientists could show that the nanostructures remain functional even at the highest intensities at the present contrast conditions, in the sense that they increase the laser absorption, as evident from an increase of the Kα yield (see Fig. 2a). For relatively low intensities, nanostructures significantly enhance both the conversion efficiency and the proton energies. For example, at 5×10^17 W/cm^2 the maximum proton energies were increased by a factor of four, and the conversion efficiency from laser to proton energy was even enhanced by two orders of magnitude. However, at the highest laser intensities with optimal laser plasma parameters, no significant benefits from the nanostructures for ion acceleration were measured (Fig. 2b,c). The researchers speculate about fundamental limitations in the energy transfer processes. The scientists were, however, not fully surprised by these results: as in many optimization problems, there are different paths to the optimum, and combining them usually does not lead to an even better result. So far, these experiments performed at extreme conditions cannot be theoretically simulated in every respect. It is therefore the merit of this work to have clarified under which conditions the use of nanostructures is beneficial and in which direction new theoretical investigations can be initiated.

Fig. 1: Laser accelerated ions, becoming visible in a Wilson chamber.

Fig. 2: Typical scanning electron microscopy image of a nanostructured titanium surface (top). The Kα yield (a) of the nanostructured target is enhanced compared to the plane target over the entire investigated intensity range and indicates that nanostructures are functional even at the highest intensity. In contrast, the conversion efficiency (energy transfer into fast protons) (b, logarithmic scale) and the maximum proton energy (c) of the two different targets approach each other at the highest intensities.

Original publication: Scientific Reports 7, 44030 (2017) doi:10.1038/srep44030 "Prospects of target nanostructuring for laser proton acceleration" Andrea Luebcke, Alexander A. Andreev, Sandra Hoehm, Ruediger Grunwald, Lutz Ehrentraut, Matthias Schnuerer

Dr. Andrea Luebcke Tel. 030 6392 1247
Dr. Matthias Schnuerer Tel. 030 6392 1350

Lattice of nanotraps and line narrowing in Raman gas

8 February 2017

Decreasing the emission linewidth from a molecule is one of the key aims in precision spectroscopy. One approach is based on cooling molecules to near absolute zero. An alternative way is to localize the molecules on a subwavelength scale. A novel approach in this direction uses a standing wave in a gas-filled hollow fibre. It creates an array of deep, nanometer-scale traps for Raman-active molecules, resulting in linewidth narrowing by a factor of 10 000.

The radiation emitted by atoms and molecules is usually spectrally broadened due to the motion of the emitters, which results in the Doppler effect. Overcoming this broadening is a difficult task, in particular for molecules.
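For orientation, the scale of the thermal Doppler broadening that such traps circumvent can be estimated with the standard linewidth formula; the sketch below is my own illustration with assumed parameters (the effective broadening of Raman sidebands additionally depends on the scattering geometry, so this is only an order-of-magnitude guide):

```python
# Rough sketch (illustrative assumptions, not from the paper): the thermal
# Doppler FWHM of an emission line is dnu = (nu0/c) * sqrt(8*kB*T*ln2/m).
import numpy as np

kB = 1.380649e-23   # J/K
c = 2.998e8         # m/s
u = 1.66054e-27     # kg, atomic mass unit

def doppler_fwhm_hz(nu0_hz, mass_kg, temperature_K):
    return nu0_hz / c * np.sqrt(8 * kB * temperature_K * np.log(2) / mass_kg)

nu0 = c / 1130e-9                        # optical frequency at the Stokes wavelength
fwhm = doppler_fwhm_hz(nu0, 2 * u, 300)  # molecular hydrogen (~2 u), room temperature
print(f"naive single-photon Doppler FWHM ~ {fwhm/1e9:.1f} GHz")
# Thermal broadening is many orders of magnitude above the 15 kHz linewidth
# reported below for the trapped molecules.
```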
One possibility to overcome the molecular motion is by building deep potential traps with small dimensions. Previously, this was done e.g. by arranging several counterpropagating beams in a complicated setup, with limited success. In a cooperation effort of the Max Born Institute (A. Husakou) and the Xlim Institute in Limoges, researchers show that subwavelength localization and line narrowing are possible in a very simple arrangement, due to self-organization of a Raman gas (molecular hydrogen) in a hollow photonic crystal fibre. Due to Raman scattering, the continuous-wave pump light transforms into the so-called Stokes sideband, which travels back and forth in the fibre due to reflections from the fibre ends and forms a stationary interference pattern - a standing wave with alternating regions of high and low field [Fig. 1]. In the high-field regions, the Raman transition is saturated and is not active, and the molecules have high potential energy since they are partially in the excited state. In the low-field regions, the molecules are Raman-active, and they have low potential energy since they are close to the ground state. These low-field regions form an array of roughly 40 000 narrow, strong traps, which contain localized Raman-active molecules. The size of these traps is around 100 nm (1 nm = 10^-9 m), which is much smaller than the light wavelength of 1130 nm. Therefore the emitted Stokes sidebands have a very narrow spectral width of only 15 kHz - this is 10 000 times narrower than the Doppler-broadened sidebands for the same conditions!

The self-organization of the gas also manifests itself on the macroscopic scale. First, the calculations show that the Raman process mainly happens exactly in the fibre section where the standing wave is formed, as shown in the top panel of Fig. 1. Second, the macroscopic gradient of the potential leads to a gas flow towards the fibre end, which is observed by eye in the experiment. This strong localization and the linewidth narrowing can find various uses, e.g. in spectroscopy. It can also be used as a method to periodically modulate the density of the gas, which is naturally suited for developing quasi-phase-matching schemes for other nonlinear processes, such as effective generation of high harmonics.

Fig. 1: On the macroscopic scale, the pump light transforms into forward-propagating Stokes (FS) radiation, which is partially reflected from the fibre end and becomes backward-propagating Stokes radiation (BS), which is also amplified by the pump. In the region where both FS and BS are strong, they form the interference pattern of a standing wave, which is shown on the microscopic scale. In the low-field regions (denoted by red-color molecules) the molecules are in the ground state and strongly trapped, as shown by the potential in the bottom panel. Exactly these trapped molecules are Raman-active, leading to line narrowing.

Original publication: Nature Communications 7, 12779 (2016) doi:10.1038/ncomms12779 "Raman gas self-organizing into deep nano-trap lattice" M. Alharbi, A. Husakou, M. Chafer, B. Debord, F. Gérôme and F. Benabid

Dr. A. Husakou Tel. 030 6392 1280

Ultrasmall atom motions recorded with ultrashort x-ray pulses

1 February 2017

Periodic motions of atoms over a length of a billionth of a millionth of a meter (10^-15 m) are mapped by ultrashort x-ray pulses.
In a novel type of experiment, regularly arranged atoms in a crystal are set into vibration by a laser pulse, and a sequence of snapshots is generated via changes of x-ray absorption.

A crystal represents a regular and periodic spatial arrangement of atoms or ions which is held together by forces between their electrons. The atomic nuclei in this array can undergo different types of oscillations around their equilibrium positions, the so-called lattice vibrations or phonons. The spatial elongation of the nuclei in a vibration is much smaller than the distance between atoms, the latter being determined by the distribution of electrons. Nevertheless, the vibrational motions act back on the electrons, modulate their spatial distribution and change the electric and optical properties of the crystal on a time scale which is shorter than 1 ps (10^-12 s). To understand these effects and exploit them for novel, e.g., acousto-optical, devices, one needs to image the delicate interplay of nuclear and electronic motions on a time scale much shorter than 1 ps.

In a recent Rapid Communication in Physical Review B, researchers from the Max Born Institute in Berlin (Germany), the Swiss Federal Laboratories for Materials Science and Technology in Dübendorf (Switzerland), and the National Institute of Standards and Technology, Gaithersburg (USA) apply a novel method of optical pump - soft x-ray probe spectroscopy for generating coherent atomic vibrations in small LiBH4 crystals and reading them out via changes of x-ray absorption. In their experiments, an optical pump pulse centered at 800 nm excites via impulsive Raman scattering a coherent optical phonon with Ag symmetry [movie]. The atomic motions change the distances between the Li+ and (BH4)- ions. The change in distance modulates the electron distribution in the crystal and, thus, the x-ray absorption spectrum of the Li+ ions. In this way, the atomic motions are mapped into a modulation of soft x-ray absorption at the so-called Li K-edge around 60 eV. Ultrashort x-ray pulses measure the x-ray absorption change at different times. From this series of snapshots the atomic motions are reconstructed. This novel experimental scheme is highly sensitive and allows for the first time to kick off and detect extremely small amplitudes of atomic vibrations. In our case, the Li+ ions move over a distance of only 3 femtometers = 3 × 10^-15 m, which is comparable to the diameter of the Li+ nucleus and 100 000 times smaller than the distance between the ions in the crystal. The experimental observations are in excellent agreement with in-depth theoretical calculations of transient x-ray absorption. This new type of optical pump - soft x-ray probe spectroscopy on a femtosecond time scale holds strong potential for measuring and understanding the interplay of nuclear and electronic motions in liquid and solid matter, a major prerequisite for theoretical simulations and applications in technology.

Fig. 1: In an x-ray absorption experiment light excites a strongly bound core electron into a conduction band state. On the left of the figure such a transition is shown. An electron which is strongly bound to a Lithium nucleus (green) is excited into a conduction band state (red) that interacts with both the Lithium nucleus and the Borohydride group. This conduction band state is therefore sensitive to a modulation of the distance Q between the Lithium nucleus and the Borohydride group, and as a result the x-ray absorption process is sensitive to such a modulation (cf. Figs.
2(b) and 3(d) in the main article). On the right side of the figure the Lithium K-edge x-ray absorption spectrum for different, strongly exaggerated displacements is shown.

Movie: What happens in the unit cell of crystalline LiBH4 after impulsive Raman excitation with a femtosecond laser pulse? Upper panel: measured transient absorption change ΔA(t) (symbols) as we vary the time delay between infrared pump pulses and soft x-ray probe pulses at a photon energy of ħω = 61.5 eV [cf. Fig. 3(a) in the main article]. The lower box shows the atoms in the unit cell of LiBH4 with red boron atoms, gray hydrogen atoms, and green Li atoms. The moving blue circle in the upper panel is synchronized with the moving atoms in the lower panel. The amplitude of the motion is strongly exaggerated (by a factor of 30 000) to visualize the pattern of the motion. The reddish color of the unit cell indicates the intensity of the infrared pump pulse.

Original publication: Physical Review B 95, 081101(R) (2017) "Ultrafast modulation of electronic structure by coherent phonon excitations" J. Weisshaupt, A. Rouzée, M. Woerner, M. J. J. Vrakking, T. Elsaesser, E. L. Shirley, and A. Borgschulte

Dr. Michael Woerner Tel. 030 6392 1470
Jannick Weisshaupt Tel. 030 6392 1471
Dr. Arnaud Rouzée Tel. 030 6392 1240
Prof. Dr. Marc Vrakking Tel. 030 6392 1200
Prof. Dr. Thomas Elsaesser Tel. 030 6392 1400

Unified time and frequency picture of ultrafast atomic excitation in strong fields

5 January 2017

The insight that light sometimes needs to be treated as an electromagnetic wave and sometimes as a stream of energy quanta called photons is as old as quantum physics. In the case of the interaction of strong laser fields with atoms, the dualism finds its analogue in the intuitive pictures used to explain ionization and excitation: the multiphoton picture and the tunneling picture. In a combined experimental and theoretical study on ultrafast excitation of atoms in intense short pulse laser fields, scientists of the Max Born Institute succeeded in showing that the prevailing and seemingly disparate intuitive pictures usually used to describe the interaction of atoms with intense laser fields can be ascribed to a single nonlinear process. Moreover, they show how the two pictures can be united. The work appeared in the journal Physical Review Letters and has been chosen as an Editors' Suggestion for its particular importance, innovation and broad appeal. Besides the fundamental aspects, the work opens new pathways to determine laser intensities with high precision and to control coherent Rydberg population by the laser intensity.

Although the Keldysh parameter, introduced in the 1960s by the eponymous Russian physicist, clearly distinguishes the multiphoton picture and the tunneling picture, it has remained an open question, particularly in the field of strong field excitation, how to reconcile the two seemingly opposing approaches. In the multiphoton picture the photon character shines through as resonant enhancement in the excitation yield whenever an integer multiple of the photon energy matches the excitation energy of atomic states. However, the energy of atomic states is shifted upwards with increasing laser intensity. This results in resonant-like enhancements in the excitation yield, even at fixed laser frequency (photon energy). In fact, the enhancement occurs periodically, whenever the energy shift corresponds to an additional photon energy (channel closing).
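To put numbers on the two regimes, the following small sketch (my own illustration, not from the paper) evaluates the standard ponderomotive-energy and Keldysh-parameter formulas; the argon ionization potential is used purely as an example:

```python
# Illustrative sketch: the Keldysh parameter gamma = sqrt(Ip / (2*Up))
# separates the multiphoton (gamma >> 1) from the tunneling (gamma << 1)
# regime; Up is the ponderomotive (quiver) energy of a free electron.
def ponderomotive_energy_eV(intensity_W_cm2, wavelength_um):
    # Standard formula: Up[eV] ~= 9.33e-14 * I[W/cm^2] * lambda[um]^2
    return 9.33e-14 * intensity_W_cm2 * wavelength_um**2

def keldysh(Ip_eV, intensity_W_cm2, wavelength_um):
    Up = ponderomotive_energy_eV(intensity_W_cm2, wavelength_um)
    return (Ip_eV / (2 * Up)) ** 0.5

# Example: argon (Ip = 15.76 eV) in an 800 nm field at two intensities.
for I in (1e13, 1e15):
    print(f"I = {I:.0e} W/cm^2: Up = {ponderomotive_energy_eV(I, 0.8):6.2f} eV, "
          f"gamma = {keldysh(15.76, I, 0.8):.2f}")
# gamma ~ 3.6 at 1e13 W/cm^2 (multiphoton) versus ~0.36 at 1e15 W/cm^2
# (tunneling); the AC Stark shift of roughly Up is what drives the channel
# closings discussed above.
```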
In the tunneling picture the laser field is considered as an electromagnetic wave, where only the oscillating electric field is retained. Excitation can be viewed as a process in which the bound electron is initially liberated by tunneling when the laser field reaches a cycle maximum. In many cases the electron does not gain enough drift energy from the laser field to escape the Coulomb potential of the parent ion by the end of the laser pulse, which would lead to ionization of the atom. Instead, it remains bound in an excited Rydberg state. In the tunneling picture there is no room for resonances in the excitation, since tunneling proceeds in a quasi-static electric field, where the laser frequency is irrelevant.

In the study, the excitation yield of Ar and Ne atoms as a function of the laser intensity has been directly measured for the first time, covering both the multiphoton and tunneling regimes. In the multiphoton regime pronounced resonant enhancements in the yield have been observed, particularly in the vicinity of the channel closings, while in the tunneling regime no such resonances appeared. However, here excitation has been observed even in an intensity regime which lies above the threshold for expected complete ionization. The numerical solution of the time-dependent Schrödinger equation for the investigated atoms in a strong laser field provided excellent agreement of the theory with the experimental data in both regimes. A more detailed analysis revealed that both pictures represent complementary descriptions, in the time and frequency domains, of the same nonlinear process. If one considers excitation in the time domain, one can assume that electron wave packets are created periodically at the field cycle maxima. In the multiphoton regime it can be shown that the wave packets are created predominantly close to the maximum intensity of the pulse and thus interfere constructively only if the intensity is close to a channel closing. As a result, enhancements appear in the excitation spectrum at regular intervals, separated by the photon energy. In the tunneling regime the wave packets are also created periodically at the field cycle maxima, however predominantly at the rising edge of the laser pulse, which, in turn, leads to an irregular interference pattern and, consequently, to irregular variations in the excitation spectrum. These rapid variations are not resolved in the experiment, and the detected excitation spectrum is smooth.

Fig. 1: Yield of excited atoms as a function of the laser intensity. At a laser intensity of 200 TW/cm^2, in the vicinity of a six-photon channel closing, a strong resonant enhancement by a factor of 100 is visible. For the argon data, the theoretical curve is also displayed (red dashed curve), which is in excellent agreement with the experimental data.

Original publication: Phys. Rev. Lett. 118, 013003 (2017) doi:10.1103/PhysRevLett.118.013003 "Unified Time and Frequency Picture of Ultrafast Atomic Excitation in Strong Laser Fields" H. Zimmermann, S. Patchkovskii, M. Ivanov, and U. Eichmann

Dr. S. Patchkovskii Tel. 030 6392 1241
Prof. Dr. U. Eichmann Tel. 030 6392 1371

Amplification of relativistic electron pulses by direct laser field acceleration

5 January 2017

Controlled direct acceleration of electrons in very strong laser fields can offer a path towards ultra-compact accelerators.
Such a direct acceleration requires rectification and decoupling of the oscillating electromagnetic laser field from the electrons in a suitable way. Researchers worldwide try to tackle this challenge. In experiments at the Max Born Institute, direct laser acceleration of electrons could now be demonstrated and understood in detail theoretically. This concept is an important step towards the creation of relativistic and ultra-short electron pulses within very short acceleration distances below one millimeter. The resulting compact electron and related x-ray sources have a broad spectrum of applications in spectroscopy, structural analysis, the biomedical sciences, and nanotechnology.

The way electrons can be accelerated up to relativistic kinetic energies in strong laser fields is a fundamental issue in the physics of light-matter interaction. Although the electromagnetic fields of a laser pulse force a free electron previously at rest into oscillations with extremely high velocities, these oscillations cease again when the light pulse has passed by. A net energy transfer by such a direct acceleration of a charged particle in the laser field cannot take place. This fundamental principle - often discussed in physics exams - is valid for certain boundary conditions on the spatial extent and intensity of the laser pulse. Only for particular, different boundary conditions can electrons indeed receive a net energy transfer via acceleration from the strong laser field. These conditions can be set e.g. by focusing of the laser pulse or by the presence of strong electrostatic fields in a plasma. Worldwide, scientists are looking for solutions for how fast electrons can be extracted from extremely strong laser fields and how one can obtain short electron pulses with a high charge density via ultra-short laser pulses.

In light fields of relativistic intensity (I > 10^18 W/cm^2) electrons oscillate with velocities close to the speed of light. The corresponding kinetic energy reaches values from MeV to GeV (at I > 10^22 W/cm^2). Strong light fields are realized by focusing ultra-short laser pulses with high energy down to areas of a few micrometers. The resulting spatial intensity distribution already enables the acceleration of electrons up to high kinetic energies. This process is known as ponderomotive acceleration. It is an essential process for the interaction between strong light fields and matter. Various theoretical studies, however, have predicted that the number of electrons and their kinetic energy can be further significantly increased by a direct acceleration in the laser field, but only if the electron-light interaction is interrupted in a properly tailored way. These considerations were the starting point for the experiments by Julia Braenzel and her colleagues at the Max Born Institute.

In the experiments at MBI, the electrons were decoupled from the light pulse at a particular moment in time, using a separator foil that is opaque for the laser light but can transmit fast electrons. We could show that this method leads to an increase of the number of electrons with high velocities. At first, a 70 TW Ti:Sapphire laser pulse (2 J @ 35 fs) irradiates a 30-100 nm thin target foil consisting of a PVF polymer. In the laser propagation direction, about 10^9 electrons are accelerated up to several MeV energy via the ponderomotive force. During this interaction the foil is almost fully ionized and transformed into a plasma.
For sufficiently thin target foil thicknesses below 100 nm, a fraction of the incident laser light can be transmitted through the plasma. The transmitted light starts to overtake the electrons already emitted in this direction. This corresponds to a quasi-intrinsically synchronized injection of slow electrons into the transmitted, but still relativistic, laser field (< 8 × 10^18 W/cm^2). If a second thin separator foil is placed at the correct distance behind the first one, an amplification of the electron signal for a particular energy interval is observed. Fig. 1a) shows a schematic of the temporal evolution in the experiment, and Fig. 1b) presents a direct comparison of the detected electron spectral distribution for a single foil and a double foil configuration, where the second foil acts as a separator. This foil is opaque for the laser light but is transparent for the fast electrons and hence enables a decoupling of both. The time at which the interaction between electrons and transmitted light is interrupted depends on the distance between the two foils. The experiments carried out in the group of Matthias Schnürer demonstrate that an amplification of the electron signal can be obtained and is maximized for a particular distance. The amplification vanishes for very large distances. Numerous measurements as well as numerical simulations confirmed the hypothesis that electrons with high kinetic energy can indeed be extracted out of the light field if they are decoupled appropriately. If the separator foil is located at an optimized position, slow electrons with kinetic energies below 100 keV are accelerated to about ten times higher kinetic energies. This effect leads to a concentration of electrons in a narrow energy interval. In contrast to experiments using the different mechanism of laser wakefield acceleration, where the production of GeV electrons has already been demonstrated, the direct laser acceleration demonstrated here can be scaled up to high laser intensities and high plasma densities. Beyond the fundamental insight into laser-matter interactions, the direct laser acceleration demonstrated in this work holds promise for the future realization of compact sources of relativistic electrons.

Fig. 1a: Schematic of the direct electron acceleration in a laser field and its realization in the experiment.

Fig. 1b: Detected electrons in the laser propagation direction from a single (F1) and double foil (F1F2) target configuration, where the second foil acts as a separator. The plastic foils used were about F1 = 35 nm and F2 = 85 nm thick. N_e values represent the integrated electron numbers for the whole detection range (0.2-7.5 MeV) with respect to the spectrometer aperture.

Original publication: Phys. Rev. Lett. 118, 014801 (2017) doi:10.1103/PhysRevLett.118.014801 "Amplification of Relativistic Electron Bunches by Acceleration in Laser Fields" J. Braenzel, A. A. Andreev, F. Abicht, L. Ehrentraut, K. Platonov, and M. Schnürer

Julia Bränzel Tel. 030 6392 1338
Dr. Matthias Schnuerer Tel. 030 6392 1315
6 The hydrogen atom

Using the time-independent Schrödinger equation with the potential energy term V = −e²/r, where e is the absolute value of the charge both of the electron and of the proton, we again find that bound states exist only for specific values of the total energy E. These are exactly the values that Bohr had obtained via his 1913 postulate. Just as factorizing ψ(x,y,z,t) into Ψ(x,y,z) and e^(−iEt/ℏ) led to a time-independent Schrödinger equation and a discrete set of values En, so factorizing Ψ(r,φ,θ) — which is Ψ(x,y,z) in polar coordinates — into ψ(r,θ) and e^(iLzφ/ℏ) leads to a φ-independent Schrödinger equation and a discrete set of values Lz.

Figure 2.6.1 Polar coordinates

The φ-independent Schrödinger equation contains a real parameter whose possible values are given by L(L+1)ℏ², where L is an integer satisfying the condition 0 ≤ L ≤ n−1. The possible values of Lz are integers satisfying the inequality |Lz| ≤ L. The possible combinations of the quantum numbers n, L, and Lz are thus

n = 1, L = 0, Lz = 0
n = 2, L = 0, Lz = 0
n = 2, L = 1, Lz = −1, 0, +1
n = 3, L = 0, Lz = 0
n = 3, L = 1, Lz = −1, 0, +1
n = 3, L = 2, Lz = −2, −1, 0, +1, +2

All of these states are stationary. n is known as the principal quantum number, L as the angular momentum (or orbital, or azimuthal) quantum number, and Lz as the magnetic quantum number (hence the letter m is often used instead). States with L = 0, 1, 2, 3 were originally labeled s, p, d, f — for "sharp," "principal," "diffuse," and "fundamental," respectively. The purpose of these letters was to characterize spectral lines. States with higher L follow the alphabet (g, h, …).

Figure 2.6.2 maps the radial dependencies of the first three s states, which are spherically symmetric. The plots can be identified by the number N of their nodes (N = n−1).

Figure 2.6.2 Radial dependencies of the states with quantum numbers 1s, 2s, and 3s.

Figures 2.6.3 and 2.6.4 plot the position probability distributions defined by some non-spherical stationary states with m = 0. Figure 2.6.3 emphasizes the fuzziness of these orbitals at the expense of their rotational symmetry. By plotting surfaces of constant probability, Figure 2.6.4 emphasizes their 3-dimensional shape at the expense of their fuzziness.

Figure 2.6.3 The position probability distributions associated with the following orbitals. First row: 2p0, 3p0, 3d0. Second row: 4p0, 4d0, 4f0. Third row: 5d0, 5f0, 5g0. Imaging method: ray-traced. Not to scale.

Figure 2.6.4 The position probability distributions associated with the same orbitals as in Figure 2.6.3. Imaging method: surface of constant probability. Not to scale.

It must be stressed that what we see in these images is neither the nucleus nor the electron but the fuzzy position of the electron relative to the nucleus. Nor do we see this fuzzy position "as it is." What we see is the plot of a position probability distribution. This is defined by outcomes of three measurements, determining the values of n, L, and Lz, and it defines a fuzzy position by determining the probabilities of the possible outcomes of a subsequent measurement of the position of the electron relative to the nucleus.

Here is how such a probability can be calculated. Imagine a small region V of space in the vicinity of the nucleus — so small that the probability density ρ (probability per unit volume) inside it can be considered constant.
The probability of finding the electron inside V (if the appropriate measurement is made) is the product ρV. If the gray inside V is a lighter shade, this probability is lower; if it's a darker shade, this probability is higher. To calculate the probability associated with a larger region, divide it into sufficiently many sufficiently small regions and add up the probabilities associated with them.

Since the dependence on φ is contained in the factor e^(iLzφ/ℏ), it cannot be seen in plots of |Ψ(r,φ,θ)|². To make this dependence visible, it is customary to replace the complex number e^(iLzφ/ℏ) by its real part, as has been done in Figure 2.6.5.

Figure 2.6.5 Orbitals with non-zero m. First row: 4f1, 5f1. Second row: 5f2, 5f3. Third row: 5g1, 5g3. Fourth row: 5g3, 5g4. Not to scale.
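As a minimal numerical illustration of this recipe (my own sketch, using the standard 1s wave function in units of the Bohr radius), one can integrate the radial probability density to get the probability of finding the electron within a given distance of the nucleus:

```python
# Sketch of the "divide into small regions and add up" recipe for the
# hydrogen 1s state, in units of the Bohr radius a0: the radial probability
# density is p(r) = 4 r^2 exp(-2r), integrated numerically over r <= R.
import numpy as np

def prob_within(R, steps=100_000):
    r = np.linspace(0.0, R, steps)
    p = 4.0 * r**2 * np.exp(-2.0 * r)   # |psi_1s|^2 times the shell area 4*pi*r^2
    return np.trapz(p, r)

R = 1.0                                  # one Bohr radius
exact = 1.0 - 5.0 * np.exp(-2.0)         # closed form for the 1s state
print(f"numerical: {prob_within(R):.6f}, exact: {exact:.6f}")  # both ~0.323
```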
One of my favourite unsolved problems in harmonic analysis is the restriction problem. This problem, first posed explicitly by Elias Stein, can take many equivalent forms, but one of them is this: one starts with a smooth compact hypersurface {S} (possibly with boundary) in {{\bf R}^d}, such as the unit sphere {S = S^2} in {{\bf R}^3}, and equips it with surface measure {d\sigma}. One then takes a bounded measurable function {f \in L^\infty(S,d\sigma)} on this surface, and then computes the (inverse) Fourier transform \displaystyle \widehat{fd\sigma}(x) = \int_S e^{2\pi i x \cdot \omega} f(\omega) d\sigma(\omega) of the measure {fd\sigma}. As {f} is bounded and {d\sigma} is a finite measure, this is a bounded function on {{\bf R}^d}; from the dominated convergence theorem, it is also continuous. The restriction problem asks whether this Fourier transform also decays in space, and specifically whether {\widehat{fd\sigma}} lies in {L^q({\bf R}^d)} for some {q < \infty}. (This is a natural space to control decay because it is translation invariant, which is compatible on the frequency space side with the modulation invariance of {L^\infty(S,d\sigma)}.) By the closed graph theorem, this is the case if and only if there is an estimate of the form \displaystyle \| \widehat{f d\sigma} \|_{L^q({\bf R}^d)} \leq C_{q,d,S} \|f\|_{L^\infty(S,d\sigma)} \ \ \ \ \ (1) for some constant {C_{q,d,S}} that can depend on {q,d,S} but not on {f}. By a limiting argument, to provide such an estimate, it suffices to prove such an estimate under the additional assumption that {f} is smooth. Strictly speaking, the above problem should be called the extension problem, but it is dual to the original formulation of the restriction problem, which asks to find those exponents {1 \leq q' \leq \infty} for which the Fourier transform of an {L^{q'}({\bf R}^d)} function {g} can be meaningfully restricted to a hypersurface {S}, in the sense that the map {g \mapsto \hat g|_{S}} can be continuously defined from {L^{q'}({\bf R}^d)} to, say, {L^1(S,d\sigma)}. A duality argument shows that the exponents {q'} for which the restriction property holds are the dual exponents to the exponents {q} for which the extension problem holds. There are several motivations for studying the restriction problem. The problem is connected to the classical question of determining the nature of the convergence of various Fourier summation methods (and specifically, Bochner-Riesz summation); very roughly speaking, if one wishes to perform a partial Fourier transform by restricting the frequencies (possibly using a well-chosen weight) to some region {B} (such as a ball), then one expects this operation to be well behaved if the boundary {\partial B} of this region has good restriction (or extension) properties. More generally, the restriction problem for a surface {S} is connected to the behaviour of Fourier multipliers whose symbols are singular at {S}. The problem is also connected to the analysis of various linear PDE such as the Helmholtz equation, Schrödinger equation, wave equation, and the (linearised) Korteweg-de Vries equation, because solutions to such equations can be expressed via the Fourier transform in the form {\widehat{fd\sigma}} for various surfaces {S} (the sphere, paraboloid, light cone, and cubic for the Helmholtz, Schrödinger, wave, and linearised Korteweg-de Vries equation respectively).
A particular family of restriction-type theorems for such surfaces, known as Strichartz estimates, play a foundational role in the nonlinear perturbations of these linear equations (e.g. the nonlinear Schrödinger equation, the nonlinear wave equation, and the Korteweg-de Vries equation). Last, but not least, there is a fundamental connection between the restriction problem and the Kakeya problem, which roughly speaking concerns how tubes that point in different directions can overlap. Indeed, by superimposing special functions of the type {\widehat{fd\sigma}}, known as wave packets, and which are concentrated on tubes in various directions, one can “encode” the Kakeya problem inside the restriction problem; in particular, the conjectured solution to the restriction problem implies the conjectured solution to the Kakeya problem. Finally, the restriction problem serves as a simplified toy model for studying discrete exponential sums whose coefficients do not have a well controlled phase; this perspective was, for instance, used by Ben Green when he established Roth’s theorem in the primes by Fourier-analytic methods, which was in turn one of the main inspirations for our later work establishing arbitrarily long progressions in the primes, although we ended up using ergodic-theoretic arguments instead of Fourier-analytic ones and so did not directly use restriction theory in that paper. The estimate (1) is trivial for {q=\infty} and becomes harder for smaller {q}. The geometry, and more precisely the curvature, of the surface {S} plays a key role: if {S} contains a portion which is completely flat, then it is not difficult to concoct an {f} for which {\widehat{f d\sigma}} fails to decay in the normal direction to this flat portion, and so there are no restriction estimates for any finite {q}. Conversely, if {S} is not infinitely flat at any point, then from the method of stationary phase, the Fourier transform {\widehat{d\sigma}} can be shown to decay at a power rate at infinity, and this together with a standard method known as the {TT^*} argument can be used to give non-trivial restriction estimates for finite {q}. However, these arguments fall somewhat short of obtaining the best possible exponents {q}. For instance, in the case of the sphere {S = S^{d-1} \subset {\bf R}^d}, the Fourier transform {\widehat{d\sigma}(x)} is known to decay at the rate {O(|x|^{-(d-1)/2})} and no better as {|x| \rightarrow \infty}, which shows that the condition {q > \frac{2d}{d-1}} is necessary in order for (1) to hold for this surface. The restriction conjecture for {S^{d-1}} asserts that this necessary condition is also sufficient. However, the {TT^*}-based argument gives only the Tomas-Stein theorem, which in this context gives (1) in the weaker range {q \geq \frac{2(d+1)}{d-1}}. (On the other hand, by the nature of the {TT^*} method, the Tomas-Stein theorem does allow the {L^\infty(S,d\sigma)} norm on the right-hand side to be relaxed to {L^2(S,d\sigma)}, at which point the Tomas-Stein exponent {\frac{2(d+1)}{d-1}} becomes best possible. The fact that the Tomas-Stein theorem has an {L^2} norm on the right-hand side is particularly valuable for applications to PDE, leading in particular to the Strichartz estimates mentioned earlier.) Over the last two decades, there was a fair amount of work in pushing past the Tomas-Stein barrier. For sake of concreteness let us work just with the restriction problem for the unit sphere {S^2} in {{\bf R}^3}.
Here, the restriction conjecture asserts that (1) holds for all {q > 3}, while the Tomas-Stein theorem gives only {q \geq 4}. By combining a multiscale analysis approach with some new progress on the Kakeya conjecture, Bourgain was able to obtain the first improvement on this range, establishing the restriction conjecture for {q > 4 - \frac{2}{15}}. The methods were steadily refined over the years; until recently, the best result (due to myself) was that the conjecture held for all {q > 3 \frac{1}{3}}, which proceeded by analysing a “bilinear {L^2}” variant of the problem studied previously by Bourgain and by Wolff. This is essentially the limit of that method; the relevant bilinear {L^2} estimate fails for {q < 3 + \frac{1}{3}}. (This estimate was recently established at the endpoint {q=3+\frac{1}{3}} by Jungjin Lee (personal communication), though this does not quite improve the range of exponents in (1) due to a logarithmic inefficiency in converting the bilinear estimate to a linear one.) On the other hand, the full range {q>3} of exponents in (1) was obtained by Bennett, Carbery, and myself (with an alternate proof later given by Guth), but only under the additional assumption of non-coplanar interactions. In three dimensions, this assumption was enforced by replacing (1) with the weaker trilinear (and localised) variant \displaystyle \| \widehat{f_1 d\sigma_1} \widehat{f_2 d\sigma_2} \widehat{f_3 d\sigma_3} \|_{L^{q/3}(B(0,R))} \leq C_{q,d,S_1,S_2,S_3,\epsilon} R^\epsilon \|f_1\|_{L^\infty(S_1,d\sigma_1)} \|f_2\|_{L^\infty(S_2,d\sigma_2)} \|f_3\|_{L^\infty(S_3,d\sigma_3)} \ \ \ \ \ (2) where {\epsilon>0} and {R \geq 1} are arbitrary, {B(0,R)} is the ball of radius {R} in {{\bf R}^3}, and {S_1,S_2,S_3} are compact portions of {S} whose unit normals {n_1(\cdot),n_2(\cdot),n_3(\cdot)} are never coplanar, thus there is a uniform lower bound \displaystyle |n_1(\omega_1) \wedge n_2(\omega_2) \wedge n_3(\omega_3)| \geq c for some {c>0} and all {\omega_1 \in S_1, \omega_2 \in S_2, \omega_3 \in S_3}. If it were not for this non-coplanarity restriction, (2) would be equivalent to (1) (by setting {S_1=S_2=S_3} and {f_1=f_2=f_3}, with the converse implication coming from Hölder’s inequality; the {R^\epsilon} loss can be removed by a lemma from a paper of mine). At the time we wrote this paper, we tried fairly hard to try to remove this non-coplanarity restriction in order to recover progress on the original restriction conjecture, but without much success. A few weeks ago, though, Bourgain and Guth found a new way to use multiscale analysis to “interpolate” between the result of Bennett, Carbery and myself (that has optimal exponents, but requires non-coplanar interactions), with a more classical square function estimate of Córdoba that handles the coplanar case. A direct application of this interpolation method already ties with the previous best known result in three dimensions (i.e. that (1) holds for {q > 3 \frac{1}{3}}). But it also allows for the insertion of additional input, such as the best Kakeya estimate currently known in three dimensions, due to Wolff. This enlarges the range slightly to {q > 3.3}. The method also can extend to variable-coefficient settings, and in some of these cases (where there is so much “compression” going on that no additional Kakeya estimates are available) the estimates are best possible.
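As an aside, the stationary-phase decay rate {O(|x|^{-(d-1)/2})} mentioned earlier can be checked numerically in the simplest case {d=2}, where the Fourier transform of arc-length measure on the unit circle is the Bessel function {2\pi J_0(2\pi |x|)}; the sketch below is my own illustration, not part of the Bourgain-Guth argument:

```python
# Numerical sanity check: for the unit circle S^1 in R^2, the extension of the
# constant function f = 1 is sigma-hat(x) = 2*pi*J_0(2*pi*|x|), since
# int_0^{2pi} exp(i*a*cos(theta)) dtheta = 2*pi*J_0(a). J_0 decays like
# t^{-1/2}, matching the claimed rate |x|^{-(d-1)/2} with d = 2.
import numpy as np
from scipy.special import j0

for r in (10.0, 100.0, 1000.0):
    val = 2 * np.pi * j0(2 * np.pi * r)
    # Envelope from the J_0 asymptotics: |J_0(t)| <~ sqrt(2/(pi*t))
    envelope = 2 * np.pi * np.sqrt(2 / (np.pi * 2 * np.pi * r))
    print(f"|x| = {r:6.0f}: |sigma-hat| = {abs(val):.4e}, envelope ~ {envelope:.4e}")
# The envelope shrinks by sqrt(10) per decade in |x|, i.e. |x|^{-1/2} decay.
```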
As is often the case in this field, there is a lot of technical book-keeping and juggling of parameters in the formal arguments of Bourgain and Guth, but the main ideas and “numerology” can be expressed fairly readily. (In mathematics, numerology refers to the empirically observed relationships between various key exponents and other numerical parameters; in many cases, one can use shortcuts such as dimensional analysis or informal heuristics to compute these exponents long before the formal argument is completely in place.) Below the fold, I would like to record this numerology for the simplest of the Bourgain-Guth arguments, namely a reproof of (1) for {q > 3 \frac{1}{3}}. This is primarily for my own benefit, but may be of interest to other experts in this particular topic. (See also my 2003 lecture notes on the restriction conjecture.) In order to focus on the ideas in the paper (rather than on the technical details), I will adopt an informal, heuristic approach, for instance by interpreting the uncertainty principle and the pigeonhole principle rather liberally, and by focusing on main terms in a decomposition and ignoring secondary terms. I will also be somewhat vague with regard to asymptotic notation such as {\ll}. Making the arguments rigorous requires a certain amount of standard but tedious effort (and is one of the main reasons why the Bourgain-Guth paper is as long as it is), which I will not focus on here. Read the rest of this entry » I’ve just uploaded to the arXiv my paper “Outliers in the spectrum of iid matrices with bounded rank perturbations“, submitted to Probability Theory and Related Fields. This paper is concerned with outliers to the circular law for iid random matrices. Recall that if {X_n} is an {n \times n} matrix whose entries are iid complex random variables with mean zero and variance one, then the {n} complex eigenvalues of the normalised matrix {\frac{1}{\sqrt{n}} X_n} will almost surely be distributed according to the circular law distribution {\frac{1}{\pi} 1_{|z| \leq 1} d^2 z} in the limit {n \rightarrow \infty}. (See these lecture notes for further discussion of this law.) The circular law is also stable under bounded rank perturbations: if {C_n} is a deterministic rank {O(1)} matrix of polynomial size (i.e. of operator norm {O(n^{O(1)})}), then the circular law also holds for {\frac{1}{\sqrt{n}} X_n + C_n} (this is proven in a paper of myself, Van Vu, and Manjunath Krishnapur). In particular, the bulk of the eigenvalues (i.e. {(1-o(1)) n} of the {n} eigenvalues) will lie inside the unit disk {\{ z \in {\bf C}: |z| \leq 1 \}}. However, this leaves open the possibility for one or more outlier eigenvalues that lie significantly outside the unit disk; the arguments in the paper cited above give some upper bound on the number of such eigenvalues (of the form {O(n^{1-c})} for some absolute constant {c>0}) but do not exclude them entirely. And indeed, numerical data shows that such outliers can exist for certain bounded rank perturbations. In this paper, some results are given as to when outliers exist, and how they are distributed. The easiest case is of course when there is no bounded rank perturbation: {C_n=0}. In that case, an old result of Bai and Yin and of Geman shows that the spectral radius of {\frac{1}{\sqrt{n}} X_n} is almost surely {1+o(1)}, thus all eigenvalues will be contained in a {o(1)} neighbourhood of the unit disk, and so there are no significant outliers. The proof is based on the moment method.
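Before turning to the perturbed case, here is a quick numerical illustration (my own sketch, not from the paper): it checks the Bai-Yin/Geman prediction for the spectral radius, previews the rank-one outlier phenomenon described next, and verifies the determinant identity that does the heavy lifting below:

```python
# Illustrative sketch with gaussian entries (which satisfy the moment
# hypotheses discussed in the text).
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 1.5
X = rng.standard_normal((n, n)) / np.sqrt(n)
print("spectral radius of X_n/sqrt(n):", np.abs(np.linalg.eigvals(X)).max())  # ~1

C = np.zeros((n, n)); C[0, 0] = lam          # rank one, with eigenvalue lam = 1.5
ev = np.linalg.eigvals(X + C)
print("eigenvalues with |z| > 1.1:", ev[np.abs(ev) > 1.1])  # expect one, near 1.5

# Sanity check of the identity det(1 + AB) = det(1 + BA) for rectangular A, B:
A = rng.standard_normal((5, 2)); B = rng.standard_normal((2, 5))
assert np.isclose(np.linalg.det(np.eye(5) + A @ B),
                  np.linalg.det(np.eye(2) + B @ A))
```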
Now we consider a bounded rank perturbation {C_n} which is nonzero, but which has a bounded operator norm: {\|C_n\|_{op} = O(1)}. In this case, it turns out that the matrix {\frac{1}{\sqrt{n}} X_n + C_n} will have outliers if the deterministic component {C_n} has outliers. More specifically (and under the technical hypothesis that the entries of {X_n} have bounded fourth moment), if {\lambda} is an eigenvalue of {C_n} with {|\lambda| > 1}, then (for {n} large enough), {\frac{1}{\sqrt{n}} X_n + C_n} will almost surely have an eigenvalue at {\lambda+o(1)}, and furthermore these will be the only outlier eigenvalues of {\frac{1}{\sqrt{n}} X_n + C_n}. Thus, for instance, adding a bounded nilpotent low rank matrix to {\frac{1}{\sqrt{n}} X_n} will not create any outliers, because the nilpotent matrix only has eigenvalues at zero. On the other hand, adding a bounded Hermitian low rank matrix will create outliers as soon as this matrix has an operator norm greater than {1}. When I first thought about this problem (which was communicated to me by Larry Abbott), I believed that it was quite difficult, because I knew that the eigenvalues of non-Hermitian matrices were quite unstable with respect to general perturbations (as discussed in this previous blog post), and that there were no interlacing inequalities in this case to control bounded rank perturbations (as discussed in this post). However, as it turns out I had arrived at the wrong conclusion, especially in the exterior of the unit disk, in which the resolvent is actually well controlled and so there is no pseudospectrum present to cause instability. This was pointed out to me by Alice Guionnet at an AIM workshop last week, after I had posed the above question during an open problems session. Furthermore, at the same workshop, Percy Deift emphasised the point that the basic determinantal identity \displaystyle \det(1 + AB) = \det(1 + BA) \ \ \ \ \ (1) for {n \times k} matrices {A} and {k \times n} matrices {B} was a particularly useful identity in random matrix theory, as it converted problems about large ({n \times n}) matrices into problems about small ({k \times k}) matrices, which was particularly convenient in the regime when {n \rightarrow \infty} and {k} was fixed. (Percy was speaking in the context of invariant ensembles, but the point is in fact more general than this.) From this, it turned out to be a relatively simple matter to transform what appeared to be an intractable {n \times n} matrix problem into quite a well-behaved {k \times k} matrix problem for bounded {k}. Specifically, suppose that {C_n} had rank {k}, so that one can factor {C_n = A_n B_n} for some (deterministic) {n \times k} matrix {A_n} and {k \times n} matrix {B_n}. To find an eigenvalue {z} of {\frac{1}{\sqrt{n}} X_n + C_n}, one has to solve the characteristic polynomial equation \displaystyle \det( \frac{1}{\sqrt{n}} X_n + A_n B_n - z ) = 0. This is an {n \times n} determinantal equation, which looks difficult to control analytically. But we can manipulate it using (1). If we make the assumption that {z} is outside the spectrum of {\frac{1}{\sqrt{n}} X_n} (which we can do as long as {z} is well away from the unit disk, as the unperturbed matrix {\frac{1}{\sqrt{n}} X_n} has no outliers), we can divide by {\frac{1}{\sqrt{n}} X_n - z} to arrive at \displaystyle \det( 1 + (\frac{1}{\sqrt{n}} X_n-z)^{-1} A_n B_n ) = 0. Now we apply the crucial identity (1) to rearrange this as \displaystyle \det( 1 + B_n (\frac{1}{\sqrt{n}} X_n-z)^{-1} A_n ) = 0.
The crucial point is that this is now an equation involving only a {k \times k} determinant, rather than an {n \times n} one, and is thus much easier to solve. The situation is particularly simple for rank one perturbations \displaystyle \frac{1}{\sqrt{n}} X_n + u_n v_n^* in which case the eigenvalue equation is now just a scalar equation \displaystyle 1 + \langle (\frac{1}{\sqrt{n}} X_n-z)^{-1} u_n, v_n \rangle = 0 that involves what is basically a single coefficient of the resolvent {(\frac{1}{\sqrt{n}} X_n-z)^{-1}}. (It is also an instructive exercise to derive this eigenvalue equation directly, rather than through (1).) There is by now a very well-developed theory for how to control such coefficients (particularly for {z} in the exterior of the unit disk, in which case such basic tools as Neumann series work just fine); in particular, one has precise enough control on these coefficients to obtain the result on outliers mentioned above. The same method can handle some other bounded rank perturbations. One basic example comes from looking at iid matrices with a non-zero mean {\mu} and variance {1}; this can be modeled by {\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \phi_n^*} where {\phi_n} is the unit vector {\phi_n := \frac{1}{\sqrt{n}} (1,\ldots,1)^*}. Here, the bounded rank perturbation {\mu \sqrt{n} \phi_n \phi_n^*} has a large operator norm (equal to {|\mu| \sqrt{n}}), so the previous result does not directly apply. Nevertheless, the self-adjoint nature of the perturbation has a stabilising effect, and I was able to show that there is still only one outlier, and that it is at the expected location of {\mu \sqrt{n}+o(1)}. If one moves away from the case of self-adjoint perturbations, though, the situation changes. Let us now consider a matrix of the form {\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*}, where {\psi_n} is a randomised version of {\phi_n}, e.g. {\psi_n := \frac{1}{\sqrt{n}} (\pm 1, \ldots, \pm 1)^*}, where the {\pm 1} are iid Bernoulli signs; such models were proposed recently by Rajan and Abbott as a model for neural networks in which some nodes are excitatory (and give columns with positive mean) and some are inhibitory (leading to columns with negative mean). Despite the superficial similarity with the previous example, the outlier behaviour is now quite different. Instead of having one extremely large outlier (of size {\sim\sqrt{n}}) at an essentially deterministic location, we now have a number of eigenvalues of size {O(1)}, scattered according to a random process. Indeed, (in the case when the entries of {X_n} were real and bounded) I was able to show that the outlier point process converged (in the sense of converging {k}-point correlation functions) to the zeroes of a random Laurent series \displaystyle g(z) = 1 - \mu \sum_{j=0}^\infty \frac{g_j}{z^{j+1}} where {g_0,g_1,g_2,\ldots \equiv N(0,1)} are iid real Gaussians. This is basically because the coefficients of the resolvent {(\frac{1}{\sqrt{n}} X_n - zI)^{-1}} have a Neumann series whose coefficients enjoy a central limit theorem. On the other hand, as already observed numerically (and rigorously, in the gaussian case) by Rajan and Abbott, if one projects such matrices to have row sum zero, then the outliers all disappear. This can be explained by another appeal to (1); this projection amounts to right-multiplying {\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*} by the projection matrix {P} to the zero-sum vectors. 
But by (1), the non-zero eigenvalues of the resulting matrix {(\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*)P} are the same as those for {P (\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*)}. Since {P} annihilates {\phi_n}, we thus see that in this case the bounded rank perturbation plays no role, and the question reduces to obtaining a circular law with no outliers for {P \frac{1}{\sqrt{n}} X_n}. As it turns out, this can be done by invoking the machinery of Van Vu and myself that we used to prove the circular law for various random matrix models. This week I am at the American Institute of Mathematics, as an organiser on a workshop on the universality phenomenon in random matrices. There have been a number of interesting discussions so far in this workshop. Percy Deift, in a lecture on universality for invariant ensembles, gave some applications of what he only half-jokingly termed “the most important identity in mathematics”, namely the formula \displaystyle \hbox{det}( 1 + AB ) = \hbox{det}(1 + BA) whenever {A, B} are {n \times k} and {k \times n} matrices respectively (or more generally, {A} and {B} could be linear operators with sufficiently good spectral properties that make both sides equal). Note that the left-hand side is an {n \times n} determinant, while the right-hand side is a {k \times k} determinant; this formula is particularly useful when computing determinants of large matrices (or of operators), as one can often use it to transform such determinants into much smaller determinants. In particular, the asymptotic behaviour of {n \times n} determinants as {n \rightarrow \infty} can be converted via this formula to determinants of a fixed size (independent of {n}), which is often a more favourable situation to analyse. Unsurprisingly, this trick is particularly useful for understanding the asymptotic behaviour of determinantal processes. There are many ways to prove the identity. One is to observe first that when {A, B} are invertible square matrices of the same size, {1+BA} and {1+AB} are conjugate to each other and thus clearly have the same determinant; a density argument then removes the invertibility hypothesis, and a padding-by-zeroes argument then extends the square case to the rectangular case. Another is to proceed via the spectral theorem, noting that {AB} and {BA} have the same non-zero eigenvalues. By rescaling, one obtains the variant identity \displaystyle \hbox{det}( z + AB ) = z^{n-k} \hbox{det}(z + BA) which essentially relates the characteristic polynomial of {AB} with that of {BA}. When {n=k}, a comparison of coefficients already gives important basic identities such as {\hbox{tr}(AB) = \hbox{tr}(BA)} and {\hbox{det}(AB) = \hbox{det}(BA)}; when {n} is not equal to {k}, an inspection of the {z^{n-k}} coefficient similarly gives the Cauchy-Binet formula (which, incidentally, is also useful when performing computations on determinantal processes). Thanks to this formula (and with a crucial insight of Alice Guionnet), I was able to solve a problem (on outliers for the circular law) that I had in the back of my mind for a few months, and initially posed to me by Larry Abbott; I hope to talk more about this in a future post. Today, though, I wish to talk about another piece of mathematics that emerged from an afternoon of free-form discussion that we managed to schedule within the AIM workshop.
Specifically, we hammered out a heuristic model of the mesoscopic structure of the eigenvalues {\lambda_1 \leq \ldots \leq \lambda_n} of the {n \times n} Gaussian Unitary Ensemble (GUE), where {n} is a large integer. As is well known, the probability density of these eigenvalues is given by the Ginibre distribution \displaystyle \frac{1}{Z_n} e^{-H(\lambda)}\ d\lambda where {d\lambda = d\lambda_1 \ldots d\lambda_n} is Lebesgue measure on the Weyl chamber {\{ (\lambda_1,\ldots,\lambda_n) \in {\bf R}^n: \lambda_1 \leq \ldots \leq \lambda_n \}}, {Z_n} is a constant, and the Hamiltonian {H} is given by the formula \displaystyle H(\lambda_1,\ldots,\lambda_n) := \sum_{j=1}^n \frac{\lambda_j^2}{2} - 2 \sum_{1 \leq i < j \leq n} \log |\lambda_i-\lambda_j|. At the macroscopic scale of {\sqrt{n}}, the eigenvalues {\lambda_j} are distributed according to the Wigner semicircle law \displaystyle \rho_{sc}(x)\ dx, \quad \rho_{sc}(x) := \frac{1}{2\pi} (4-x^2)_+^{1/2}. Indeed, if one defines the classical location {\gamma_i^{cl}} of the {i^{th}} eigenvalue to be the unique solution in {[-2\sqrt{n}, 2\sqrt{n}]} to the equation \displaystyle \int_{-2}^{\gamma_i^{cl}/\sqrt{n}} \rho_{sc}(x)\ dx = \frac{i}{n} then it is known that the random variable {\lambda_i} is quite close to {\gamma_i^{cl}}. Indeed, a result of Gustavsson shows that, in the bulk region when {\epsilon n < i < (1-\epsilon) n} for some fixed {\epsilon > 0}, {\lambda_i} is distributed asymptotically as a gaussian random variable with mean {\gamma_i^{cl}} and standard deviation {\sqrt{\frac{\log n}{\pi}} \times \frac{1}{\sqrt{n} \rho_{sc}(\gamma_i^{cl}/\sqrt{n})}}. Note that from the semicircular law, the factor {\frac{1}{\sqrt{n} \rho_{sc}(\gamma_i^{cl}/\sqrt{n})}} is the mean eigenvalue spacing. At the other extreme, at the microscopic scale of the mean eigenvalue spacing (which is comparable to {1/\sqrt{n}} in the bulk, but can be as large as {n^{-1/6}} at the edge), the eigenvalues are asymptotically distributed with respect to a special determinantal point process, namely the Dyson sine process in the bulk (and the Airy process on the edge), as discussed in this previous post. Here, I wish to discuss the mesoscopic structure of the eigenvalues, in which one involves scales that are intermediate between the microscopic scale {1/\sqrt{n}} and the macroscopic scale {\sqrt{n}}, for instance in correlating the eigenvalues {\lambda_i} and {\lambda_j} in the regime {|i-j| \sim n^\theta} for some {0 < \theta < 1}. Here, there is a surprising phenomenon; there is quite a long-range correlation between such eigenvalues. The result of Gustavsson shows that both {\lambda_i} and {\lambda_j} behave asymptotically like gaussian random variables, but a further result from the same paper shows that the correlation between these two random variables is asymptotic to {1-\theta} (in the bulk, at least); thus, for instance, adjacent eigenvalues {\lambda_{i+1}} and {\lambda_i} are almost perfectly correlated (which makes sense, as their spacing is much less than either of their standard deviations), but even very distant eigenvalues, such as {\lambda_{n/4}} and {\lambda_{3n/4}}, have a correlation comparable to {1/\log n}. One way to get a sense of this is to look at the trace \displaystyle \lambda_1 + \ldots + \lambda_n. This is also the sum of the diagonal entries of a GUE matrix, and is thus normally distributed with a variance of {n}. In contrast, each of the {\lambda_i} (in the bulk, at least) has a variance comparable to {\log n/n}.
In order for these two facts to be consistent, the average correlation between pairs of eigenvalues then has to be of the order of {1/\log n}. Below the fold, I give a heuristic way to see this correlation, based on Taylor expansion of the convex Hamiltonian {H(\lambda)} around the minimum {\gamma}, which gives a conceptual probabilistic model for the mesoscopic structure of the GUE eigenvalues. While this heuristic is in no way rigorous, it does seem to explain many of the features currently known or conjectured about GUE, and looks likely to extend also to other models. Read the rest of this entry » Tanja Eisner and I have just uploaded to the arXiv our paper “Large values of the Gowers-Host-Kra seminorms“, submitted to Journal d’Analyse Mathematique. This paper is concerned with the properties of three closely related families of (semi)norms, indexed by a positive integer {k}: • The Gowers uniformity norms {\|f\|_{U^k(G)}} of a (bounded, measurable, compactly supported) function {f: G \rightarrow {\bf C}} taking values on a locally compact abelian group {G}, equipped with a Haar measure {\mu}; • The Gowers uniformity norms {\|f\|_{U^k([N])}} of a function {f: [N] \rightarrow {\bf C}} on a discrete interval {\{1,\ldots,N\}}; and • The Gowers-Host-Kra seminorms {\|f\|_{U^k(X)}} of a function {f \in L^\infty(X)} on an ergodic measure-preserving system {X = (X,{\mathcal X},\mu,T)}. These norms have been discussed in depth in previous blog posts, so I will just quickly review the definition of the first norm here (the other two (semi)norms are defined similarly). The {U^k(G)} norm is defined recursively by setting \displaystyle \| f \|_{U^1(G)} := |\int_G f\ d\mu| \displaystyle \|f\|_{U^k(G)}^{2^k} := \int_G \| \Delta_h f \|_{U^{k-1}(G)}^{2^{k-1}}\ d\mu(h) where {\Delta_h f(x) := f(x+h) \overline{f(x)}}. Equivalently, one has \displaystyle \|f\|_{U^k(G)} := (\int_G \ldots \int_G \Delta_{h_1} \ldots \Delta_{h_k} f(x)\ d\mu(x) d\mu(h_1) \ldots d\mu(h_k))^{1/2^k}. Informally, the Gowers uniformity norm {\|f\|_{U^k(G)}} measures the extent to which (the phase of {f}) behaves like a polynomial of degree less than {k}. Indeed, if {\|f\|_{L^\infty(G)} \leq 1} and {G} is compact with normalised Haar measure {\mu(G)=1}, it is not difficult to show that {\|f\|_{U^k(G)}} is at most {1}, with equality if and only if {f} takes the form {f = e(P) := e^{2\pi iP}} almost everywhere, where {P: G \rightarrow {\bf R}/{\bf Z}} is a polynomial of degree less than {k} (which means that {\partial_{h_1} \ldots \partial_{h_k} P(x) = 0} for all {x,h_1,\ldots,h_k \in G}). Our first result is to show that this result is robust, uniformly over all choices of group {G}: Theorem 1 ({L^\infty}-near extremisers) Let {G} be a compact abelian group with normalised Haar measure {\mu(G)=1}, and let {f \in L^\infty(G)} be such that {\|f\|_{L^\infty(G)} \leq 1} and {\|f\|_{U^k(G)} \geq 1-\epsilon} for some {\epsilon > 0} and {k \geq 1}. Then there exists a polynomial {P: G \rightarrow {\bf R}/{\bf Z}} of degree at most {k-1} such that {\|f-e(P)\|_{L^1(G)} = o(1)}, where {o(1)} is bounded by a quantity {c_k(\epsilon)} that goes to zero as {\epsilon \rightarrow 0} for fixed {k}. The quantity {o(1)} can be described effectively (it is of polynomial size in {\epsilon}), but we did not seek to optimise it here. 
Theorem 1 was already known in the case of vector spaces {G = {\bf F}_p^n} over a fixed finite field {{\bf F}_p} (where it is essentially equivalent to the assertion that the property of being a polynomial of degree at most {k-1} is locally testable); the extension to general groups {G} turns out to be fairly routine. The basic idea is to use the recursive structure of the Gowers norms, which tells us in particular that if {\|f\|_{U^k(G)}} is close to one, then {\|\Delta_h f\|_{U^{k-1}(G)}} is close to one for most {h}, which by induction implies that {\Delta_h f} is close to {e(Q_h)} for some polynomials {Q_h} of degree at most {k-2} and for most {h}. (Actually, it is not difficult to use cocycle equations such as {\Delta_{h+k} f = \Delta_h f \times T^h \Delta_k f} (when {|f|=1}) to upgrade “for most {h}” to “for all {h}“.) To finish the job, one would like to express the {Q_h} as derivatives {Q_h = \partial_h P} of a polynomial {P} of degree at most {k-1}. This turns out to be equivalent to requiring that the {Q_h} obey the cocycle equation \displaystyle Q_{h+k} = Q_h + T^h Q_k where {T^h F(x) := F(x+h)} is the translate of {F} by {h}. (In the paper, the sign conventions are reversed, so that {T^h F(x) := F(x-h)}, in order to be compatible with ergodic theory notation, but this makes no substantial difference to the arguments or results.) However, one does not quite get this right away; instead, by using some separation properties of polynomials, one can show the weaker statement that \displaystyle Q_{h+k} = Q_h + T^h Q_k + c_{h,k} \ \ \ \ \ (1) where the {c_{h,k}} are small real constants. To eliminate these constants, one exploits the trivial cohomology of the real line. From (1) one soon concludes that the {c_{h,k}} obey the {2}-cocycle equation \displaystyle c_{h,k} + c_{h+k,l} = c_{h,k+l} + c_{k,l} and an averaging argument then shows that {c_{h,k}} is a {2}-coboundary in the sense that \displaystyle c_{h,k} = b_{h+k} - b_h - b_k for some small scalar {b_h} depending on {h}. Subtracting {b_h} from {Q_h} then gives the claim. Similar results and arguments also hold for the {U^k([N])} and {U^k(X)} norms, which we will not detail here. Dimensional analysis reveals that the {L^\infty} norm is not actually the most natural norm against which to compare the {U^k} norms. An application of Young’s convolution inequality in fact reveals that one has the inequality \displaystyle \|f\|_{U^k(G)} \leq \|f\|_{L^{p_k}(G)} \ \ \ \ \ (2) where {p_k} is the critical exponent {p_k := 2^k/(k+1)}, without any compactness or normalisation hypothesis on the group {G} and the Haar measure {\mu}. This allows us to extend the {U^k(G)} norm to all of {L^{p_k}(G)}. There is then a stronger inverse theorem available: Theorem 2 ({L^{p_k}}-near extremisers) Let {G} be a locally compact abelian group, and let {f \in L^{p_k}(G)} be such that {\|f\|_{L^{p_k}(G)} \leq 1} and {\|f\|_{U^k(G)} \geq 1-\epsilon} for some {\epsilon > 0} and {k \geq 1}. Then there exists a coset {x_0+H} of a compact open subgroup {H} of {G}, and a polynomial {P: G \rightarrow {\bf R}/{\bf Z}} of degree at most {k-1} such that {\|f-e(P) 1_{x_0+H}\|_{L^{p_k}(G)} = o(1)}. Conversely, it is not difficult to show that equality in (2) is attained when {f} takes the form {e(P) 1_{x_0+H}} as above. The main idea of the proof is to use an inverse theorem for Young’s inequality due to Fournier to reduce matters to the {L^\infty} case that was already established.
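As an illustration of the critical exponent in (2) (again my addition, not from the paper): for {k=3} one has {p_3 = 2^3/4 = 2}, and the inequality {\|f\|_{U^3} \leq \|f\|_{L^2}} can be checked numerically on {{\bf Z}/N{\bf Z}} via the recursive definition, with a quadratic polynomial phase attaining equality:

```python
import numpy as np

def u2_fourth(f):
    # ||f||_{U^2}^4 via Fourier, under normalised counting measure
    fhat = np.fft.fft(f) / len(f)
    return float((np.abs(fhat) ** 4).sum())

def gowers_u3(f):
    # recursive formula: ||f||_{U^3}^8 = E_h ||Delta_h f||_{U^2}^4,
    # where Delta_h f(x) = f(x+h) * conj(f(x))
    N = len(f)
    total = sum(u2_fourth(np.roll(f, -h) * f.conj()) for h in range(N))
    return (total / N) ** (1.0 / 8)

N = 256
x = np.arange(N)
quadratic_phase = np.exp(2j * np.pi * (7 * x * x % N) / N)  # f = e(P), deg P = 2
noise = np.random.default_rng(2).standard_normal(N) + 0j

for f in (quadratic_phase, noise):
    # inequality (2) with p_3 = 2: quadratic phase gives equality 1 <= 1,
    # while noise has a much smaller U^3 norm than L^2 norm
    print(gowers_u3(f), "<=", np.mean(np.abs(f) ** 2) ** 0.5)
```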
An analogous result is also obtained for the {U^k(X)} norm on an ergodic system; but for technical reasons, the methods do not seem to apply easily to the {U^k([N])} norm. (This norm is essentially equivalent to the {U^k({\bf Z}/\tilde N{\bf Z})} norm up to constants, with {\tilde N} comparable to {N}, but when working with near-extremisers, norms that are only equivalent up to constants can have quite different near-extremal behaviour.) In the case when {G} is a Euclidean group {{\bf R}^d}, it is possible to use the sharp Young inequality of Beckner and of Brascamp-Lieb to improve (2) somewhat. For instance, when {k=3}, one has \displaystyle \|f\|_{U^3({\bf R}^d)} \leq 2^{-d/8} \|f\|_{L^2({\bf R}^d)} with equality attained if and only if {f} is a gaussian modulated by a quadratic polynomial phase. This additional gain of {2^{-d/8}} allows one to pinpoint the threshold {1-\epsilon} for the previous near-extremiser results in the case of {U^3} norms. For instance, by using the Host-Kra machinery of characteristic factors for the {U^3(X)} norm, combined with an explicit and concrete analysis of the {2}-step nilsystems generated by that machinery, we can show that \displaystyle \|f\|_{U^3(X)} \leq 2^{-1/8} \|f\|_{L^2(X)} whenever {X} is a totally ergodic system and {f} is orthogonal to all linear and quadratic eigenfunctions (which would otherwise form immediate counterexamples to the above inequality), with the factor {2^{-1/8}} being best possible. We can also establish analogous results for the {U^3([N])} and {U^3({\bf Z}/N{\bf Z})} norms (using the inverse {U^3} theorem of Ben Green and myself, in place of the Host-Kra machinery), although it is not clear to us whether the {2^{-1/8}} threshold remains best possible in this case.
Thursday, March 22, 2012 Science and religion The relationship between science and religion has been a topic of discussion recently. New Scientist has articles about the attempts of scientists to explain spirituality and religion (see for instance this and this). Also Bee has written about this under the title What can science do for you?, and this posting is a typo-free version of my comment to that posting, with some additions. What makes it so difficult for a scientist to understand spirituality is the failure to realize that genuine spirituality is not a method to achieve something. For a scientist, life is an endless struggle to achieve some goal by applying some method: problem solving, fighting against colleagues, intriguing to get a research position or funding, etc. It is natural that the scientific explanations of spirituality follow the same simple format. For a scientist it is difficult to believe that a person who becomes aware of the existence of higher levels of conscious existence does not calculate that it is good to have this experience since, in a statistical sense, it maximizes her personal happiness. Neither is this experience a result of some method to achieve relief from a fear of death or of life, or to achieve maximal pleasure. It is something completely spontaneous, and it makes you realize how extremely limited your everyday consciousness is and how hopelessly it is narrowed down by your ego. What makes it so difficult for a member of a church to understand spirituality is that organized religions indeed teach that by applying some method, which includes blind belief in dogmas, the registered member of the community can get in contact with God. Even the idea of a single God is an example of how the greed for power tends to corrupt spirituality: gods as those conscious entities above us, like we are above our neurons, are replaced with God - the ultimate conqueror and absolute ruler. And after all, spiritual experience is only the realization that higher levels of conscious existence and intelligence are there. This realization comes when one is able for a moment to get rid of ego and live just in this moment. But there is no method to achieve it! This view is by no means new. I first discovered it in the writings of Krishnamurti about 26 years ago, as I tried to understand my own great experience. The writings of Krishnamurti are a blow to the face of anyone who has adopted the naive "scientific" view about reality, but I felt that Krishnamurti was basically right. I felt that he must have experienced something similar to what I had experienced, and I of course hoped to get those two magic weeks back. Certainly I hoped to find in the writings of Krishnamurti a method allowing me to achieve this, and I refused to believe when Krishnamurti told again and again that there is no method! After these years it is easy to agree with Krishnamurti's view of egos as the source of the problems of society. Ego is the castle that we build in the hope of achieving safety. This ego isolates us, and in isolation fears multiply and we become paranoid. Coming out from the castle of ego into the fresh air and meeting reality as it is, is the only solution to our problems. Isms cannot help us since they only help to build new castles. The bad news for the scientist is that there is no method to achieve this. At some moments we are able to just calmly observe our suffering without any kind of violence toward our mental images, and the miracle of re-creation takes place.
At 10:21 AM, Blogger Ulla said... There is a huge difference between religion and spirituality. Religion is telling you what and how you should act, and its purpose is actually to diminish and control the interference with higher consciousness (the system where you are a subsystem). This is also the reason religion and states are linked. Education is regulated by these two. Spirituality is basically free, and can therefore be extremely creative and 'dangerous' for the 'system'. It was a tool in gnosticism, and the basic difference that made them heretic. There is nothing so 'sweet' and addictive as the white light in the head. :D No drug is stronger than spirituality, so today we have an enormous longing for spirituality, seen in the numbers of abusers. They get the wrong 'drug'. As you know, I don't agree about the ego and Self. Ego, from environment and education, should vanish, but self is essential as a tool for the interference. Some unfortunate people also succeed in abandoning the self and live the ego, with deep tragedy as a consequence. At 11:35 AM, Anonymous ◘Fractality◘ said... Science has been hijacked by reductionism, and the intellectually dishonest skeptic hegemony has ruthlessly excluded the "eccentric" and alternative scientists (as Matti has documented). Here is a relevant quote from MAX PLANCK: A lot of the scientists who created unprecedented scientific theory were considered "nutty" by the fundamentalist skeptics: Nikola Tesla, Albert Einstein, John Lilly, Francis Crick, Wernher von Braun, Isaac Newton, etc. Newton talked to angels. Einstein wondered if the entire universe was conscious. Von Braun believed that mathematics proved the immortality of the soul. Watson and Crick credited LSD with understanding the structure of DNA. At 1:03 AM, Anonymous said... To Ulla: A comment about the relationship of self and ego. I see self as a basic element of consciousness. The self hierarchy can be equated with a hierarchy of quantum jumps, and the self hierarchy makes it possible for a self to experience subselves as mental images. These subselves would give rise to an experience about the flow of time. But I still feel puzzled when trying to understand in detail what I am saying;-). A clear signal that something important is waiting to become understood. Ego is the model of self which a highly advanced self loves to construct. Certainly useful for practical purposes. This model involves a lot of cognition and memories and expectations and desires. The message of Krishnamurti is that self usually tends to equate itself with ego, or with what ego should be in the future: reality with its model, or a dream about a good reality. Ego - something static - becomes a model of self which is continually re-created. This yields the conflict and suffering; this causes violence by restricting free will to the constraints of ego and forcing self to endlessly murder subselves representing undesired mental images inconsistent with what ego requires self to be. A good analogy comes - somewhat unexpectedly - from the LHC;-). The data gathering at the LHC is based on the assumption that the new physics, which must be there, is consistent with the standard proposals. Therefore one throws away from the data all anomalies which do not allow an interpretation as allowed new physics! Brilliant! The outcome is that no new physics is seen although it should be there! Eventually we must begin from scratch and admit that we "knew" too much. In a similar manner ego paralyzes self and prevents its evolution! At 1:09 PM, Blogger Ulla said...
Ego is a CONSTRUCTED self, Matti, due to what others want from you: parents, school systems, community, religion, MORAL, etc. There is a deep difference between moral and ethics, as deep as between ego and self. The problem is a too weak self that cannot stand up to the pressure from the environment. This makes it hard to receive gifts :) There are two choices: give up the Self or give up the Ego. I think today that most give up their selves, and start playing roles, in an aim to avoid conflicts. "Ego involves a lot of cognition and memories and expectations and desires." Yes, so it is created, constructed, not something fundamental. In entanglement language this makes disorder? Makes our basic energy level higher, so it creates stress. This is the reason we should skip it. Can you see now? I have tried so many times :D Free will is restricted, yes. This is just one of the things biology has got upside down. Like the 40 Hz gamma signal and EEG in general. What is ATTENTION and HOMEOSTASIS? Think about that. At 6:11 PM, Blogger Santeri Satama said... In our local mythology Soul is a trinity of 'itse' (self), 'henki' (spirit) and 'luonto' (nature/character). Europeans have mocked that Finns - and other native peoples - have a poorly developed category of personhood aka "ego". So Ulla is on the right trail that ego/subjecthood/personhood is a social construct, but not only that: it is the social construct of imperialistic civilizations and the dualism of ego the conqueror and controller and nature to be conquered and controlled. Ego is thus not an individual matter, but a collective mental disorder. A dualism of 'spirit' (entanglement?) aspiring to define and control 'nature' (ZEO?), and losing the sense of self in the process. In our mythology and art of shamanistic self-healing there is the spiritual journey to regain the self, the lost part of the soul. Matti's theory is such a spiritual journey. And "self" (as in "Know Thyself") is not just the mental image of self-referentiality and the infinite regress of Russian dolls - all the way down to the Source and all the way up to the principle of Maximation of All Forms. This is much more and much less: just sensing this body as it is this moment. At 9:49 PM, Anonymous said... Ego is a collective construct, a model, reflecting the values of the society. Today ego is marketing and selling itself, making a product of itself and endlessly competing. Something rather boring in its completely open opportunism;-). Could one see soul as a more refined predecessor of ego in a more religious era? Or as something different, maybe self? In any case, soul is also often thought to be a permanent entity, actually something fundamental which by definition does not change. Self as the moment of re-creation (self hierarchy must be mentioned in the same breath to make some sense of this;-)). "Know thyself" would translate to "re-create", "evolve". At 5:19 AM, Blogger Santeri Satama said... Perhaps there being both time-dependent and time-independent Schrödinger equations is somehow related? Dunno, but I have been thinking and sensing the question about sentience and its relation to the question about conscience. In this experience body-sense is not limited to classical matter, but the form of "tight" in the middle (heart) and soft around the edges suggests that the "strength" of the spatial reach of body-sense in this form drops in square roots and/or cube roots from the center.
Which could suggest direct and conscious sensing of gravitational and/or electromagnetic fields from this position or form of observer participation. This form of body-sense is not meant to suppose that sentient forms are limited to what was just described, and in fact there was a glimpse of a multitude of geometric worlds and forms, with the words "observer-space" attached to the experience. At 10:28 AM, Blogger Ulla said... Well, Self is slowly changing. We only need to think of a stroke patient to realize that. Also the self is different at different ages. I doubt this is what the oracle thought of with 'know thyself'. Or maybe a part of it? We can change if we want to. A more stable and long-lasting construction is the personality. But also that can change. So self is a hierarchy depending on systems/subsystems. Strokes make those self-systems impossible to use, to connect (phosphorylation?), but they are not destroyed. Drugs can change the Self and even the personality. Something realized changes the self instantly, in the jump. Note that this is discrete, linked to matter (state function reduction). This is 'known' without studies before (through Jung's window). The term re-creation tells that there is something old that changes, as in a symmetry-breaking shift. But also completely new things can 'hit' you through the window. Like prophecy (from the future?)? What is the difference between personality/self and Soul? The Nature aspect tells us that the genome is changing too, and it is different in different parts of the body. This is due to methylation/acetylation mainly, so then also the Soul changes? Or do we have a complex and fractal soul? Or a Soul in the form of a more continuous magnetic body, not directly linked to the Nature aspect? But then again this view gets into trouble with the dark matter directing the genome? So this Soul is some kind of collective, if it is stable? A hierarchy? A dark matter Soul and a material Ego as collective dualistic 'pairs'? Do we need the Ego and Soul at all to describe this? Do they contribute with clarity? Or are the collective phenomena enough to describe them? Can the Soul be a higher level Self encompassing many lives? A time hierarchy? Note that the present is favoured in spiritual experiences, which means silencing of environmental sensing and the brain chatter and thinking, keeping us in the past or future. Introspection, going into the self - what do we meet there? The Old One. :D Big ignorance! The Soul is deeply anchored in our culture, and religion. A fight for souls (in numbers), not personalities, selves. As if the Soul just IS without qualities, or with just + or -? Like energy? Gravity? This is actually the problem with abandoning the Self. Without Self we cannot connect to the Truth out there. One more reason to say 'Know Thyself'. Nature is actually a part of our Self, quite literally, so WHY should it be conquered? So we can conquer each other? Keep the tensions? Why do we accept all this? For what do we need it? Jung had an answer. At 11:02 AM, Blogger Ulla said... Jill Bolte Taylor again, such a fascinating story. Matti, DID you get the book? Y/n? At 5:30 PM, Blogger Santeri Satama said... Just a short linguistic comment. The word 'person' comes from the Latin word for 'actor mask', and one of the strange features of English is that humans are by norm referred to as 'persons'. And besides 'natural persons' there are also 'legal persons'. The Finnish translation of 'person' is 'henkilö', derived from 'henki'.
Finnish 'luonne' is usually translated by the originally Greek word 'character'. At 8:34 PM, Blogger Santeri Satama said... There are lots of conceptual problems with "consciousness", which can be said to be secondary to more fundamental sensing, as there is lots of sensing going on that stays subconscious. This conceptual frame suggests that questions about quantum consciousness should build upon a more general theory of quantum sensing, with cellular quantum magnetoception as an obvious starting point, and the classical senses and brain neurology being secondary channels and filters of the more fundamental magnetoceptic quantum sensing. At 10:14 PM, Anonymous said... To Ulla: I dimly remember that I received a book about personal brain damage, written by a neuroscientist. I remember that I read part of the book and decided to continue reading, but I must have forgotten the whole thing in the midst of nasty little health problems. I tried to find the book but failed. Was it a web link after all? In any case, the amount of inhibition in the brain increases with evolutionary level, and creativity requires getting rid of inhibition, so that brain damage might help. Or maybe damage replaces conceptual memory based on symbols with direct sensory memory; this would be the dream of an artist. As you know, in the temporal lobes electric excitation can excite this kind of memories. Autists sometimes have this kind of memory. It would be nice to know how Chopin, who suffered from strange attacks resembling delirium, remembered music. Maybe the left brain could contain the controlling model of the socially acceptable me, the ego. Genuine creativity is never socially acceptable;-). Despite all the positive talk about creativity, creativity is something very, very irritating. At 10:24 PM, Anonymous said... To Santeri: To me "henkilö" is something very passive. Just the social ID, although it comes from "henki", which one might associate with "soul" (or with the ability to breathe, something very impersonal!). "Luonne" is more like "personality". Then there is also "temperament", which has a rather precise meaning in recent research: something below personality, related to genes, and not changeable. I wonder if the social problems due to rigid egos could be partially solved if people would become fully aware that "personality" is a social role and would experience it as a channel of creative expression, just as actors do. Having fun with one's social role. This would of course require basic safety and trust in society, but just this we are gradually losing with the recently dominating values taking us back to the jungle. At 11:59 PM, Blogger Ulla said... The book was on a CD. The second half is more interesting. Inhibitions in output (behavior) or input (sensing) or both? Remember that consciousness can only be diminished (inhibited). The behaviour also means learning. Having fun with one's social role? Too many are too seriously identified with it and nothing else. Their selves are rudimentary. They would need a shamanic journey :) to enlarge their CONSCIOUSNESS (which they have inhibited in the aim of getting more intelligence, 'meaning', instead). This also means the ability to FEEL. Emotions cannot be excluded and treated as something exclusive. They are very much the essence of cognition. Just one more of the ad hocs. It is said that emotions (such as anger) should be inhibited, and our cortex is not sufficiently good at that, but what happens if we inhibit them? We lose life and create health problems for ourselves.
Bad temper is better to experience than to try to inhibit, etc. That is what life is about: experiencing, perceiving. This should be so obvious. At 4:54 AM, Blogger Ulla said... Note, 3 parts earlier: Most of us, without realizing it, fall back into the anonymity of the crowd. In fact, we only realize this after the fact, when we have our next ‘awakening’ experience and pop out of the matrix again.
Chirikov standard map From Scholarpedia Boris Chirikov and Dima Shepelyansky (2008), Scholarpedia, 3(3):3550. doi:10.4249/scholarpedia.3550 The Chirikov standard map [1], [2] is an area-preserving map for two canonical dynamical variables, i.e., momentum and coordinate \( (p,x)\). It is described by the equations: \[\tag{1} \begin{array}{lcr} \bar{p} = p+K\sin x \\ \bar{x} = x+\bar{p} \end{array} \] where the bars indicate the new values of the variables after one map iteration and \(K\) is a dimensionless parameter that determines the degree of chaos. Due to the periodicity of \( \sin x \) the dynamics can be considered on a cylinder (by taking \( x \!\!\! \mod{2\pi} \)) or on a torus (by taking both \( x,p \!\!\! \mod{2\pi} \)). The map is generated by the time dependent Hamiltonian \( H(p,x,t)= p^2/2 +K \cos(x) \, \delta_1(t) \), where \( \delta_1(t) \) is a periodic \( \delta- \)function with period 1 in time. The dynamics is given by a sequence of free propagations interleaved with periodic kicks. Examples of the Poincare sections of the standard map on a torus are shown in Figs. 1, 2, 3. Figure 1: K=0.5 Figure 2: K=0.971635 Figure 3: K=5 Below the critical parameter \( K < K_c \) (Fig.1) the invariant Kolmogorov-Arnold-Moser (KAM) curves restrict the variation of momentum \( p \) to be bounded. The golden KAM curve with the rotation number \[ r=r_g=(\sqrt{5}-1)/2 =0.618033... \] is destroyed at \(K=K_g=0.971635... \) [3], [4] (Fig.2). This figure shows a generic phase space structure typical of various area-preserving maps with smooth generating functions: stability islands are embedded in a chaotic sea, and a similar structure appears on smaller and smaller scales. In the vicinity of a critical invariant curve, with a golden tail in the continued fraction expansion of \( r\), the phase space structure is universal for all smooth maps [4]. Above the critical value \( K > K_c \) (see Fig.3, showing a chaotic component and visible islands of stability) the variation of \(p\) becomes unbounded and is characterized by a diffusive growth \( p^2 \sim D_0 t\) with the number of map iterations \( t \). Here \( D_0 \) is a diffusion rate with \( D_0 \approx (K-K_c)^3/3 \) for \( K_c < K < 4 \) and \( D_0 \approx D_{ql}=K^2/2 \) for \( 4 < K \) [2], [5]. There are strong arguments in favor of the equality \( K_c = K_g \), but rigorously it is only proven that there are no KAM curves for \( K > 63/64 = 0.984375 \) [6]. With the numerical results [3], [4] this implies the inequality for the global chaos border, \( K_g \leq K_c < 63/64 \). A simple analytical criterion proposed in 1959 and now known as the Chirikov resonance-overlap criterion [7] gives the chaos border \( K_c = \pi^2/4\) [1] and after some improvements leads to \( K_c \approx 1.2\) [2],[8]. This accuracy is not so impressive compared to modern numerical methods, but up to now this criterion remains the only simple analytical tool for determining the chaos border in various Hamiltonian dynamical systems. The Kolmogorov-Sinai entropy of the map is well described by the relation \( h \approx \ln(K/2)\), valid for \( K > 4 \) [1], [2]. Universality and Applications The map (1) describes a situation in which the nonlinear resonances are equidistant in phase space, which corresponds to a local description of dynamical chaos.
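The diffusive growth of \( p^2 \) above the chaos border is easy to reproduce numerically; the following minimal sketch (not part of the article; numpy assumed) iterates the map (1) on the cylinder for an ensemble of trajectories at \( K=5 \) and compares \( \langle p^2 \rangle /t \) with the quasilinear value \( D_{ql}=K^2/2 \):

```python
import numpy as np

K, steps, ensemble = 5.0, 1000, 10000
rng = np.random.default_rng(0)
p = np.zeros(ensemble)                  # start at p = 0 ...
x = 2.0 * np.pi * rng.random(ensemble)  # ... with random initial phases x

for _ in range(steps):
    p += K * np.sin(x)                  # kick:  p_bar = p + K sin x
    x = (x + p) % (2.0 * np.pi)         # drift: x_bar = x + p_bar (mod 2 pi)

print(np.mean(p**2) / steps)            # measured diffusion rate D_0
print(K**2 / 2)                         # quasilinear estimate, valid for K > 4
# the two agree up to oscillating corrections in K
```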
Because such resonances are equidistant, various dynamical systems and maps can be locally reduced to the standard map, and for this reason the term standard map was coined in [2]. Thus, the standard map describes the universal, generic behavior of area-preserving maps with divided phase space, in which integrable islands of stability are surrounded by a chaotic component. A short list of systems reducible to the standard map is given below: • chaotic layer around the separatrix of a nonlinear resonance induced by a monochromatic force (the whisker map) [2] • charged particle confinement in mirror magnetic traps [1], [2], [7], [9] • fast crossing of a nonlinear resonance [1], [10] • particle dynamics in accelerators [11] • comet dynamics in the solar system [12], with a rather similar map for the comet Halley [13] • microwave ionization of Rydberg atoms (linked to the Kepler map) [14] and autoionization of molecular Rydberg states [15] • electron magnetotransport in a resonant tunneling diode [16] Open Problems • In spite of fundamental advances in ergodic theory [17], a rigorous proof of the existence of a set of positive measure of orbits with positive entropy is still missing, even for specific values of \( K \) (see e.g. [18]). • What are the fractal properties of the critical chaos parameter \( K_c(r) \) as a function of the arithmetic properties of the rotation number \( r \) of the KAM curve? Do local maxima correspond only to a golden tail of the continued fraction expansion [3], [4], or may they have tails with Markov numbers, as conjectured in [19]? (see also [20]) • Due to trajectory sticking around stability islands, the statistics of Poincare recurrences in Hamiltonian systems with divided phase space (see e.g. Fig.2 with a critical golden KAM curve) is characterized by an algebraic decay \( P(\tau) \propto 1/\tau^\alpha \) with \( \alpha \approx 1.5 \), while a theory based on the universality in the vicinity of the critical golden curve gives \( \alpha \approx 3 \); this difference persists up to \( 10^{13} \) map iterations; as a result, correlation functions decay rather slowly, \( C(\tau) \sim \tau P(\tau) \propto 1/\tau^{\alpha-1} \), which can lead to a divergence of the diffusion rate \( D \sim \tau C(\tau) \) (see [21] and Refs. therein) Quantum Map Figure 4: Dependence of rescaled rotator energy \( E/(k^2/4) \) on time \( t \) for \( K=kT=5, \hbar=0.25 (k=20, T=0.25) \); the full curve shows numerical data and the straight line gives the diffusive energy growth in the classical case (from [23]). The quantization of the standard map is obtained by considering the variables in (1) as Heisenberg operators with the commutation relation \( [p,x] = -i \hbar \), where \( \hbar \) is an effective dimensionless Planck constant. In the same way it is possible to use the Schrödinger equation with the Hamiltonian \( H(\hat{p},\hat{x},t) \) given above and \( \hat{p}=-i\hbar \partial/\partial x \). Integration over one period gives the quantum map for the wave function \( \psi \): \[\tag{2} \bar{\psi} = \hat{U} \psi = e^{-i{\hat p}^2/2\hbar} e^{-i K/\hbar \cos {\hat x}} \psi \] where the bar marks the new value of \( \psi \) after one map iteration. Due to the space periodicity of the Hamiltonian, the momentum can be represented in the form \( p=\hbar (n + \beta) \), where \( n \) is an integer and \( \beta \) is a quasimomentum preserved by the evolution operator \( \hat U \). The case \( \beta =0 \) corresponds to periodic boundary conditions with \( \psi(x+2\pi) =\psi(x) \) and is known as the kicked rotator introduced in [22].
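The map (2) is straightforward to simulate by the split-operator method, alternating the kick in the \( x \) representation with the free rotation in the momentum representation. The sketch below (not part of the article; numpy assumed) uses the parameters of Fig. 4, \( k=20, T=\hbar=0.25 \), for the kicked rotator case \( \beta=0 \):

```python
import numpy as np

N = 2048                                # number of momentum states / x-grid points
k, T = 20.0, 0.25                       # K = k*T = 5, effective hbar = T
x = 2.0 * np.pi * np.arange(N) / N
n = np.fft.fftfreq(N, d=1.0 / N)        # integer level numbers
kick = np.exp(-1j * k * np.cos(x))      # exp(-i (K/hbar) cos x)
free = np.exp(-1j * T * n**2 / 2.0)     # exp(-i p^2/(2 hbar)) with p = hbar*n

psi = np.ones(N, dtype=complex) / np.sqrt(N)   # initial state: n = 0
for t in range(1000):
    psi = np.fft.ifft(free * np.fft.fft(kick * psi))

prob_n = np.abs(np.fft.fft(psi))**2 / N        # final momentum distribution
print(0.5 * np.sum(n**2 * prob_n))
# <n^2>/2 saturates at a finite value instead of following the classical
# growth (k^2/4)*t: dynamical localization, as discussed next.
```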
Other notations with \( \hbar \rightarrow T\), \( K/\hbar \rightarrow k\) are also used to mark the dependence on the period \( T\) between kicks; then \( K = k T\). The diffusion rate over quantum levels \( n \) is \( D=D_0/\hbar^2= n^2/t \approx K^2/2\hbar^2 =k^2/2\), thus the rotator energy \( E = <n^2>/2\) grows linearly with time. Quantum interference effects lead to a suppression of this semiclassical diffusion [22] on the diffusive time scale \( t_D \), so that the quantum probability spreads effectively only over a finite number of states \( \Delta n \sim \sqrt{D t_D} \) (Fig.4). According to the analytical estimates obtained in [23]: \[\tag{3} t_D \sim \Delta n \sim D \sim k^2 \sim D_0/\hbar^2 . \] This diffusive time scale is much larger than the Ehrenfest time scale [23], [24] \( t_E \sim \ln(1/\hbar)/2h \), after which a minimal coherent wave packet spreads over the whole phase space due to the exponential instability of classical dynamics. For \( t < t_E \) a quantum wave packet follows the chaotic dynamics of a classical trajectory, as guaranteed by the Ehrenfest theorem [23]. For the case of Fig.4 the Kolmogorov-Sinai entropy is \( h \approx 1 \), and the Ehrenfest time \( t_E \sim 1 \) is extremely short compared to the diffusive time \( t_D \sim D \sim 200 \). The quantum suppression of chaotic diffusion is similar to the Anderson localization in disordered systems if one considers the level number as an effective site number in a disordered lattice; such an analogy was established in [25]. However, in contrast to a disordered potential in the case of Anderson localization, in the quantum map (2) diffusion and chaos have a purely deterministic origin, appearing as a result of dynamical chaos in the classical limit. Figure 5: Dependence of the localization length \( l \) on the quantum parameter of chaos \( K \rightarrow K_q=2k \sin(T/2) \). The circles and the curve are, respectively, the numerical data and the theory for the classical diffusion \( D(K) \) (see [8]). The quantum data for \( l \) are shown by \( + \) (for \( 0<T<\pi \)) and by \( \times \) (for \( \pi<T<2\pi \)); here \( k=30; D_{ql}=k^2/2 \) (from [27]). Due to this, the phenomenon is called dynamical localization. The eigenstates of the unitary evolution operator \( \hat U \) are exponentially localized over momentum states \( \psi_m(n) \sim \exp(-|n-m|/l)/\sqrt{l} \) with the localization length \( l \sim \Delta n \sim t_D \) given by the relation [26], [27] \[\tag{4} l=D(K)/2 =D_0(K)/2\hbar^2, \] where \( D \) is the semiclassical diffusion rate expressed via the squared number of levels per period of the perturbation. For \( \hbar = T > 1 \) the chaos parameter \( K \) in the dependence \( D(K) \) should be replaced by its quantum value \( K \rightarrow K_q = 2 k \sin(T/2) \) [27]. The quantum localization length \( l \) repeats the characteristic oscillations of the classical diffusion, as shown in Fig.5. The relation (4) assumes that \( T/4\pi \) is a typical irrational number, while for rational values of this ratio the phenomenon of quantum resonance takes place and the energy grows quadratically with time for rational values of the quasimomentum [28]. The derivations of the relation (4), based on field theory methods applied to dynamical systems with chaotic diffusion, can be found in [29], [30] (see also Refs. therein).
If the quantum map (2) is taken on a torus with \( N \) levels, then the level spacing statistics is described by the Poisson law for \( N \gg l \) and by the Wigner-Dyson law of random matrix theory for \( N \ll l \) [24],[31]. In the latter case the quantum eigenstates are ergodic on the torus, in agreement with the Shnirelman theorem, and the level spacing statistics agrees with the Bohigas-Giannoni-Schmit conjecture (see books on quantum chaos in Recommended Reading). The quantum map (2) was realized experimentally with cold atoms in a kicked optical lattice by the group of M.Raizen [32]. Such a case corresponds to a particle in an infinite periodic lattice with averaging over many different values of \( \beta \). The quantum resonances at \( \beta \approx 0 \) were also experimentally observed with a Bose-Einstein condensate (BEC) in [33]. Quantum accelerator modes for kicked atoms falling in the gravitational field were found and analyzed in [34]. Extensions and Related Quantum Systems Due to the universal properties of the standard map, its quantum version also finds applications to various systems and various physical effects: • dynamical localization for ionization of excited hydrogen atoms in a microwave field was theoretically predicted in [35] and was experimentally observed by the group of P.Koch [36] (see more details in [14],[37],[38]) • quantum particle in a triangular well and monochromatic field with a quantum delocalization transition [39] • the kicked Harper model, where, in contrast to the relation (4), quantum delocalization can take place due to the quasi-periodicity of the unperturbed spectrum (see [40], [41] and Refs. therein) • 3D Anderson transition in a kicked rotator with modulated kick strength and quantum transport in mesoscopic conductors (see [42] and Refs. therein) • fractal Weyl law for the quantum standard map with absorption (see [44] and Refs. therein) Figure 6: Dependence of rescaled energy \( E/(k^2/4) \) on time in the classical map (1) at \( K=5 \); time reversal is performed at \( t=150 \); numerical simulations are done on BESM-6 with relative accuracy \( \epsilon \approx 10^{-12} \) (from [46]). Time Reversibility and Boltzmann - Loschmidt Dispute Figure 7: Same as in Fig.6 but for the quantum map (2) with \( K=5, \hbar=0.25 \); the straight line shows the classical diffusion; time reversal is performed at the moment \( t=150 \) marked by the vertical line; numerical simulations are done on the same computer BESM-6; in addition, random quantum phases \( 0<\Delta \phi <0.1 \) are added to the quantum amplitudes in the momentum representation at the moment of time reversal (from [46]). The statistical theory of gases developed by Boltzmann leads to macroscopic irreversibility and entropy growth even if the dynamical equations of motion are time reversible. This contradiction was pointed out by Loschmidt and is now known as the Loschmidt paradox. The reply of Boltzmann relied on the technical difficulty of velocity reversal for material particles: a story tells that he simply said "then go and do it" [45]. The modern resolution of this famous dispute, which took place around 1876 in Vienna, came with the development of the theory of dynamical chaos (see e.g. [8], [17]). Indeed, for chaotic dynamics the Kolmogorov-Sinai entropy is positive and small perturbations grow exponentially with time, making the motion practically irreversible. It is convenient to illustrate this fact with the example of the standard map, whose dynamics is time reversible, e.g.
by inverting all velocities at the middle of the free propagation between two kicks (see Fig.6). This explanation is valid for classical dynamics, while the case of quantum dynamics requires special consideration. Indeed, in the quantum case the exponential growth takes place only during the rather short Ehrenfest time, and the quantum evolution remains stable and reversible in the presence of small perturbations [46] (see Fig.7). Quantum reversibility in the presence of various perturbations has been actively studied in recent years and is now described through the Loschmidt echo (see [47] and Refs. therein). A method of approximate time reversal of matter waves for ultracold atoms in the regime of quantum chaos, like those in [32], [33], is proposed in [48]. In this method a large fraction of the atoms returns even if the time reversal is not perfect. This fraction of the atoms exhibits Loschmidt cooling, which can decrease their temperature by several orders of magnitude. At the same time, a kicked BEC of attractive atoms (soliton) described by the Gross-Pitaevskii equation demonstrates truly chaotic dynamics, for which the exponential instability breaks the time reversibility [49]. However, since the number of atoms in a BEC is finite and since a BEC is a genuinely quantum object, one should expect that the Ehrenfest time is still very short and hence the time reversibility should be preserved in the presence of small errors if the second quantization is taken into account. Links to Other Physical Topics Frenkel-Kontorova Model The Frenkel-Kontorova model describes a one-dimensional chain of atoms/particles with harmonic couplings placed in a periodic potential [50]. This model was introduced with the aim of studying crystal dislocations, but it also applies successfully to the description of commensurate-incommensurate phase transitions, epitaxial monolayers on the crystal surface, ionic conductors, glassy materials, charge-density waves and dry friction [51]. The Hamiltonian of the model is \( H= \sum_i \left({P_i^2 \over 2} + {(x_i -x_{i-1})^2 \over 2}- K \cos x_i \right)\), where \( P_i, x_i \) are the momentum and position of atom \( i \). At equilibrium the momenta \( P_i =0\) and \( \partial H/\partial x_i =0\), so that the positions of the atoms are described by the map (1) with \( p_{i+1} = x_{i+1}- x_i , \; p_{i+1}= p_i +K\sin x_i\). The density of atoms corresponds to the rotation number \( r \) of an invariant KAM curve. For the golden density with \( r =r_g\) the chain slides in the periodic potential for \( K < K_g \) (KAM curve regime), while for \( K > K_g \) the transition by the breaking of analyticity, or Aubry transition, takes place: the chain becomes pinned and the atoms form an invariant Cantor set called a cantorus (see [52] and Aubry-Mather theory). In this regime the phonon spectrum has a gap, so that phonon excitations are suppressed at low temperature. The mathematical Aubry-Mather theory guarantees that the ground state of the chain exists and is unique. However, there exist exponentially many static equilibrium configurations whose energies are exponentially close to that of the ground state. The energies of these configurations form a fractal quasi-degenerate band structure and become mixed at any physically realistic temperature. Thus, such configurations can be viewed as a dynamical spin glass. For the case of Coulomb interactions between particles (e.g.
ions or electrons) one obtains the problem of a Wigner crystal in a periodic potential, which again is locally described by the Frenkel-Kontorova model, since the map (1) gives the local description of the dynamics. For the quantum Frenkel-Kontorova model the dynamics of the atoms (ions) in the chain is quantum. In this case quantum vacuum fluctuations and instanton tunneling lead to a quantum melting of the pinned phase: above a certain effective Planck constant a quantum phase transition takes place from a pinned instanton glass to a sliding phonon gas (see [53] and Refs. therein). Quantum Computing One iteration of the maps (1) and (2) can be simulated on a quantum computer in a polynomial number of quantum gates for an exponentially large vector representing a Liouville density distribution or a quantum state. The quantum algorithm for such a quantum computation is described in [54]; effects of quantum errors are analyzed in [55] (see also Refs. therein). Historical Notes The standard map (1), in the form of a recursive relation for atoms in a periodic potential, appears already in the works of Kontorova and Frenkel [50]. As a dynamical map it first appeared as a description of electron dynamics in a new relativistic accelerator proposed by V.I.Veksler (Dokl. Akad. Nauk SSSR 43: 346 (1944)). The regime of stable regular acceleration was studied later also by A.A.Kolomensky (Zh. Tekh. Fiz. 30: 1347 (1960)) and S.P.Kapitsa, V.N.Melekhin ("Microtron", Nauka, Moscow (1969), in Russian). Among the early researchers of model (1) was also the British physicist J.B.Taylor (unpublished reports). The description of chaos in map (1) and its main properties, including the chaos border, diffusion rate and positive entropy, was given in [1]. The term "standard map" appeared in [2]; "Chirikov-Taylor map" [8] and "Chirikov standard map" [16] are also used; the quantum standard map, or kicked rotator, was first considered in [22]. Appearance of other terms: Kolmogorov-Arnold-Moser theory [1], Arnold diffusion [1], Kolmogorov-Sinai entropy [2], Ehrenfest time [24]. Recommended Reading B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969) (Engl. Trans., CERN Trans. 71-40 (1971)). B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979). B.V.Chirikov, "Time-dependent quantum systems", in "Chaos and quantum mechanics", Les Houches Lecture Series, Vol. 52, pp.443-545, Eds. M.-J.Giannoni, A.Voros, J.Zinn-Justin, Elsevier Sci. Publ., Amsterdam (1991). A.J.Lichtenberg, M.A.Lieberman, "Regular and chaotic dynamics", Springer, Berlin (1992). F.Haake, "Quantum signatures of chaos", Springer, Berlin (2001). L.E.Reichl, "The transition to chaos in conservative classical systems and quantum manifestations", Springer, Berlin (2004). Internal references • Martin Gutzwiller (2007) Quantum chaos. Scholarpedia, 2(12):3146. External Links Selected publications of Boris Chirikov; Sputnik of Chaos; Google query for "standard map". References B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969) (Engl. Trans., CERN Trans. 71-40 (1971)). B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979). J.M.Greene, "Method for determining a stochastic transition", J. Math. Phys. 20(6): 1183 (1979).
R.S.MacKay, "A renormalization approach to invariant circles in area-preserving maps", Physica D 7(1-3): 283 (1983). R.S.MacKay, J.D.Meiss, I.C.Percival, "Transport in Hamiltonian systems", Physica D 13(1-2): 55 (1984). R.S.MacKay, I.C.Percival, "Converse KAM - theory and practice", Comm. Math. Phys. 94(4): 469 (1985). B.V.Chirikov, "Resonance processes in magnetic traps", At. Energ. 6: 630 (1959) (in Russian [7]) (Engl. Transl., J. Nucl. Energy Part C: Plasma Phys. 1: 253 (1960) [8]). B.V.Chirikov, "Particle confinement and adiabatic invariance", Proc. R. Soc. Lond. A 413: 145 (1987) [9]. B.V.Chirikov, D.L.Shepelyanskii, "Diffusion during multiple passage through a nonlinear resonance", Sov. Phys. Tech. Phys. 27(2): 156 (1982) [10] (in Russian [11]) F.M.Izraelev, "Nearly linear mappings and their applications", Physica D 1(3): 243 (1980). T.Y.Petrowsky, "Chaos and cometary clouds in the solar system", Phys. Lett. A 117(7): 328 (1986). B.V.Chirikov, V.V.Vecheslavov, "Chaotic dynamics of comet Halley", Astron. Astrophys. 221: 146 (1989) [12]. G.Casati, I.Guarneri, D.L.Shepelyansky, "Hydrogen atom in monochromatic field: chaos and dynamical photonic localization", IEEE J. of Quant. Elect. 24: 1420 (1988). F.Benvenuto, G.Casati, D.L.Shepelyansky, "Chaotic autoionization of molecular Rydberg states", Phys. Rev. Lett. 72: 1818 (1994). D.L.Shepelyansky, A.D.Stone, "Chaotic Landau level mixing in classical and quantum wells", Phys. Rev. Lett. 74: 2098 (1995). I.P.Cornfeld, S.V.Fomin, Ya.G.Sinai, "Ergodic theory", Springer, Berlin (1982). A.Giorgilli, V.F.Lazutkin, "Some remarks on the problem of ergodicity of the standard map", Phys. Lett. A 272: 359 (2000). B.V.Chirikov, D.L.Shepelyansky, "Chaos border and statistical anomalies", Eds. D.V.Shirkov, D.I.Kazakov and A.A.Vladimirov, World Sci. Publ., Singapore, "Renormalization Group" p.221 (1988) [13]. J.M.Greene, R.S.MacKay, J.Stark, "Boundary circles for area-preserving maps", Physica D 21(2-3): 267 (1986). B.V.Chirikov, D.L.Shepelyansky, "Asymptotic statistics of Poincare recurrences in Hamiltonian systems with Divided Phase Space", Phys. Rev. Lett. 82: 528 (1999) [14]; 89: 239402 (2002) [15] . F.M.Izrailev, G.Casati, J.Ford, B.V.Chirikov, "Stochastic behavior of a quantum pendulum under a periodic perturbation", Preprint 78-46, Institute of Nuclear Physics, Novosibirsk (1978) (extended version in Russian) [16]; G.Casati, B.V.Chirikov, F.M.Izrailev, J.Ford, Lecture Notes in Physics, Springer, Berlin, 93: 334 (1979) [17]. B.V.Chirikov, F.M.Izrailev, D.L.Shepelyansky, "Dynamical stochasticity in classical and quantum mechanics", Sov. Scient. Rev. C 2: 209 (1981) (Section C - Mathematical Physics Reviews, Ed. S.P.Novikov vol.2, Harwood Acad. Publ., Chur, Switzerland (1981)) [18]. B.V.Chirikov, F.M.Izrailev, D.L.Shepelyansky, "Quantum chaos: localization vs. ergodicity", Physica D 33: 77 (1988) [19]. S.Fishman, D.R.Grempel, R.E.Prange, "Chaos, quantum recurrences, and Anderson localization", Phys. Rev. Lett. 49: 509 (1982). B.V.Chirikov, D.L.Shepelyanskii, "Localization of dynamical chaos in quantum systems", Izv. Vyssh. Ucheb. Zaved. Radiofizika 29(9): 1041 (1986) (in Russian [20]); (English Trans. Plenum Publ. [21] ). D.L.Shepelyansky, "Localization of diffusive excitation in multi-level systems", Physica D 28: 103 (1987). F.M.Izrailev, D.L.Shepelyanskii, "Quantum resonance for a rotator in a nonlinear periodic field", Theor. Math. Phys. 43(3): 553 (1980); see also I. Dana and D.L. Dorofeev, "General quantum resonances of kicked particle", Phys. 
Rev. E 73: 026206 (2006). K.M.Frahm, "Localization in a rough billiard: a sigma model formulation", Phys. Rev. B 55: 8626(R) (1997). C.Tian, A.Kamenev, A.Larkin, "Weak dynamical localization in periodically kicked cold atomic gases", Phys. Rev. Lett. 93: 124101 (2004). F.M.Izrailev, "Simple models of quantum chaos: spectrum and eigenfunctions", Phys. Rep. 196: 299 (1990). F.L.Moore, J.C.Robinson, C.F.Bharucha, B.Sundaram, M.G.Raizen, "Atom optics realization of the quantum \( \delta \)-kicked rotor", Phys. Rev. Lett. 75: 4598 (1995). C.Ryu, M.F.Andersen, A.Vaziri, M.B.d'Arcy, J.M.Grossman, K.Helmerson, W.D.Phillips, "High-order quantum resonances observed in a periodically kicked Bose-Einstein condensate", Phys. Rev. Lett. 96: 160403 (2006). A.Buchleitner, M.B.d'Arcy, S.Fishman, S.A.Gardiner, I.Guarneri, Z.-Y.Ma, L.Rebuzzini, G.S.Summy, "Quantum accelerator modes from the Farey tree", Phys. Rev. Lett. 96: 164101 (2006). G.Casati, B.V.Chirikov, D.L.Shepelyansky, "Quantum limitations for chaotic excitation of the hydrogen atom in a monochromatic field", Phys. Rev. Lett. 53: 2525 (1984). E.J.Galvez, B.E.Sauer, L.Moorman, P.M.Koch, D.Richards, "Microwave ionization of H atoms: breakdown of classical dynamics for high frequencies", Phys. Rev. Lett. 61: 2011 (1988). G.Casati, B.V.Chirikov, D.L.Shepelyansky, I.Guarneri, "Relevance of classical chaos in quantum mechanics: the hydrogen atom in a monochromatic field", Phys. Rep. 154: 77 (1987). P.M.Koch, K.A.H. van Leeuwen, "The importance of resonances in microwave “ionization” of excited hydrogen atoms", Phys. Rep. 255: 289 (1995). F.Benvenuto, G.Casati, I.Guarneri, D.L.Shepelyansky, "A quantum transition from localized to extended states in a classically chaotic system", Z. Phys. B - Cond. Matt. 84: 159 (1991). R.Lima, D.L.Shepelyansky, "Fast delocalization in a model of quantum kicked rotator", Phys. Rev. Lett. 67: 1377 (1991). T.Prosen, I.I.Satija, N.Shah, "Dimer decimation and intricately nested localized-ballistic phases of a kicked Harper model", Phys. Rev. Lett. 87: 066601 (2001). F.Borgonovi, D.L.Shepelyansky, "Two interacting particles in an effective 2-3-d random potential", J. de Physique I France 6: 287 (1996), and F.Borgonovi, I.Guarneri and L.Rebuzzini, "Chaotic diffusion and statistics of universal scattering fluctuations", Phys. Rev. Lett. 72: 1463 (1994). G.G.Carlo, G.Benenti, D.L.Shepelyansky, "Dissipative quantum chaos: transition from wave packet collapse to explosion", Phys. Rev. Lett. 95: 164101 (2005). D.L.Shepelyansky, "Fractal Weyl law for quantum fractal eigenstates", Phys. Rev. E 77: 015202(R) (2008). J.E.Mayer, M.Goppert-Mayer, "Statistical mechanics", John Wiley & Sons, N.Y. (1977). D.L.Shepelyansky, "Some statistical properties of simple classically stochastic quantum systems", Physica D 8: 208 (1983). T.Gorin, T.Prosen, T.H.Seligman, M.Znidaric, "Dynamics of Loschmidt echoes and fidelity decay", Phys. Rep. 435: 33 (2006). J.Martin, B.Georgeot, D.L.Shepelyansky, "Loschmidt cooling by time reversal of atomic matter waves", arXiv:0710.4860 [cond-mat], Phys. Rev. Lett. 100: 044106 (2008). F.Benvenuto, G.Casati, A.S.Pikovsky, D.L.Shepelyansky, "Manifestations of classical and quantum chaos in nonlinear wave propagation", Phys. Rev. A 44: 3423(R) (1991). T.A.Kontorova, Ya.I.Frenkel, "On the theory of plastic deformation and twinning", Zh. Eksp. Teor. Fiz. 8: 89 (1938); 8: 1340 (1938); 8: 1359 (1938) (in Russian). O.M. Braun, Yu.S.
Kivshar, "The Frenkel-Kontorova Model: Concepts, Methods, and Applications", Springer, Berlin (2004). S.Aubry, "The twist map, the extended Frenkel-Kontorova model and the devil's staircase", Physica D 7(1-3): 240 (1983). I.Garcia-Mata, O.V.Zhirov, D.L.Shepelyansky, "Frenkel-Kontorova model with cold trapped ions", Eur. Phys. J. D 41: 325 (2007). B.Georgeot, D.L.Shepelyansky, "Exponential gain in quantum computing of quantum chaos and localization", Phys. Rev. Lett. 86: 2890 (2001). K.M.Frahm, R.Fleckinger, D.L.Shepelyansky, "Quantum chaos and random matrix theory for fidelity decay in quantum computations with static imperfections", Eur. Phys. J. D 29: 139 (2004). See also Hamiltonian systems, Mapping, Chaos, Kolmogorov-Arnold-Moser Theory, Kolmogorov-Sinai entropy, Aubry-Mather theory, Quantum chaos Personal tools Focal areas
Conscious Events as Orchestrated Space-Time Selections Stuart Hameroff and Roger Penrose What is consciousness? Some philosophers have contended that "qualia," or an experiential medium from which consciousness is derived, exists as a fundamental component of reality. Whitehead, for example, described the universe as being comprised of "occasions of experience." To examine this possibility scientifically, the very nature of physical reality must be re-examined. We must come to terms with the physics of space-time--as is described by Einstein's general theory of relativity--and its relation to the fundamental theory of matter--as described by quantum theory. This leads us to employ a new physics of objective reduction ("OR"), which appeals to a form of quantum gravity to provide a useful description of fundamental processes at the quantum/classical borderline (Penrose, 1994; 1996). Within the OR scheme, we consider that consciousness occurs if an appropriately organized system is able to develop and maintain quantum coherent superposition until a specific "objective" criterion (a threshold related to quantum gravity) is reached; the coherent system then self-reduces (objective reduction: OR). We contend that this type of objective self-collapse introduces non-computability, an essential feature of consciousness. OR is taken as an instantaneous event--the climax of a self-organizing process in fundamental space-time--and a candidate for a conscious Whitehead "occasion" of experience. How could an OR process occur in the brain, be coupled to neural activities, and account for other features of consciousness? We nominate an OR process with the requisite characteristics to be occurring in cytoskeletal microtubules within the brain's neurons (Penrose and Hameroff, 1995; Hameroff and Penrose, 1995; 1996). In this model, quantum-superposed states develop in microtubule subunit proteins ("tubulins"), remain coherent, and recruit more superposed tubulins until a mass-time-energy threshold (related to quantum gravity) is reached. At that point, self-collapse, or objective reduction (OR), abruptly occurs. We equate the pre-reduction, coherent superposition ("quantum computing") phase with pre-conscious processes, and each instantaneous (and non-computable) OR, or self-collapse, with a discrete conscious event. Sequences of OR events give rise to a "stream" of consciousness. Microtubule-associated proteins can "tune" the quantum oscillations of the coherent superposed states; the OR is thus self-organized, or "orchestrated" ("Orch OR"). Each Orch OR event selects (non-computably) microtubule subunit states which regulate synaptic/neural functions using classical signaling. The quantum gravity threshold for self-collapse is relevant to consciousness, according to our arguments, because macroscopic superposed quantum states each have their own space-time geometries (Penrose, 1994; 1996). These geometries are also superposed, and in some way "separated," but when sufficiently separated, the superposition of space-time geometries becomes significantly unstable and reduces to a single universe state. Quantum gravity determines the limits of the instability; we contend that the actual choice of state made by Nature is non-computable. Thus each Orch OR event is a self-selection of space-time geometry, coupled to the brain through microtubules and other biomolecules.
If conscious experience is intimately connected with the very physics underlying space-time structure, then Orch OR in microtubules indeed provides us with a completely new and uniquely promising perspective on the hard problem of consciousness. Introduction: Self-Selection in an Experiential Medium? The "hard problem" of incorporating the phenomenon of consciousness into a scientific world-view involves finding scientific explanations of qualia, or the subjective experience of mental states (Chalmers, 1995; 1996). On this, reductionist science is still at sea. Why do we have an inner life, and what exactly is it? One set of philosophical positions, addressing the hard problem, views consciousness as a fundamental component of physical reality. For example, an extreme view - "panpsychism" - is that consciousness is a quality of all matter: atoms and their subatomic components having elements of consciousness (e.g. Spinoza, 1677; Rensch, 1960). "Mentalists" such as Leibniz and Whitehead (e.g. 1929) contended that systems ordinarily considered to be physical are constructed in some sense from mental entities. Bertrand Russell (1954) described "neutral monism" in which a common underlying entity, neither physical nor mental, gave rise to both. Recently Stubenberg (1996) has claimed that qualia are that common entity. In monistic idealism, matter and mind arise from consciousness - the fundamental constituent of reality (e.g. Goswami, 1993). Wheeler (1990) has suggested that information is fundamental to the physics of the universe. From this, Chalmers (1995; 1996) proposes a double-aspect theory in which information has both physical and experiential aspects. Among these positions, the philosophy of Alfred North Whitehead (1929; 1933) may be most directly applicable. Whitehead describes the ultimate concrete entities in the cosmos as being actual "occasions of experience," each bearing a quality akin to "feeling." Whitehead construes "experience" broadly - in a manner consistent with panpsychism - so that even "temporal events in the career of an electron have a kind of 'protomentality'." Whitehead's view may be considered to differ from panpsychism, however, in that his discrete 'occasions of experience' can be taken to be related to "quantum events" (Shimony, 1993). In the standard descriptions of quantum mechanics, randomness occurs in the events described as quantum state reductions--these being events which appear to take place when a quantum-level process gets magnified to a macroscopic scale. Quantum state reduction (here denoted by the letter R; cf. Penrose, 1989, 1994) is the random procedure that is adopted by physicists in their descriptions of the quantum measurement process. It is still a highly controversial matter whether R is to be taken as a "real" physical process, or whether it is some kind of illusion and not to be regarded as a fundamental ingredient of the behavior of Nature. Our position is to take R to be indeed real--or, rather, to regard it as a close approximation to an objectively real process OR (objective reduction), which is to be a non-computable process instead of merely a random one (see Penrose 1989; 1994). In almost all physical situations, OR would come about in circumstances in which the random effects of the environment dominate, so OR would be virtually indistinguishable from the random R procedure that is normally adopted by quantum theorists.
However, when the quantum system under consideration remains coherent and well isolated from its environment, then it becomes possible for its state to collapse spontaneously, in accordance with the OR scheme we adopt, and to behave in non-computable rather than random ways. Moreover, this OR scheme intimately involves the geometry of the physical universe at its deepest levels. Our viewpoint is to regard experiential phenomena as also inseparable from the physical universe, and in fact to be deeply connected with the very laws which govern the physical universe. The connection is so deep, however, that we perceive only glimmerings of it in our present-day physics. One of these glimmerings, we contend, is a necessary non-computability in conscious thought processes; and we argue that this non-computability must also be inherent in the phenomenon of quantum state self-reduction--the "objective reduction" (OR) referred to above. This is the main thread of argument in Shadows of the Mind (Penrose, 1994). The argument that conscious thought, whatever other attributes it may also have, is non-computable (as follows most powerfully from certain deductions from Gödel's incompleteness theorem) grabs hold of one tiny but extremely valuable point. This means that at least some conscious states cannot be derived from previous states by an algorithmic process--a property which distinguishes human and other animal minds from computers. Non-computability per se does not directly address the "hard problem" of the nature of experience, but it is a clue to the kind of physical activity that lies behind it. This points to OR, an underlying physical action of a completely different character from that which seems to underlie non-conscious activity. Following this clue with sensitivity and patience should ultimately lead to real progress towards understanding mental phenomena in their inward manifestations as well as outward. In the OR description, consciousness occurs if an organized quantum system is able to isolate and sustain coherent superposition until its quantum gravity threshold for space-time separation is met; it then self-reduces (non-computably). For consciousness to occur, self-reduction is essential, as opposed to reduction being triggered by the system's random environment. (In the latter case, the reduction would itself be effectively random and would lack useful non-computability, being unsuitable for direct involvement in consciousness.) We take the self-reduction to be an instantaneous event--the climax of a self-organizing process fundamental to the structure of space-time--and apparently consistent with a Whitehead "occasion of experience." As OR could, in principle, occur ubiquitously within many types of inanimate media, it may seem to imply a form of "panpsychism" (in which individual electrons, for example, possess an experiential quality). However, according to the principles of OR (as expounded in Penrose, 1994; 1996), a single superposed electron would spontaneously reduce its state (assuming it could maintain isolation) only once in a period much longer than the present age of the universe. Only large collections of particles acting coherently in a single macroscopic quantum state could possibly sustain isolation and support coherent superposition in a time frame brief enough to be relevant to our consciousness. Thus only very special circumstances could support consciousness:
1. High degree of coherence of a quantum state--a collective mass of particles in superposition for a time period long enough to reach threshold, and brief enough to be useful in thought processes.

2. Ability for the OR process to be at least transiently isolated from a "noisy" environment until the spontaneous state reduction takes place. This isolation is required so that reduction is not simply random. Mass movement in the environment which entangles with the quantum state would effect a random (not non-computable) reduction.

3. Cascades of ORs to give a "stream" of consciousness, and huge numbers of OR events taking place during the course of a lifetime.

By reaching quantum gravity threshold, each OR event has a fundamental bearing on space-time geometry. One could say that a cascade of OR events charts an actual course of physical space-time geometry selections. It may seem surprising that quantum gravity effects could plausibly have relevance at the physical scales relevant to brain processes. For quantum gravity is normally viewed as having only absurdly tiny influences at ordinary dimensions. However, we shall show later that this is not the case, and the scales determined by basic quantum gravity principles are indeed those that are relevant for conscious brain processes. We must ask how such an OR process could actually occur in the brain. How could it be coupled to neural activities at a high rate of information exchange; how could it account for preconscious-to-conscious transitions, have spatial and temporal binding, and both simultaneity and time flow? We here nominate an OR process with the requisite characteristics occurring in cytoskeletal microtubules within the brain's neurons. In our model, microtubule-associated proteins "tune" the quantum oscillations leading to OR; we thus term the process "orchestrated objective reduction" (Orch OR).

Space-Time: Quantum Theory and Einstein's Gravity

Quantum theory describes the extraordinary behavior of the matter and energy which comprise our universe at a fundamental level. At the root of quantum theory is the wave/particle duality of atoms, molecules and their constituent particles. A quantum system such as an atom or sub-atomic particle which remains isolated from its environment behaves as a "wave of possibilities" and exists in a coherent, complex-number valued "superposition" of many possible states. The behavior of such wave-like, quantum-level objects can be satisfactorily described in terms of a state vector which evolves deterministically according to the Schrödinger equation (unitary evolution), denoted by U. Somehow, quantum microlevel superpositions lead to unsuperposed stable structures in our macro-world. In a transition known as wave function collapse, or reduction (R), the quantum wave of alternative possibilities reduces to a single macroscopic reality, an "eigenstate" of some appropriate operator. (This would be just one out of many possible alternative eigenstates relevant to the quantum operator.) This process is invoked in the description of a macroscopic measurement, when effects are magnified from the small, quantum scale to the large, classical scale. According to conventional quantum theory (as part of the standard "Copenhagen interpretation"), each choice of eigenstate is entirely random, weighted according to a probability value that can be calculated from the previous state according to the precise procedures of quantum formalism.
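As a minimal concrete illustration of this weighting (a sketch added for this reprint, not part of the original paper; the two-state amplitudes are made up), the Born rule can be simulated in a few lines:

```python
import numpy as np

# Hypothetical two-state superposition c0|0> + c1|1>; the amplitudes are
# illustrative only. Normalization requires |c0|^2 + |c1|^2 = 1.
c = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Born rule: a reduction R selects eigenstate |i> with probability |c_i|^2.
probs = np.abs(c) ** 2
print(probs, probs.sum())  # [0.5 0.5] 1.0

# Many R events: each outcome is random, but the frequencies follow |c_i|^2.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(np.bincount(outcomes) / outcomes.size)  # approaches [0.5 0.5]
```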
This probabilistic ingredient was a feature with which Einstein, among others, expressed displeasure: "You believe in a God who plays dice and I in complete law and order" (from a letter to Max Born). Penrose (1989; 1994) has contended that, at a deeper level of description, the choices may more accurately arise as a result of some presently unknown "non-computational" mathematical/physical (i.e., "Platonic realm") theory, that is, they cannot be deduced algorithmically. Penrose argues that such non-computability is essential to consciousness, because (at least some) conscious mental activity is unattainable by computers. It can be argued that present-day physics has no clear explanation for the cause and occurrence of wave function collapse R. Experimental and theoretical evidence through the 1930s led quantum physicists (such as Schrödinger, Heisenberg, Dirac, von Neumann and others) to postulate that quantum-coherent superpositions persist indefinitely in time, and would, in principle, be maintained from the micro to macro levels. Or perhaps they would persist until conscious observation collapses, or reduces, the wave function (subjective reduction, or "SR"). Accordingly, even macroscopic objects, if unobserved, could remain superposed. To illustrate the apparent absurdity of this notion, Erwin Schrödinger (e.g. 1935) described his now-famous "cat in a box" being simultaneously both dead and alive until the box was opened and the cat observed. As a counter to this unsettling prospect, various new physical schemes for collapse according to objective criteria (objective reduction--"OR") have recently been proposed. According to such a scheme, the growth and persistence of superposed states could reach a critical threshold, at which collapse, or OR, rapidly occurs (e.g. Pearle, 1989; Ghirardi et al., 1986). Some such schemes are based specifically on gravitational effects mediating OR (e.g. Károlyházy, 1986; Diósi, 1989; Ghirardi et al., 1990; Penrose, 1989; 1994; Pearle and Squires, 1994; Percival, 1995). Table 1 categorizes types of reduction.

Context | Cause of Collapse (Reduction) | Description | Acronym
Quantum coherent superposition | No collapse | Evolution of the wave function (Schrödinger equation) | U
Conventional quantum theory (Copenhagen interpretation) | Environmental entanglement, measurement, conscious observation | Reduction; subjective reduction | R
New physics (Penrose, 1994) | Self-collapse, quantum gravity induced (Penrose, Diósi, etc.) | Objective reduction | OR
Consciousness (present paper) | Self-collapse, quantum gravity threshold in microtubules orchestrated by MAPs etc. | Orchestrated objective reduction | Orch OR

Table 1. Descriptions of wave function collapse.

The physical phenomenon of gravity, described to a high degree of accuracy by Isaac Newton's mathematics in 1687, has played a key role in scientific understanding. However, in 1915, Einstein created a major revolution in our scientific world-view. According to Einstein's theory, gravity plays a unique role in physics for several reasons (cf. Penrose, 1994). Most particularly, these are:

1. Gravity is the only physical quality which influences causal relationships between space-time events.

2. Gravitational force has no local reality, as it can be eliminated by a change in space-time coordinates; instead, gravitational tidal effects provide a curvature for the very space-time in which all other particles and forces are contained.
It follows from this that gravity cannot be regarded as some kind of "emergent phenomenon," secondary to other physical effects, but is a "fundamental component" of physical reality. There are strong arguments (e.g. Penrose, 1987; 1995) to suggest that the appropriate union of general relativity (Einstein's theory of gravity) with quantum mechanics--a union often referred to as "quantum gravity"--will lead to a significant change in both quantum theory and general relativity, and, when the correct theory is found, will yield a profoundly new understanding of physical reality. And although gravitational forces between objects are exceedingly weak (feebler than, for example, electrical forces by some 40 orders of magnitude), there are significant reasons for believing that gravity has a fundamental influence on the behavior of quantum systems as they evolve from the micro to the macro levels. The appropriate union of quantum gravity with biology, or at least with advanced biological nervous systems, may yield a profoundly new understanding of consciousness.

Curved Space-Time Superpositions and Objective Reduction ("OR")

According to modern accepted physical pictures, reality is rooted in 3-dimensional space and a 1-dimensional time, combined together into a 4-dimensional space-time. This space-time is slightly curved, in accordance with Einstein's general theory of relativity, in a way which encodes the gravitational fields of all distributions of mass density. Each mass density effects a space-time curvature, albeit tiny. This is the standard picture according to classical physics. On the other hand, when quantum systems have been considered by physicists, this mass-induced tiny curvature in the structure of space-time has been almost invariably ignored, gravitational effects having been assumed to be totally insignificant for normal problems in which quantum theory is important. Surprising as it may seem, however, such tiny differences in space-time structure can have large effects, for they entail subtle but fundamental influences on the very rules of quantum mechanics. Superposed quantum states for which the respective mass distributions differ significantly from one another will have space-time geometries which correspondingly differ. Thus, according to standard quantum theory, the superposed state would have to involve a quantum superposition of these differing space-times. In the absence of a coherent theory of quantum gravity there is no accepted way of handling such a superposition. Indeed the basic principles of Einstein's general relativity begin to come into profound conflict with those of quantum mechanics (cf. Penrose, 1996). Nevertheless, various tentative procedures have been put forward in attempts to describe such a superposition. Of particular relevance to our present proposals are the suggestions of certain authors (e.g., Károlyházy, 1966; 1974; Károlyházy et al., 1986; Kibble, 1991; Diósi, 1989; Ghirardi et al., 1990; Pearle and Squires, 1994; Percival, 1995; Penrose, 1993; 1994; 1996) that it is at this point that an objective quantum state reduction (OR) ought to occur, and the rate or timescale of this process can be calculated from basic quantum gravity considerations. These particular proposals differ in certain detailed respects, and for definiteness we shall follow the specific suggestions made in Penrose (1994; 1996). Accordingly, the quantum superposition of significantly differing space-times is unstable, with a lifetime given by that timescale.
Such a superposed state will decay--or "reduce"--into a single universe state, which is one or the other of the space-time geometries involved in that superposition. Whereas such an OR action is not a generally recognized part of the normal quantum-mechanical procedures, there is no plausible or clear-cut alternative that standard quantum theory has to offer. This OR procedure avoids the need for "multiple universes" (cf. Everett, 1957; Wheeler, 1957, for example). There is no agreement, among quantum gravity experts, about how else to address this problem. For the purposes of the present article, it will be assumed that a gravitationally induced OR action is indeed the correct resolution of this fundamental conundrum.

Figure 1. Quantum coherent superposition represented as a separation of space-time. In the lowest of the three diagrams, a bifurcating space-time is depicted as the union ("glued together version") of the two alternative space-time histories that are depicted at the top of the Figure. The bifurcating space-time diagram illustrates two alternative mass distributions actually in quantum superposition, whereas the top two diagrams illustrate the two individual alternatives which take part in the superposition (adapted from Penrose, 1994, p. 338).

Figure 1 (adapted from Penrose, 1994, p. 338) schematically illustrates the way in which space-time structure can be affected when two macroscopically different mass distributions take part in a quantum superposition. Each mass distribution gives rise to a separate space-time, the two differing slightly in their curvatures. So long as the two distributions remain in quantum superposition, we must consider that the two space-times remain in superposition. Since, according to the principles of general relativity, there is no natural way to identify the points of one space-time with corresponding points of the other, we have to consider the two as separated from one another in some sense, resulting in a kind of "blister" where the space-time bifurcates. A bifurcating space-time is depicted in the lowest of the three diagrams, this being the union ("glued together version") of the two alternative space-time histories that are depicted at the top of Figure 1. The initial part of each space-time is at the lower end of each individual space-time diagram. The bottom space-time diagram (the bifurcating one) illustrates two alternative mass distributions actually in quantum superposition, whereas the top two illustrate the two individual alternatives which take part in the superposition. The combined space-time describes a superposition in which the alternative locations of a mass move gradually away from each other as we proceed in the upward direction in the diagram. Quantum-mechanically (so long as OR has not taken place), we must think of the "physical reality" of this situation as being illustrated as an actual superposition of these two slightly differing space-time manifolds, as indicated in the bottom diagram. As soon as OR has occurred, one of the two individual space-times takes over, depicted as one of the two sheets of the bifurcation. For clarity only, the bifurcating parts of these two sheets are illustrated as being one convex and the other concave. Of course there is additional artistic license involved in drawing the space-time sheets as 2-dimensional, whereas the actual space-time constituents are 4-dimensional.
Moreover, there is no significance to be attached to the imagined "3-dimensional space" within which the space-time sheets seem to be residing. There is no "actual" higher-dimensional space there, the "intrinsic geometry" of the bifurcating space-time being all that has physical significance. When the "separation" of the two space-time sheets reaches a critical amount, one of the two sheets "dies"--in accordance with the OR criterion--the other being the one that persists in physical reality. The quantum state thus reduces (OR), by choosing between either the "concave" or "convex" space-time of Figure 1. It should be made clear that this measure of separation is only very schematically illustrated as the "distance" between the two sheets in the lower diagram in Figure 1. As remarked above, there is no physically existing "ambient higher-dimensional space" inside which the two sheets reside. The degree of separation between the space-time sheets is a more abstract mathematical thing; it would be more appropriately described in terms of a symplectic measure on the space of 4-dimensional metrics (cf. Penrose, 1993)--but the details (and difficulties) of this will not be important for us here. It may be noted, however, that this separation is a space-time separation, not just a spatial one. Thus the time of separation contributes as well as the spatial displacement. Roughly speaking, it is the product of the temporal separation T with the spatial separation S that measures the overall degree of separation, and OR takes place when this overall separation reaches the critical amount. [This critical amount would be of the order of unity, in absolute units, for which the Planck-Dirac constant h (actually "hbar": Planck's constant over 2pi), the gravitational constant G, and the velocity of light c, all take the value unity; cf. Penrose, 1994, pp. 337-339.] Thus for small S, the lifetime T of the superposed state will be large; on the other hand, if S is large, then T will be small. To calculate S, we compute (in the Newtonian limit of weak gravitational fields) the gravitational self-energy E of the difference between the mass distributions of the two superposed states. (That is, one mass distribution counts positively and the other, negatively; see Penrose, 1994; 1995.) The quantity S is then given by:

S = E,  E = h/T.

Schematically, since S represents three dimensions of displacement rather than the one dimension involved in T, we can imagine that this displacement is shared equally between each of these three dimensions of space--and this is what has been depicted in Figure 3 (below). However, it should be emphasized that this is for pictorial purposes only, the appropriate rule being the one given above. These two equations relate the mass distribution, time of coherence, and space-time separation for a given OR event. If, as some philosophers contend, experience is contained in space-time, OR events are self-organizing processes in that experiential medium, and candidates for consciousness. But where in the brain, and how, could coherent superposition and OR occur? A number of sites and various types of quantum interactions have been proposed. We strongly favor microtubules as an important ingredient; however, various organelles and biomolecular structures including clathrins, myelin (glial cells), pre-synaptic vesicular grids (Beck and Eccles, 1992) and neural membrane proteins (Marshall, 1989) might also participate.
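To get a feel for the scales in these two relations, here is a rough numerical sketch (an illustration in SI units added for this reprint, not from the paper; the electron self-energy below is a placeholder value, not a computed one):

```python
# The OR relation E = hbar/T, evaluated in SI units for illustration
# (the paper itself works in absolute units, where S = E and E = hbar/T).
HBAR = 1.054571817e-34  # J*s

def self_energy_for_lifetime(T):
    """Gravitational self-energy E (J) needed for self-collapse in time T (s)."""
    return HBAR / T

def lifetime_for_self_energy(E):
    """Superposition lifetime T (s) for gravitational self-energy E (J)."""
    return HBAR / E

# A 500 msec pre-conscious interval corresponds to a tiny self-energy:
print(self_energy_for_lifetime(0.5))    # ~2.1e-34 J

# A single superposed electron has a far tinier self-energy still (the value
# below is a placeholder for order-of-magnitude play), giving a lifetime
# hugely exceeding the ~4e17 s age of the universe, as claimed above:
print(lifetime_for_self_energy(1e-60))  # ~1.1e26 s
```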
Properties of brain structures suitable for quantum coherent superposition, OR, and relevant to consciousness might include: 1) high prevalence; 2) functional importance (for example, regulating neural connectivity and synaptic function); 3) periodic, crystal-like lattice dipole structure with long-range order; 4) ability to be transiently isolated from external interaction/observation; 5) functional coupling to quantum-level events; 6) hollow, cylindrical form (a possible wave guide); and 7) suitability for information processing. Membranes, membrane proteins, synapses, DNA and other types of structures have some, but not all, of these characteristics. Cytoskeletal microtubules appear to qualify in all respects. Interiors of living cells, including the brain's neurons, are spatially and dynamically organized by self-assembling protein networks: the cytoskeleton. Within neurons, the cytoskeleton establishes neuronal form, and maintains and regulates synaptic connections. Its major components are microtubules, hollow cylindrical polymers of individual proteins known as tubulin. Microtubules ("MTs") are interconnected by linking proteins (microtubule-associated proteins: "MAPs") to other microtubules and cell structures to form cytoskeletal lattice networks (Figure 2).

Figure 3. Microtubule structure: a hollow tube of 25 nanometers diameter, consisting of 13 columns of tubulin dimers. Each tubulin molecule is capable of (at least) two conformations. (Reprinted with permission from Penrose, 1994, p. 359.)

Figure 4. Top: Two states of tubulin in which a single quantum event (electron localization) within a central hydrophobic pocket is coupled to a global protein conformation. Switching between the two states can occur on the order of nanoseconds to picoseconds. Bottom: Tubulin in quantum coherent superposition of both states.

MTs are hollow cylinders 25 nanometers (nm) in diameter whose lengths vary and may be quite long within some nerve axons. MT cylinder walls are comprised of 13 longitudinal protofilaments, each of which is a series of subunit proteins known as tubulin (Figure 3). Each tubulin subunit is a polar, 8 nm dimer which consists of two slightly different 4 nm monomers (alpha and beta tubulin--Figure 4). Tubulin dimers are dipoles, with surplus negative charges localized toward monomers (DeBrabander, 1982), and within MTs are arranged in a hexagonal lattice which is slightly twisted, resulting in helical pathways which repeat every 3, 5, 8 and other numbers of rows. Traditionally viewed as the cell's "bone-like" scaffolding, microtubules and other cytoskeletal structures also appear to fill communicative and information processing roles. Numerous types of studies link the cytoskeleton to cognitive processes (for review, cf. Hameroff and Penrose, 1996). Theoretical models and simulations suggest how conformational states of tubulins within microtubule lattices can interact with neighboring tubulins to represent, propagate and process information as in molecular-level "cellular automata," or "spin glass" type computing systems (Figure 5; e.g. Hameroff and Watt, 1982; Rasmussen et al., 1990; Tuszynski et al., 1995).

Figure 5. Microtubule automaton simulation (from Rasmussen et al., 1990). Black and white tubulins correspond to the states shown in Figure 4. Eight nanosecond time steps of a segment of one microtubule are shown in "classical computing" mode, in which patterns move, evolve, interact and lead to emergence of new patterns.
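The "cellular automaton" idea can be conveyed with a deliberately crude toy (added for illustration only; the two-state update rule below is arbitrary and is not the dipole-coupling rule of Rasmussen et al., 1990):

```python
import numpy as np

# Toy 1-D automaton standing in for one protofilament: each "tubulin" is in
# one of two conformations (./#) and flips according to its two neighbors
# (XOR rule, periodic boundary). Patterns move and interact, as in Figure 5.
rng = np.random.default_rng(1)
row = rng.integers(0, 2, size=32)

for step in range(8):  # eight time steps, one line of output per step
    print("".join(".#"[s] for s in row))
    row = np.roll(row, 1) ^ np.roll(row, -1)
```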
In Hameroff and Penrose (1996; and in summary form, Penrose and Hameroff, 1995), we present a model linking microtubules to consciousness, using quantum theory as viewed in the particular "realistic" way that is described in Shadows of the Mind (Penrose, 1994).

Figure 7. Schematic graph of proposed quantum coherence (number of tubulins) emerging vs. time in microtubules. 500 milliseconds is the time for pre-conscious processing (e.g. Libet, 1979). The area under the curve connects mass-energy differences with collapse time in accordance with gravitational OR. This degree of coherent superposition of differing space-time geometries leads to abrupt quantum-to-classical reduction ("self-collapse," or "orchestrated objective reduction: Orch OR").

Figure 8. Quantum coherence in microtubules schematically graphed on a longer time scale for 5 different states related to consciousness. The area under each curve is equivalent in all cases. A. Normal experience: as in Figure 7. B. Anesthesia: anesthetics bind in hydrophobic pockets and prevent quantum delocalizability and coherent superposition (e.g. Louria and Hameroff, 1996). C. Heightened Experience: increased sensory experience input (for example) increases the rate of emergence of quantum coherent superposition. The Orch OR threshold is reached faster (e.g. 250 msec) and the Orch OR frequency is doubled. D. Altered State: even greater rate of emergence of quantum coherence due to sensory input and other factors promoting the quantum state (e.g. meditation, psychedelic drugs, etc.). Predisposition to the quantum state results in a baseline shift and only partial collapse, so that conscious experience merges with the normally sub-conscious quantum computing mode. E. Dreaming: prolonged quantum coherence time.

Figure 9. Quantum coherence in microtubules. Having emerged from resonance in classical automaton patterns, quantum coherence non-locally links superposed tubulins (gray) within and among microtubules. Upper microtubule: cutaway view shows coherent photons, generated by quantum ordering of water on tubulin surfaces, propagating in the microtubule waveguide. MAP (microtubule-associated protein) attachments breach isolation and prevent quantum coherence; MAP attachment sites thus act as "nodes" which tune and orchestrate quantum oscillations and set possibilities and probabilities for collapse outcomes ("orchestrated objective reduction: Orch OR").

In our model, quantum coherence emerges, and is isolated, in brain microtubules until the differences in mass-energy distribution among superposed tubulin states reach the threshold of instability described above, related to quantum gravity (Figure 6). The resultant self-collapse (OR), considered to be a time-irreversible process, creates an instantaneous "now" event. Sequences of such events create a flow of time, and consciousness (Figures 7 and 8). We envisage that attachments of MAPs on microtubules "tune" quantum oscillations and "orchestrate" possible collapse outcomes (Figure 9). Thus we term the particular self-organizing OR occurring in MAP-connected microtubules, and relevant to consciousness, orchestrated objective reduction ("Orch OR"). Orch OR events are thus self-selecting processes in fundamental space-time geometry. If experience is truly a component of fundamental space-time, Orch OR may begin to explain the "hard problem" of consciousness.

Summary of the Orch OR Model for Consciousness

The full details of this model are given in Hameroff and Penrose (1996). The picture we are putting forth involves the following ingredients:
1. Aspects of quantum theory (e.g. quantum coherence) and of the suggested physical phenomenon of quantum wave function "self-collapse" (objective reduction: OR--Penrose, 1994; 1996) are essential for consciousness, and occur in cytoskeletal microtubules (MTs) and other structures within each of the brain's neurons.

2. Conformational states of MT subunits (tubulins) are coupled to internal quantum events, and cooperatively interact with other tubulins in both classical and quantum computation (Hameroff et al., 1992; Rasmussen et al., 1990--Figures 4, 5 and 6).

3. Quantum coherence occurs among tubulins in MTs, pumped by thermal and biochemical energies (perhaps in the manner proposed by Fröhlich, 1968; 1970; 1975). Evidence for coherent excitations in proteins has recently been reported by Vos et al. (1993). It is also considered that water at MT surfaces is "ordered," dynamically coupled to the protein surface. Water ordering within the hollow MT core (acting like a quantum wave guide) may result in quantum coherent photons (as suggested by the phenomena of "super-radiance" and "self-induced transparency"--Jibu et al., 1994; 1995). We require that coherence be sustained (protected from environmental interaction) for up to hundreds of milliseconds by isolation: a) within hollow MT cores; b) within tubulin hydrophobic pockets; c) by coherently ordered water; and d) by sol-gel layering (Hameroff and Penrose, 1996). Feasibility of quantum coherence in the seemingly noisy, chaotic cell environment is supported by the observation that quantum spins from biochemical radical pairs which become separated retain their correlation in cytoplasm (Walleczek, 1995).

4. During pre-conscious processing, quantum coherent superposition/computation occurs in MT tubulins and continues until the mass-distribution difference among the separated states of tubulins reaches a threshold related to quantum gravity. Self-collapse (OR) then occurs (Figures 6 & 7).

5. The OR self-collapse process results in classical "outcome states" of MT tubulins which then implement neurophysiological functions. According to certain ideas for OR (Penrose, 1994), the outcome states are "non-computable"; that is, they cannot be determined algorithmically from the tubulin states at the beginning of the quantum computation.

6. Possibilities and probabilities for post-OR tubulin states are influenced by factors including initial tubulin states, and attachments of microtubule-associated proteins (MAPs) acting as "nodes" which tune and "orchestrate" the quantum oscillations (Figure 9). We thus term the self-tuning OR process in microtubules "orchestrated objective reduction" (Orch OR).

7. According to the arguments for OR put forth in Penrose (1994), superposed states each have their own space-time geometries. When the degree of coherent mass-energy difference leads to sufficient separation of space-time geometry, the system must choose and decay (reduce, collapse) to a single universe state. Thus Orch OR involves self-selections in fundamental space-time geometry (Figures 10 & 11).

Figure 10. Schematic space-time separation illustration of three superposed tubulins. The space-time differences are very tiny in ordinary terms (10^-40 nm), but relatively large mass movements (e.g. hundreds of tubulin conformations, each moving from 10^-6 nm to 0.2 nm) indeed have precisely such very tiny effects on the space-time curvature.

Figure 11. Center: Three superposed tubulins (e.g.
Figure 4) with corresponding schematic space-time separation illustrations (Figures 1 and 10). Surrounding the superposed tubulins are the eight possible post-reduction "eigenstates" for tubulin conformation, and the corresponding space-time geometry.

8. To quantify the Orch OR process, in the case of a pair of roughly equally superposed states, each of which has a reasonably well-defined mass distribution, we calculate the gravitational self-energy E of the difference between these two mass distributions, and then obtain the approximate lifetime T for the superposition to decay into one state or the other by the formula T = h/E. Here h is Planck's constant over 2pi. We call T the coherence time for the superposition (how long coherence is sustained). If we assume a coherence time T = 500 msec (shown by Libet, 1979, and others to be a relevant time for pre-conscious processing), we calculate E, and determine the number of MT tubulins whose coherent superposition for 500 msec will elicit Orch OR. This turns out to be about 10^9 tubulins.

9. A typical brain neuron has roughly 10^7 tubulins (Yu and Baas, 1994). If, say, 10 percent of tubulins within each neuron are involved in the quantum coherent state, then roughly 10^3 (one thousand) neurons would be required to sustain coherence for 500 msec, at which time the quantum gravity threshold is reached and Orch OR then occurs.

10. We consider each self-organized Orch OR as a single conscious event; cascades of such events would constitute a "stream" of consciousness. If we assume some form of excitatory input (e.g. you are threatened, or enchanted) in which quantum coherence emerges faster, then, for example, 10^10 coherent tubulins could Orch OR after 50 msec (e.g. Figure 8c). Turning to see a bengal tiger in your face might perhaps elicit 10^11 tubulins in 5 msec, or more tubulins, faster. A slow emergence of coherence (your forgotten phone bill) may require longer times. A single electron would require more than the age of the universe.

11. Quantum states are non-local (because of quantum entanglement--or "Einstein-Podolsky-Rosen" (EPR) effects), so that the entire non-localized state reduces all at once. This can happen if the mass movement that induces collapse takes place in a small region encompassed by the state, or if it takes place uniformly over a large region. Thus, each instantaneous Orch OR could "bind" various superpositions which may have evolved in separated spatial distributions and over different time scales, but whose net displacement self-energy reaches threshold at a particular moment. Information is bound into an instantaneous event (a "conscious now"). Cascades of Orch ORs could then represent our familiar "stream of consciousness," and create a "forward" flow of time (Aharonov and Vaidman, 1990; Elitzur, 1996; Tollaksen, 1996).

The Orch OR model thus appears to accommodate some important features of consciousness:

1. control/regulation of neural action
2. pre-conscious to conscious transition
3. non-computability
4. causality
5. binding of various (time scale and spatial) superpositions into an instantaneous "now"
6. a "flow" of time
7. a connection to fundamental space-time geometry, in which experience may be based.

Conclusion: What is it like to be a worm?

The Orch OR model has the implication that an organism able to sustain quantum coherence among, for example, 10^9 tubulins for 500 msec might be capable of having a conscious experience.
More tubulins coherent for a briefer period, or fewer for a longer period (E = h/T), will also have conscious events. Human brains appear capable of, for example, 10^11 tubulin, 5 msec "bengal tiger experiences," but what about simpler organisms? From an evolutionary standpoint, introduction of a dynamically functional cytoskeleton (perhaps symbiotically from spirochetes, e.g. Margulis, 1975) greatly enhanced eukaryotic cells by providing cell movement, internal organization, separation of chromosomes and numerous other functions. As cells became more specialized with extensions like axopods and eventually neural processes, increasingly larger cytoskeletal arrays providing transport and motility may have developed quantum coherence via the Fröhlich mechanism as a by-product of their functional coordination. Another possible scenario for the emergence of quantum coherence leading to Orch OR and conscious events is "cellular vision." Albrecht-Buehler (1992) has observed that single cells utilize their cytoskeletons in "cellular vision"--detection, orientation and directional response to beams of red/infra-red light. Jibu et al. (1995) argue that this process requires quantum coherence in microtubules and ordered water, and Hagan (1995) suggests that the quantum effects of cellular vision provided an evolutionary advantage for cytoskeletal arrays capable of quantum coherence. For whatever reason quantum coherence emerged, one could then suppose that, one day, an organism achieved sufficient microtubule quantum coherence to elicit Orch OR, and had a "conscious" experience. At what level of evolutionary development might this primitive consciousness have emerged? A single-cell organism like Paramecium is extremely clever, and utilizes its cytoskeleton extensively. Could a paramecium be conscious? Assuming a single paramecium contains, like each neuronal cell, 10^7 tubulins, then for a paramecium to elicit Orch OR, 100 percent of its tubulins would need to remain in quantum coherent superposition for nearly a minute. This seems unlikely. Consider the nematode worm C. elegans, whose 302-neuron nervous system is completely mapped. Could C. elegans support Orch OR? With 3 x 10^9 tubulins, C. elegans would require one third of its tubulins to sustain quantum coherent superposition for 500 msec. This seems unlikely, but not altogether impossible. If not C. elegans, then perhaps Aplysia with a thousand neurons, or some higher organism. Orch OR provides a theoretical framework to entertain such possibilities. Would a primitive Orch OR experience be anything like ours? If C. elegans were able to self-collapse, what would it be like to be a worm? (Nagel, 1974) A single 10^9 tubulin, 500 msec Orch OR in C. elegans should be equal in gravitational self-energy (and thus, perhaps, experiential intensity) to one of our "everyday experiences." A major difference is that we would have many Orch OR events sequentially (up to, say, 10^9 per second), whereas C. elegans could generate, at most, 2 per second. C. elegans would also presumably lack extensive memory and associations, and have poor sensory data, but nonetheless, by our criteria, a 10^9 tubulin, 500 msec Orch OR in C. elegans would be a conscious experience: a mere smudge of known reality, the next space-time move. Consciousness has an important place in the universe. Orch OR in microtubules is a model depicting consciousness as sequences of non-computable self-selections in fundamental space-time geometry.
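The tubulin arithmetic running through this section amounts to N x T = constant, since E = h/T and E scales with the number of coherently superposed tubulins. Here is a sketch (added for this reprint) calibrated only by the paper's own figure of about 10^9 tubulins at 500 msec:

```python
# N * T ~ constant, calibrated by the paper's ~1e9 tubulins at 500 msec.
N_REF, T_REF = 1e9, 0.5  # tubulins, seconds

def tubulins_needed(T):
    """Coherent tubulins required to reach the Orch OR threshold in time T."""
    return N_REF * T_REF / T

for T in (0.5, 0.05, 0.005):
    print(f"T = {T * 1000:5.0f} msec  ->  N ~ {tubulins_needed(T):.0e} tubulins")
# Reproduces the examples above: 500 msec -> 1e9, 50 msec -> 1e10, 5 msec -> 1e11.
```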
If experience is a quality of space-time, then Orch OR indeed begins to address the "hard problem" of consciousness in a serious way.

Reprinted from Journal of Consciousness Studies 2(1):36-53, 1996, special issue on the "hard problem" of conscious experience.

Acknowledgments: Thanks to Dave Cantrell for artwork and to Carol Ebbecke for technical support.

References

Aharonov, Y., and Vaidman, L. (1990) Properties of a quantum system during the time interval between two measurements. Phys. Rev. A 41:11.

Albrecht-Buehler, G. (1992) Rudimentary form of "cellular vision." Proc. Natl. Acad. Sci. USA 89:8288-8292.

Beck, F., and Eccles, J.C. (1992) Quantum aspects of brain activity and the role of consciousness. Proc. Natl. Acad. Sci. USA 89(23):11357-11361.

Chalmers, D. (1996) Facing up to the problem of consciousness. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak and A.C. Scott (eds.), MIT Press, Cambridge, MA.

Chalmers, D. (1996) Toward a Theory of Consciousness. Springer-Verlag, Berlin.

Conze, E. (1988) Buddhist Thought in India, Louis de La Vallee Poussin (trans.), Abhidharmakośabhāṣyam: English translation by Leo M. Pruden, 4 vols (Berkeley), pp. 85-90.

Diósi, L. (1989) Models for universal reduction of macroscopic quantum fluctuations. Phys. Rev. A 40:1165-1174.

Elitzur, A.C. (1996) Time and consciousness: The uneasy bearing of relativity theory on the mind-body problem. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak and A.C. Scott (eds.), MIT Press, Cambridge, MA.

Everett, H. (1957) Relative state formulation of quantum mechanics. In Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek (eds.), Princeton University Press, 1983; originally in Rev. Mod. Phys. 29:454-462.

Fröhlich, H. (1968) Long-range coherence and energy storage in biological systems. Int. J. Quantum Chem. 2:641-649.

Fröhlich, H. (1970) Long range coherence and the actions of enzymes. Nature 228:1093.

Fröhlich, H. (1975) The extraordinary dielectric properties of biological materials and the action of enzymes. Proc. Natl. Acad. Sci. 72:4211-4215.

Ghirardi, G.C., Grassi, R., and Rimini, A. (1990) Continuous-spontaneous reduction model involving gravity. Phys. Rev. A 42:1057-1064.

Ghirardi, G.C., Rimini, A., and Weber, T. (1986) Unified dynamics for microscopic and macroscopic systems. Phys. Rev. D 34:470.

Goswami, A. (1993) The Self-Aware Universe: How Consciousness Creates the Material World. Tarcher/Putnam, New York.

Hagan, S. (1995) Personal communication.

Hameroff, S.R., Dayhoff, J.E., Lahoz-Beltra, R., Samsonovich, A., and Rasmussen, S. (1992) Conformational automata in the cytoskeleton: models for molecular computation. IEEE Computer (October Special Issue on Molecular Computing) 30-39.

Hameroff, S.R., and Penrose, R. (1995) Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness. Neural Network World 5(5):793-804.

Hameroff, S.R., and Penrose, R. (1996) Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak and A.C. Scott (eds.), MIT Press, Cambridge, MA.

Jibu, M., Hagan, S., Hameroff, S.R., Pribram, K.H., and Yasue, K. (1994) Quantum optical coherence in cytoskeletal microtubules: implications for brain function. BioSystems 32:195-209.

Jibu, M., Yasue, K., and Hagan, S. (1995) Water laser as cellular "vision." Submitted.
Károlyházy, F., Frenkel, A., and Lukács, B. (1986) On the possible role of gravity in the reduction of the wave function. In Quantum Concepts in Space and Time, R. Penrose and C.J. Isham (eds.), Oxford University Press.

Libet, B., Wright, E.W. Jr., Feinstein, B., and Pearl, D.K. (1979) Subjective referral of the timing for a conscious sensory experience. Brain 102:193-224.

Louria, D., and Hameroff, S. (1996) Computer simulation of anesthetic binding in protein hydrophobic pockets. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak and A.C. Scott (eds.), MIT Press, Cambridge, MA.

Margulis, L. (1975) Origin of Eukaryotic Cells. Yale University Press, New Haven.

Marshall, I.N. (1989) Consciousness and Bose-Einstein condensates. New Ideas in Psychology 7:73-83.

Nagel, T. (1974; 1981) What is it like to be a bat? In The Mind's I: Fantasies and Reflections on Self and Soul, D.R. Hofstadter and D.C. Dennett (eds.), Basic Books, N.Y., pp. 391-403, 1981. (Originally published in The Philosophical Review, October 1974.)

Pearle, P. (1989) Combining stochastic dynamical state vector reduction with spontaneous localization. Phys. Rev. A 39:2277-2289.

Pearle, P., and Squires, E. (1994) Bound-state excitation, nucleon decay experiments and models of wave-function collapse. Phys. Rev. Letts. 73(1):1-5.

Penrose, R. (1987) Newton, quantum theory and reality. In 300 Years of Gravity, S.W. Hawking and W. Israel (eds.), Cambridge University Press.

Penrose, R. (1989) The Emperor's New Mind. Oxford University Press, Oxford, U.K.

Penrose, R. (1993) Gravity and quantum mechanics. In General Relativity and Gravitation: Proceedings of the Thirteenth International Conference on General Relativity and Gravitation held at Cordoba, Argentina, 28 June-4 July 1992. Part 1: Plenary Lectures, R.J. Gleiser, C.N. Kozameh and O.M. Moreschi (eds.), Institute of Physics Publications, Bristol.

Penrose, R. (1995) On gravity's role in quantum state reduction.

Penrose, R., and Hameroff, S.R. (1995) What gaps? Reply to Grush and Churchland. Journal of Consciousness Studies 2(2):99-112.

Rasmussen, S., Karampurwala, H., Vaidyanath, R., Jensen, K.S., and Hameroff, S. (1990) Computational connectionism within neurons: A model of cytoskeletal automata subserving neural networks. Physica D 42:428-449.

Rensch, B. (1960) Evolution Above the Species Level. Columbia University Press, New York.

Russell, B. (1954) The Analysis of Matter. Dover, New York.

Schrödinger, E. (1935) Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften 23:807-812, 823-828, 844-849. (Translation by J.T. Trimmer (1980) in Proc. Amer. Phil. Soc. 124:323-338.) Reprinted in Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek (eds.), Princeton University Press, 1983.

Shimony, A. (1993) Search for a Naturalistic World View - Volume II: Natural Science and Metaphysics. Cambridge University Press, Cambridge, U.K.

Spinoza, B. (1677) Ethica in Opera quotque reperta sunt. 3rd edition, J. van Vloten and J.P.N. Land (eds.) (Netherlands: Den Haag).

Stubenberg, L. (1996) The place of qualia in the world of science. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak and A.C. Scott (eds.), MIT Press, Cambridge, MA.

Tart, C.T. (1995) Personal communication and information gathered from "Buddha-1 newsnet."

Tollaksen, J. (1996) New insights from quantum theory on time, consciousness, and reality. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, S.R.
Hameroff, A. Kaszniak and A.C. Scott (eds.), MIT Press, Cambridge, MA.

von Rospatt, A. (1995) The Buddhist Doctrine of Momentariness: A Survey of the Origins and Early Phase of this Doctrine up to Vasubandhu. Stuttgart: Franz Steiner Verlag.

Vos, M.H., Rappaport, J., Lambry, J.-Ch., Breton, J., and Martin, J.-L. (1993) Visualization of coherent nuclear motion in a membrane protein by femtosecond laser spectroscopy. Nature 363:320-325.

Walleczek, J. (1995) Magnetokinetic effects on radical pairs: a possible paradigm for understanding sub-kT magnetic field interactions with biological systems. In Biological Effects of Environmental Electromagnetic Fields, M. Blank (ed.), Advances in Chemistry No. 250, American Chemical Society Books, Washington, DC (in press).

Wheeler, J.A. (1957) Assessment of Everett's "relative state" formulation of quantum theory. Rev. Mod. Phys. 29:463-465.

Wheeler, J.A. (1990) Information, physics, quantum: The search for links. In Complexity, Entropy, and the Physics of Information, W. Zurek (ed.), Addison-Wesley.

Whitehead, A.N. (1929) Science and the Modern World. Macmillan, N.Y.

Whitehead, A.N. (1929) Process and Reality. Macmillan, N.Y.

Yu, W., and Baas, P.W. (1994) Changes in microtubule number and length during axon differentiation. J. Neuroscience 14(5):2818-2829.

Stuart Hameroff
Departments of Anesthesiology and Psychology
University of Arizona
Tucson, Arizona, USA

Roger Penrose
Rouse Ball Professor of Mathematics
University of Oxford
Oxford, United Kingdom

Correspondence to:
Stuart Hameroff
Department of Anesthesiology
1501 North Campbell Avenue
Tucson, Arizona 85724, USA
Telephone: (520) 626-5605
FAX: (520) 626-5596
For a report I'm writing on Quantum Computing, I'm interested in understanding a little about this famous equation. I'm an undergraduate student of math, so I can bear some formalism in the explanation. However, I'm not so stupid as to think I can understand this landmark without some years of physics. I'll just be happy to be able to read the equation and recognize it in its various forms. To be more precise, here are my questions. Hyperphysics tells me that Schrödinger's equation "is a wave equation in terms of the wavefunction".

1. Where is the wave equation in the most general form of the equation? $$\mathrm{i}\hbar\frac{\partial}{\partial t}\Psi=H\Psi$$ I thought a wave equation should be of the type $$\frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}=\frac{\partial^2 u}{\partial x^2}$$ It's the difference in the order of differentiation that is bugging me. From Wikipedia: "The equation is derived by partially differentiating the standard wave equation and substituting the relation between the momentum of the particle and the wavelength of the wave associated with the particle in De Broglie's hypothesis."

2. Can somebody show me the passages in a simple (or better, general) case?

3. I think this question is the most difficult to answer for a newbie. What is the Hamiltonian of a state? How much, generally speaking, does the Hamiltonian have to do with the energy of a state?

4. What assumptions did Schrödinger make about the wave function of a state, to be able to write the equation? Or, what are the important features of a wave function that are fundamental to deriving the equation? With both questions I mean: what are the passages between de Broglie (yes, there are these waves) and Schrödinger (the wave function is characterized by ...)?

5. It's often said "The equation helps find the form of the wave function" as often as "The equation helps us predict the evolution of a wave function". Which of the two? When one, when the other?

Philosophically, I always find requests to explain an equation for the layman to be a little strange. The point of writing it in math is to have a precise and complete representation of the theory... – dmckee Dec 15 '12 at 16:13

You're right. That's why I tried to make it clear I'm not asking for an explanation of the "equation" as you mean it, rather the meaning of the "symbols in it". In particular, question number 1 is the most important for me now. – Temitope.A Dec 15 '12 at 17:04

For a connection between the Schrödinger equation and the Klein-Gordon equation, see e.g. A. Zee, QFT in a Nutshell, Chap. III.5, and this Phys.SE post plus links therein. – Qmechanic Dec 15 '12 at 18:21

3 Answers

Accepted answer:

You should not think of the Schrödinger equation as a true wave equation. In electricity and magnetism, the wave equation is typically written as $$\frac{1}{c^2} \frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}$$ with two temporal and two spatial derivatives. In particular, it puts time and space on 'equal footing'; in other words, the equation is invariant under the Lorentz transformations of special relativity. The one-dimensional time-dependent Schrödinger equation for a free particle is $$ \mathrm{i} \hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2}$$ which has one temporal derivative but two spatial derivatives, and so it is not Lorentz invariant (but it is Galilean invariant). For a conservative potential, we usually add $V(x) \psi$ to the right hand side.
Now, you can solve the Schrödinger equation in various situations, with potentials and boundary conditions, just like any other differential equation. You in general will solve for a complex (analytic) solution $\psi(\vec r)$: quantum mechanics demands complex functions, whereas in the (classical, E&M) wave equation complex solutions are simply shorthand for real ones. Moreover, due to the probabilistic interpretation of $\psi(\vec r)$, we make the demand that all solutions must be normalized such that $\int |\psi(\vec r)|^2 d\vec r = 1$. We're allowed to do that because the equation is linear (think 'linear' as in linear algebra); normalization just restricts the set of solutions you can have. These requirements, plus linearity, give you the following properties:

1. You can put any $\psi(\vec r)$ into Schrödinger's equation (as long as it is normalized and 'nice'), and the time-dependence in the equation will predict how that state evolves.

2. If $\psi$ is a solution to a linear equation, $a \psi$ is also a solution for any (complex) $a$. However, we say all such states are 'the same', and anyway we only accept normalized solutions ($\int |a\psi(\vec r)|^2 d\vec r = 1$). We say that solutions like $-\psi$, and more generally $e^{i\theta}\psi$, represent the same physical state.

3. Some special solutions $\psi_E$ are eigenstates of the right-hand side of the time-dependent Schrödinger equation, and therefore they can be written as $$-\frac{\hbar^2}{2m} \frac{\partial^2 \psi_E}{\partial x^2} = E \psi_E$$ and it can be shown that these solutions have the particular time dependence $\psi_E(\vec r, t) = \psi_E(\vec r) e^{-i E t/\hbar}$. As you may know from linear algebra, the eigenstate decomposition is very useful. Physically, these solutions are 'energy eigenstates' and represent states of constant energy.

4. If $\psi$ and $\phi$ are solutions, so is $a \psi + b \phi$, as long as $|a|^2 + |b|^2 = 1$ to keep the solution normalized. This is what we call a 'superposition'. A very important component here is that there are many ways to 'add' two solutions with equal weights: $\frac{1}{\sqrt 2}(\psi + e^{i \theta} \phi)$ are solutions for all angles $\theta$, hence we can combine states with plus or minus signs. This turns out to be critical in many quantum phenomena, especially interference phenomena such as Rabi and Ramsey oscillations that you'll surely learn about in a quantum computing class.

Now, the connection to physics.

1. If $\psi(\vec r, t)$ is a solution to the Schrödinger equation at position $\vec r$ and time $t$, then the probability of finding the particle in a specific region can be found by integrating $|\psi|^2$ over that region. For that reason, we identify $|\psi|^2$ as the probability distribution for the particle.

• We expect the total probability of finding the particle somewhere to be 1 at any particular time $t$. The Schrödinger equation has the (essential) property that if $\int |\psi(\vec r, t)|^2 d\vec r = 1$ at a given time, then the property holds at all times. In other words, the Schrödinger equation conserves probability. This implies that there exists a continuity equation.

2. If you want to know the mean value of an observable $A$ at a given time, just integrate $$ <A> = \int \psi(\vec r, t)^* \hat A \psi(\vec r, t) d\vec r$$ where $\hat A$ is the linear operator associated to the observable. In the position representation, the position operator is $\hat A = x$, and the momentum operator is $\hat p = - i\hbar \partial / \partial x$, which is a differential operator.
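A short numerical experiment makes these properties tangible. The following sketch (my own illustration, with $\hbar = m = 1$, a standard split-step Fourier integrator, and an arbitrary Gaussian initial state) evolves the free-particle equation and confirms that the total probability stays 1:

```python
import numpy as np

# Split-step Fourier integrator for the 1-D time-dependent Schrodinger
# equation with hbar = m = 1:  i dpsi/dt = -(1/2) psi'' + V psi
N, L = 1024, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers
dt = 0.01

# Normalized Gaussian wave packet with mean momentum k0 (arbitrary choice)
k0, sigma = 2.0, 1.0
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

V = np.zeros_like(x)                  # free particle; insert any potential here
expV = np.exp(-1j * V * dt / 2)       # half-step potential propagator
expT = np.exp(-1j * (k**2 / 2) * dt)  # full-step kinetic propagator (k-space)

for _ in range(500):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

# Unitary evolution conserves probability: the norm should print as 1.0
print(np.sum(np.abs(psi)**2) * (L / N))
```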
The connection to de Broglie is best thought of as historical. It's related to how Schrödinger figured out the equation, but don't look for a rigorous connection. As for the Hamiltonian, that's a very useful concept from classical mechanics. In this case, the Hamiltonian is a measure of the total energy of the system and is defined classically as $H = \frac{p^2}{2m} + V(\vec r)$. In many classical systems it's a conserved quantity. $H$ also lets you calculate classical equations of motion in terms of position and momentum. One big jump to quantum mechanics is that position and momentum are linked, so knowing 'everything' about the position (the wavefunction $\psi(\vec r)$) at one point in time tells you 'everything' about momentum and evolution. In classical mechanics, that's not enough information; you must know both a particle's position and momentum to predict its future motion.

Thank you! One last question. How does one relate the measurement principle to the equations, i.e. that an act of measurement will cause the state to collapse to an eigenstate? Or is time a concept independent of the equation? – Temitope.A Dec 16 '12 at 11:37

Can states of entanglement be seen in the equation too? – Temitope.A Dec 16 '12 at 11:47

Note that user10347 talks of a potential added to the differential equation. To get real-world solutions that predict the result of a measurement one has to apply the boundary conditions of the problem. The "collapse" vocabulary is misleading. A measurement has a specific probability of existing in the space coordinates or with the four-vectors measured. The measurement itself disturbs the potential and the boundary conditions change, so that after the measurement different solutions/psi functions will apply. – anna v Dec 16 '12 at 13:23

One type of measurement is strong measurement, where we, the experimentalists, measure some differential operator $A$ and find some particular (real) number $a_i$, which is one of the eigenvalues of $A$. (Important detail: for $A$ to be measurable, it must have all real eigenvalues.) Then we know the wavefunction "suddenly" turns into $\psi_i$, the eigenfunction of $A$ whose eigenvalue was that number $a_i$ we measured. The system has lost all knowledge of the original wavefunction $\psi$. The probability of measuring $a_i$ is $|<\psi_i | \psi>|^2$. – emarti Dec 18 '12 at 7:12

@Temitope.A: Entanglement isn't obvious in anything here because I've only written single-particle wavefunctions. A two-particle wavefunction $\Psi(\vec r_1, \vec r_2)$ gives a probability $\int_{V_1}\int_{V_2}|\Psi|^2 d \vec r_1 d \vec r_2$ of detecting one particle in a region $V_1$ and a second particle in a region $V_2$. A simple solution for distinguishable particles is $\Psi(\vec r_1, \vec r_2) = \psi_1(\vec r_1) \psi_2(\vec r_2)$, and it can be shown that this satisfies all our conditions. An entangled state cannot be written so simply. (Indistinguishable particles take more care.) – emarti Dec 18 '12 at 9:32

Another answer:

What you write is the time-dependent Schrödinger equation. This is not the equation of a true wave. Schrödinger postulated the equation using a heuristic approach and some ideas/analogies from optics, and he believed in the existence of a true wave. However, the correct interpretation of $\Psi$ was given by Born: $\Psi$ is an unobservable function, whose complex square $|\Psi|^2$ gives probabilities.
In older literature $\Psi$ is still named the wavefunction; in modern literature the term state function is preferred. The terms "wave equation" and "wave formulation" are legacy terms. In fact, part of Schrödinger's confusion, when he believed that his equation described a physical wave, is due to the fact that he worked with single particles. In that case $\Psi$ is defined in an abstract space which is isomorphic to three-dimensional space. However, when you consider a second particle and write $\Psi$ for a two-body system, the isomorphism is broken and the superficial analogy with a physical wave is completely lost. A good discussion of this is given in Ballentine's textbook on quantum mechanics (section 4.2).

The Schrödinger equation cannot be derived from wave theory. This is why the equation is postulated in quantum mechanics.

There is no Hamiltonian for one state; the Hamiltonian is characteristic of a given system, independently of its state. Energy is a possible physical property of a system, one of the possible observables of a system; it is more correct to say that the Hamiltonian gives the energy of a system when the system is in certain states. A quantum system always has a Hamiltonian, but does not always have a defined energy. Only certain states $\Psi_E$ that satisfy the time-independent Schrödinger equation $H\Psi_E = E \Psi_E$ are associated with a value $E$ of energy. The quantum system can be in a superposition of the $\Psi_E$ states or can be in more general states for which energy is not defined.

Wavefunctions $\Psi$ have to satisfy a number of basic requirements such as continuity, differentiability, finiteness, normalization... Some texts emphasize that the wavefunctions must be single-valued, but I take this as already included in the definition of a function.

The Schrödinger equation gives both "the form of the wave function" and "the evolution of a wave function". If you know $\Psi$ at some initial time and integrate the time-dependent Schrödinger equation, you obtain the form of the wavefunction at some other instant: the integration is direct and gives $\Psi(t) = \mathrm{Texp}(-\mathrm{i}/\hbar \int_0^t H(t') dt') \Psi(0)$, where $\mathrm{Texp}$ denotes a time-ordered exponential. This equation also gives the evolution of the initial wavefunction $\Psi(0)$. When the Hamiltonian is time-independent, the solution simplifies to $\Psi(t) = \exp(-\mathrm{i}Ht/\hbar) \Psi(0)$. For stationary states, the time-dependent Schrödinger equation that you write reduces to the time-independent Schrödinger equation $H\Psi_E = E \Psi_E$; the demonstration is given in any textbook. For stationary states there is no evolution of the wavefunction: $\Psi_E$ does not depend on time, and solving the equation only gives the form of the wavefunction.

Good answer. I would only add that regarding the last point, I think the confusion comes from references to the "time-independent" Schrödinger eigenvalue equation $H\psi_E = E\psi_E$ being conflated with the "time-dependent" evolution equation $\mathrm{i}\hbar \dot{\psi} = H\psi$, when of course the two are entirely different beasts. –  Chris White Dec 15 '12 at 21:07

@ChrisWhite Good point. Made. –  juanrga Dec 16 '12 at 2:33

6th paragraph: maybe you should add that the equation only holds if H is time-independent. –  ungerade Dec 16 '12 at 12:19

@ungerade Another good point! Added evolution when H is time-dependent. –  juanrga Dec 16 '12 at 12:49
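As a concrete illustration of the Hamiltonian and evolution statements in this answer, here is a small numerical sketch (my addition, not part of the answer; it assumes units $\hbar = m = 1$ and a harmonic potential): diagonalizing a finite-difference Hamiltonian yields the energy eigenstates, and the time-independent-$H$ propagator $\exp(-\mathrm{i}Ht)$ changes a stationary state only by a phase.

```python
import numpy as np
from scipy.linalg import expm

N = 200
x = np.linspace(-8, 8, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + x^2/2 as a finite-difference matrix
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# Time-independent equation H psi_E = E psi_E, solved by diagonalization
E, states = np.linalg.eigh(H)
print(E[:4])   # ~0.5, 1.5, 2.5, 3.5: the harmonic-oscillator ladder

# Psi(t) = exp(-i H t) Psi(0); a stationary state only picks up a phase
psi0 = states[:, 0].astype(complex)
psi_t = expm(-1j * H * 0.7) @ psi0
print(np.allclose(np.abs(psi_t), np.abs(psi0)))   # True
```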
If you take the wave equation $$\nabla^2\phi = \frac{1}{u^2}\frac{d^2\phi}{dt^2}\text{,}$$ and consider a single frequency component of a wave while taking out its time dependence, $\phi = \psi e^{-i\omega t}$, then: $$\nabla^2 \phi = -\frac{4\pi^2}{\lambda^2}\phi\text{,}$$ but that means the wave amplitude should satisfy an equation of the same form: $$\nabla^2 \psi = -\frac{4\pi^2}{\lambda^2}\psi\text{,}$$ and if you know the de Broglie relation $\lambda = h/p$, where a particle of energy $E$ in a potential $V$ has momentum $p = \sqrt{2m(E-V)}$, then: $$\underbrace{-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi}_{\hat{H}\psi} = E\psi\text{.}$$ Therefore, the time-independent Schrödinger equation has a connection to the wave equation. The full Schrödinger equation can be recovered by putting time-dependence back in, $\Psi = \psi e^{-i\omega t}$, while respecting the relation $E = \hbar\omega$: $$\hat{H}\Psi = (\hat{H}\psi)e^{-i\omega t} = \hbar\omega \psi e^{-i\omega t} = i\hbar\frac{\partial\Psi}{\partial t}\text{,}$$ and then applying the principle of superposition for the general case. However, in this process the repeated application of the de Broglie relations takes us away from either classical waves or classical particles; to what extent the resulting "wave function" should be considered a wave is mostly a semantic issue, but it's definitely not at all a classical wave. As other answers have delved into, the proper interpretation for this new "wave function" $\Psi$ is inherently probabilistic, with its modulus-squared representing a probability density and the gradient of the complex phase being the probability current (scaled by some constants and the probability density).

As for the de Broglie relations themselves, it's possible to "guess" them by making an analogy from waves to particles. Writing $u = c/n$ and looking for solutions close to plane-wave form, $\phi = e^{A+ik_0(S-ct)}$, the wave equation gives: $$\begin{eqnarray*} \nabla^2A + (\nabla A)^2 &=& k_0^2[(\nabla S)^2 - n^2]\text{,}\\ \nabla^2 S +2\nabla A\cdot\nabla S &=& 0\text{.} \end{eqnarray*}$$ Under the assumption that the index of refraction $n$ changes slowly over distances on the order of the wavelength, $A$ does not vary rapidly, the wavelength is small, and so $k_0^2 \propto \lambda^{-2}$ is large. Therefore the term in the square brackets should be small, and we can make the approximation: $$(\nabla S)^2 = n^2\text{,}$$ which is the eikonal equation that links the wave equation with geometrical optics, in which the motion of light of small wavelengths in a medium of well-behaved refractive index can be treated as rays, i.e., as if described by paths of particles/corpuscles. For the particle analogy to work, the eikonal function $S$ must take the role of Hamilton's characteristic function $W$, formed by separation of variables from the classical Hamilton-Jacobi equation into $W - Et$, which forces the latter to be proportional to the total phase of the wave, giving $E = h\nu$ for some unknown constant of proportionality $h$ (physically Planck's constant). The index of refraction $n$ corresponds to $\sqrt{2m(E-V)}$. This is discussed in, e.g., Goldstein's Classical Mechanics, if you're interested in details.

Your first equation is a wave equation only if you substitute the total time derivatives by partial ones.
Moreover, you introduce a $\Psi = \psi e^{-i\omega t} = \phi$, but the wavefunction $\Psi$ does not satisfy the first equation for a wave. –  juanrga Dec 18 '12 at 11:21
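As a footnote to the answers above, here is a small symbolic check (my addition; it assumes a free particle, $V = 0$) that the plane wave $e^{i(kx-\omega t)}$ satisfies the time-dependent Schrödinger equation exactly when $E = \hbar\omega$ and $p = \hbar k$ obey $E = p^2/2m$:

```python
import sympy as sp

x, t, k, w, hbar, m = sp.symbols('x t k w hbar m', positive=True)
psi = sp.exp(sp.I * (k * x - w * t))

lhs = sp.I * hbar * sp.diff(psi, t)               # i hbar d(psi)/dt
rhs = -(hbar**2) / (2 * m) * sp.diff(psi, x, 2)   # -(hbar^2/2m) d^2(psi)/dx^2

print(sp.simplify(lhs / psi))   # hbar*w
print(sp.simplify(rhs / psi))   # hbar**2*k**2/(2*m)
# The two sides agree exactly when hbar*w = (hbar*k)^2/(2m), i.e. E = p^2/2m.
```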
Materials Research

FP-MST is an ab initio electronic structure calculation package for solid state materials. It is based on full-potential (FP) multiple scattering theory (MST), also known as the Korringa-Kohn-Rostoker (KKR) method, and solves for the Green function of the Kohn-Sham equation in Density Functional Theory (DFT). FP-MST was developed by PSC staff and will be made available for download.

Package Description

FP-MST implements the KKR method for solving the one-electron Schrödinger equation. Unlike many other ab initio packages, it solves for both core and valence states (making it an "all-electron" method), and it uses neither pseudo-potentials nor a plane-wave basis. FP-MST also implements the Locally Self-consistent Multiple Scattering (LSMS) method, a linear-scaling, real-space MST approach to the solution of the Kohn-Sham equation. It is capable of performing electronic structure calculations for systems that may require tens of thousands of atoms in the unit cell. FP-MST goes beyond the muffin-tin approximation employed in the conventional KKR method, and is capable of performing full-potential electronic structure calculations without requiring the potential and charge density to be spherically symmetric. FP-MST takes advantage of the computational independence between the processes for each atom, energy mesh point along an energy contour, spin index, k-space mesh point, and angular momentum quantum number, and employs MPI with MPI groups for multi-level parallelization over these processes (see the illustrative sketch below). FP-MST also takes advantage of GPGPU devices and is able to offload compute-intensive calculations to the accelerators if they are available. FP-MST is designed to perform the following electronic structure calculations:

• Conventional k-space approach to the electronic structure calculation for a crystal with periodic boundary conditions
• Real space approach to the electronic structure calculation for a complex structure
• Spin-polarized calculation for ferromagnetic or anti-ferromagnetic materials
• Spin-canted calculation for materials with non-collinear magnetic structure
• Scalar-relativistic (or semi-relativistic) electronic structure calculation
• Conventional KKR-CPA method with muffin-tin approximation for random alloys
• Full-potential KKR-CPA method for random alloys

Starting Potential

Various starting potentials for each element are provided for performing SCF electronic structure calculations. The potential is generated either from a single-site calculation or from an SCF electronic structure calculation for a crystal. The muffin-tin potentials are in traditional KKR format.
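Returning to the parallelization scheme described above, the following is an illustrative sketch only (my addition, not FP-MST code; the group count and the toy per-atom workload are hypothetical). It shows the general pattern of multi-level MPI parallelization with communicator groups: the world communicator is split into groups over energy mesh points, and ranks within each group divide the atoms:

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
n_energy_groups = 4   # hypothetical number of energy mesh points on the contour

# Level 1: one communicator group per energy point
color = world.Get_rank() % n_energy_groups
energy_comm = world.Split(color=color, key=world.Get_rank())

# Level 2: within a group, distribute atoms among the group's ranks
n_atoms = 1000
my_atoms = range(energy_comm.Get_rank(), n_atoms, energy_comm.Get_size())
local_work = sum(1 for _ in my_atoms)   # stand-in for per-atom scattering work

# Combine per-atom results within the group, then across all energy points
group_total = energy_comm.allreduce(local_work, op=MPI.SUM)
grand_total = world.allreduce(local_work, op=MPI.SUM)
if world.Get_rank() == 0:
    print(group_total, grand_total)
```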
Abraham Meets Abraham from a Parallel Universe

And he [Abraham] lifted up his eyes and looked, and, lo, three men stood over against him…  (Genesis 18:2)

On this blog, we often discuss the collapse of the wavefunction as the result of a measurement. This phenomenon is called the "measurement problem." There are several reasons why the collapse of the wavefunction – part and parcel of the Copenhagen interpretation of quantum mechanics – is called a problem. Firstly, it does not follow from the Schrödinger equation, the main equation of quantum mechanics that describes the evolution of the wavefunction in time, and is added ad hoc. Secondly, nobody knows how the collapse happens or how long it takes. This is not to mention that any notion that the collapse of the wavefunction is caused by human consciousness, [...]
Archive for December, 2018

Incompleteness ex machina
Sunday, December 30th, 2018

I have a treat with which to impress your friends at New Year’s Eve parties tomorrow night: a rollicking essay graciously contributed by a reader named Sebastian Oberhoff, about a unified and simplified way to prove all of Gödel’s Incompleteness Theorems, as well as Rosser’s Theorem, directly in terms of computer programs. In particular, this improves over my treatments in Quantum Computing Since Democritus and my Rosser’s Theorem via Turing machines post. While there won’t be anything new here for the experts, I loved the style—indeed, it brings back wistful memories of how I used to write, before I accumulated too many imaginary (and non-imaginary) readers tut-tutting at crass jokes over my shoulder. May 2019 bring us all the time and the courage to express ourselves authentically, even in ways that might be sneered at as incomplete, inconsistent, or unsound.

Thursday, December 27th, 2018

I’m planning to be in Australia soon—in Melbourne January 4-10 for a friend’s wedding, then in Sydney January 10-11 to meet colleagues and give a talk. It will be my first trip down under for 12 years (and Dana’s first ever). If there’s interest, I might be able to do a Shtetl-Optimized meetup in Melbourne the evening of Friday the 4th (or the morning of Saturday the 5th), and/or another one in Sydney the evening of Thursday the 10th. Email me if you’d go, and then we’ll figure out details.

The National Quantum Initiative Act is now law. Seeing the photos of Trump signing it, I felt … well, whatever emotions you might imagine I felt.

Frank Verstraete asked me to announce that the University of Vienna is seeking a full professor in quantum algorithms; see here for details.

Why are amplitudes complex?
Monday, December 17th, 2018

[By prior agreement, this post will be cross-posted on Microsoft’s Q# blog, even though it has nothing to do with the Q# programming language.  It does, however, contain many examples that might be fun to implement in Q#!]

Why should Nature have been quantum-mechanical?  It’s totally unclear what would count as an answer to such a question, and also totally clear that people will never stop asking it. Short of an ultimate answer, we can at least try to explain why, if you want this or that piece of quantum mechanics, then the rest of the structure is inevitable: why quantum mechanics is an “island in theoryspace,” as I put it in 2003.

In this post, I’d like to focus on a question that any “explanation” for QM at some point needs to address, in a non-question-begging way: why should amplitudes have been complex numbers?  When I was a grad student, it was his relentless focus on that question, and on others in its vicinity, that made me a lifelong fan of Chris Fuchs (see for example his samizdat), despite my philosophical differences with him.

It’s not that complex numbers are a bad choice for the foundation of the deepest known description of the physical universe—far from it!  (They’re a field, they’re algebraically closed, they’ve got a norm, how much more could you want?)  It’s just that they seem like a specific choice, and not the only possible one.  There are also the real numbers, for starters, and in the other direction, the quaternions. Quantum mechanics over the reals or the quaternions still has constructive and destructive interference among amplitudes, and unitary transformations, and probabilities that are absolute squares of amplitudes.
Moreover, these variants turn out to lead to precisely the same power for quantum computers—namely, the class BQP—as “standard” quantum mechanics, the one over the complex numbers.  So none of those are relevant differences.

Indeed, having just finished teaching an undergrad Intro to Quantum Information course, I can attest that the complex nature of amplitudes is needed only rarely—shockingly rarely, one might say—in quantum computing and information.  Real amplitudes typically suffice.  Teleportation, superdense coding, the Bell inequality, quantum money, quantum key distribution, the Deutsch-Jozsa and Bernstein-Vazirani and Simon and Grover algorithms, quantum error-correction: all of those and more can be fully explained without using a single i that’s not a summation index.  (Shor’s factoring algorithm is an exception; it’s much more natural with complex amplitudes.  But as the previous paragraph implied, their use is removable even there.)

It’s true that, if you look at even the simplest “real” examples of quantum systems—or as a software engineer might put it, at the application layers built on top of the quantum OS—then complex numbers are everywhere, in a way that seems impossible to remove.  The Schrödinger equation, energy eigenstates, the position/momentum commutation relation, the state space of a spin-1/2 particle in 3-dimensional space: none of these make much sense without complex numbers (though it can be fun to try).

But from a sufficiently Olympian remove, it feels circular to use any of this as a “reason” for why quantum mechanics should’ve involved complex amplitudes in the first place.  It’s like, once your OS provides a certain core functionality (in this case, complex numbers), it’d be surprising if the application layer didn’t exploit that functionality to the hilt—especially if we’re talking about fundamental physics, where we’d like to imagine that nothing is wasted or superfluous (hence Rabi’s famous question about the muon: “who ordered that?”).

But why should the quantum OS have provided complex-number functionality at all?  Is it possible to answer that question purely in terms of the OS’s internal logic (i.e., abstract quantum information), making minimal reference to how the OS will eventually get used?  Maybe not—but if so, then that itself would seem worthwhile to know.

If we stick to abstract quantum information language, then the most “obvious, elementary” argument for why amplitudes should be complex numbers is one that I spelled out in Quantum Computing Since Democritus, as well as my Is quantum mechanics an island in theoryspace? paper.  Namely, it seems desirable to be able to implement a “fraction” of any unitary operation U: for example, some V such that $V^2=U$, or $V^3=U$.  With complex numbers, this is trivial: we can simply diagonalize U, or use the Hamiltonian picture (i.e., take $e^{-iH/2}$ where $U=e^{-iH}$), both of which ultimately depend on the complex numbers being algebraically closed.  Over the reals, by contrast, a 2×2 orthogonal matrix like $$ U = \left(\begin{array}[c]{cc}1 & 0\\0 & -1\end{array}\right)$$ has no 2×2 orthogonal square root, as follows immediately from its determinant being -1.
If we want a square root of U (or rather, of something that acts like U on a subspace) while sticking to real numbers only, then we need to add another dimension, like so: $$ \left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0&-1\end{array}\right)=\left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & 0 & 1\\0 & -1 & 0\end{array}\right) ^{2} $$ This is directly related to the fact that there’s no way for a Flatlander to “reflect herself” (i.e., switch her left and right sides while leaving everything else unchanged) by any continuous motion, unless she can lift off the plane and rotate herself through the third dimension.  Similarly, for us to reflect ourselves would require rotating through a fourth dimension.
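Both claims are easy to check numerically. Here is a quick sketch (my addition, not from the post): over the complex numbers the principal square root of $U = \mathrm{diag}(1,-1)$ is the unitary $\mathrm{diag}(1, i)$, while over the reals the workaround is the 3×3 rotation above:

```python
import numpy as np
from scipy.linalg import sqrtm

U = np.diag([1.0, -1.0])
V = sqrtm(U)                                     # principal root: diag(1, i)
print(np.allclose(V @ V, U))                     # True
print(np.allclose(V @ V.conj().T, np.eye(2)))    # True: V is unitary, but complex

# The real workaround: rotate through an extra dimension
W = np.array([[1., 0., 0.],
              [0., 0., 1.],
              [0., -1., 0.]])
print(np.allclose(W @ W, np.diag([1., -1., -1.])))   # True
print(np.allclose(W @ W.T, np.eye(3)))               # True: real orthogonal
```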
One could reasonably ask: is that it?  Aren’t there any “deeper” reasons in quantum information for why amplitudes should be complex numbers?

Indeed, there are certain phenomena in quantum information that, slightly mysteriously, work out more elegantly if amplitudes are complex than if they’re real.  (By “mysteriously,” I mean not that these phenomena can’t be 100% verified by explicit calculations, but simply that I don’t know of any deep principle by which the results of those calculations could’ve been predicted in advance.)

One famous example of such a phenomenon is due to Bill Wootters: if you take a uniformly random pure state in d dimensions, and then you measure it in an orthonormal basis, what will the probability distribution $(p_1,\ldots,p_d)$ over the d possible measurement outcomes look like?  The answer, amazingly, is that you’ll get a uniformly random probability distribution: that is, a uniformly random point on the simplex defined by $p_i \geq 0$ and $p_1+\ldots+p_d = 1$.  This fact, which I’ve used in several papers, is closely related to Archimedes’ Hat-Box Theorem, beloved by friend-of-the-blog Greg Kuperberg.  But here’s the kicker: it only works if amplitudes are complex numbers.  If amplitudes are real, then the resulting distribution over distributions will be too bunched up near the corners of the probability simplex; if they’re quaternions, it will be too bunched up near the middle.

There’s an even more famous example of such a Goldilocks coincidence—one that’s been elevated, over the past two decades, to exalted titles like “the Axiom of Local Tomography.”  Namely: suppose we have an unknown finite-dimensional mixed state ρ, shared by two players Alice and Bob.  For example, ρ might be an EPR pair, or a correlated classical bit, or simply two qubits both in the state |0⟩.  We imagine that Alice and Bob share many identical copies of ρ, so that they can learn more and more about it by measuring this copy in this basis, that copy in that basis, and so on.

We then ask: can ρ be fully determined from the joint statistics of product measurements—that is, measurements that Alice and Bob can apply separately and locally to their respective subsystems, with no communication between them needed?  A good example here would be the set of measurements that arise in a Bell experiment—measurements that, despite being local, certify that Alice and Bob must share an entangled state.

If we asked the analogous question for classical probability distributions, the answer is clearly “yes.”  That is, once you’ve specified the individual marginals, and you’ve also specified all the possible correlations among the players, you’ve fixed your distribution; there’s nothing further to specify.

For quantum mixed states, the answer again turns out to be yes, but only because amplitudes are complex numbers!  In quantum mechanics over the reals, you could have a 2-qubit state like $$ \rho=\frac{1}{4}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1& 0 & 0 & 1\end{array}\right) ,$$ which clearly isn’t the maximally mixed state, yet which is indistinguishable from the maximally mixed state by any local measurement that can be specified using real numbers only.  (Proof: exercise!)

In quantum mechanics over the quaternions, something even “worse” happens: namely, the tensor product of two Hermitian matrices need not be Hermitian.  Alice’s measurement results might be described by the 2×2 quaternionic density matrix $$ \rho_{A}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -i\\i & 1\end{array}\right), $$ and Bob’s results might be described by the 2×2 quaternionic density matrix $$ \rho_{B}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -j\\j & 1\end{array}\right), $$ and yet there might not be (and in this case, isn’t) any 4×4 quaternionic density matrix corresponding to $\rho_A\otimes\rho_B$, which would explain both results separately.

What’s going on here?  Why do the local measurement statistics underdetermine the global quantum state with real amplitudes, and overdetermine it with quaternionic amplitudes, being in one-to-one correspondence with it only when amplitudes are complex?

We can get some insight by looking at the number of independent real parameters needed to specify a d-dimensional Hermitian matrix.  Over the complex numbers, the number is exactly $d^2$: we need 1 parameter for each of the d diagonal entries, and 2 (a real part and an imaginary part) for each of the $d(d-1)/2$ upper off-diagonal entries (the lower off-diagonal entries being determined by the upper ones).  Over the real numbers, by contrast, “Hermitian matrices” are just real symmetric matrices, so the number of independent real parameters is only $d(d+1)/2$.  And over the quaternions, the number is $d+4[d(d-1)/2] = d(2d-1)$.

Now, it turns out that the Goldilocks phenomenon that we saw above—with local measurement statistics determining a unique global quantum state when and only when amplitudes are complex numbers—ultimately boils down to the simple fact that $$ (d_A d_B)^2 = d_A^2 d_B^2, $$ but $$\frac{d_A d_B (d_A d_B + 1)}{2} > \frac{d_A (d_A + 1)}{2} \cdot \frac{d_B (d_B + 1)}{2},$$ and conversely $$ d_A d_B (2 d_A d_B - 1) < d_A (2 d_A - 1) \cdot d_B (2 d_B - 1).$$ In other words, only with complex numbers does the number of real parameters needed to specify a “global” Hermitian operator exactly match the product of the number of parameters needed to specify an operator on Alice’s subsystem, and the number of parameters needed to specify an operator on Bob’s.  With real numbers it overcounts, and with quaternions it undercounts.
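The counting itself takes one line to verify; here is a trivial arithmetic check (my addition, not from the post):

```python
def n_complex(d): return d * d              # complex Hermitian d x d
def n_real(d):    return d * (d + 1) // 2   # real symmetric d x d
def n_quat(d):    return d * (2 * d - 1)    # quaternionic Hermitian d x d

for dA, dB in [(2, 2), (2, 3), (3, 3)]:
    d = dA * dB
    print(n_complex(d) == n_complex(dA) * n_complex(dB),   # True: exact match
          n_real(d)    >  n_real(dA)    * n_real(dB),      # True: overcounts
          n_quat(d)    <  n_quat(dA)    * n_quat(dB))      # True: undercounts
```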
A major research goal in quantum foundations, since at least the early 2000s, has been to “derive” the formalism of QM purely from “intuitive-sounding, information-theoretic” postulates—analogous to how, in 1905, some guy whose name I forget derived the otherwise strange-looking Lorentz transformations purely from the assumption that the laws of physics (including a fixed, finite value for the speed of light) take the same form in every inertial frame.  There have been some nontrivial successes of this program: most notably, the “axiomatic derivations” of QM due to Lucien Hardy and (more recently) Chiribella et al.  Starting from axioms that sound suitably general and nontechnical (if sometimes unmotivated and weird), these derivations perform the impressive magic trick of deriving the full mathematical structure of QM: complex amplitudes, unitary transformations, tensor products, the Born rule, everything.

However, in every such derivation that I know of, some axiom needs to get introduced to capture “local tomography”: i.e., the “principle” that composite systems must be uniquely determined by the statistics of local measurements.  And while this principle might sound vague and unobjectionable, to those in the business, it’s obvious what it’s going to be used for the second it’s introduced.  Namely, it’s going to be used to rule out quantum mechanics over the real numbers, which would otherwise be a model for the axioms, and thus to “explain” why amplitudes have to be complex.

I confess that I was always dissatisfied with this.  For I kept asking myself: would I have ever formulated the “Principle of Local Tomography” in the first place—or if someone else had proposed it, would I have ever accepted it as intuitive or natural—if I didn’t already know that QM over the complex numbers just happens to satisfy it?  And I could never honestly answer “yes.”  It always felt to me like a textbook example of drawing the target around where the arrow landed—i.e., of handpicking your axioms so that they yield a predetermined conclusion, which is then no more “explained” than it was at the beginning.

Two months ago, something changed for me: namely, I smacked into the “Principle of Local Tomography,” and its reliance on complex numbers, in my own research, when I hadn’t in any sense set out to look for it.  This still doesn’t convince me that the principle is any sort of a-priori necessity.  But it at least convinces me that it’s, you know, the sort of thing you can smack into when you’re not looking for it.

The aforementioned smacking occurred while I was writing up a small part of a huge paper with Guy Rothblum, about a new connection between so-called “gentle measurements” of quantum states (that is, measurements that don’t damage the states much), and the subfield of classical CS called differential privacy.  That connection is a story in itself; let me know if you’d like me to blog about it separately.  Our paper should be on the arXiv any day now; in the meantime, here are some PowerPoint slides.

Anyway, for the paper with Guy, it was of interest to know the following: suppose we have a two-outcome measurement E (let’s say, on n qubits), and suppose it accepts every product state with the same probability p.  Must E then accept every entangled state with probability p as well?  Or, a closely-related question: suppose we know E’s acceptance probabilities on every product state.  Is that enough to determine its acceptance probabilities on all n-qubit states?

I’m embarrassed to admit that I dithered around with these questions, finding complicated proofs for special cases, before I finally stumbled on the one-paragraph, obvious-in-retrospect “Proof from the Book” that slays them in complete generality.

Here it is: if E accepts every product state with probability p, then clearly it accepts every separable mixed state (i.e., every convex combination of product states) with the same probability p.  Now, a well-known result of Braunstein et al., from 1998, states that (surprisingly enough) the separable mixed states have nonzero density within the set of all mixed states, in any given finite dimension.
Also, the probability that E accepts ρ can be written as f(ρ)=Tr(Eρ), which is linear in the entries of ρ.  OK, but a linear function that’s determined on a subset of nonzero density is determined everywhere.  And in particular, if f is constant on that subset then it’s constant everywhere, QED.

But what does any of this have to do with why amplitudes are complex numbers?  Well, it turns out that the 1998 Braunstein et al. result, which was the linchpin of the above argument, only works in complex QM, not in real QM.  We can see its failure in real QM by simply counting parameters, similarly to what we did before.  An n-qubit density matrix requires $4^n$ real parameters to specify (OK, $4^n-1$, if we demand that the trace is 1).  Even if we restrict to n-qubit density matrices with real entries only, we still need $2^n(2^n+1)/2$ parameters.  By contrast, it’s not hard to show that an n-qubit real separable density matrix can be specified using only $3^n$ real parameters—and indeed, that any such density matrix lies in a $3^n$-dimensional subspace of the full $2^n(2^n+1)/2$-dimensional space of $2^n\times 2^n$ symmetric matrices.  (This is simply the subspace spanned by all possible tensor products of n Pauli I, X, and Z matrices—excluding the Y matrix, which is the one that involves imaginary numbers.)

But it’s not only the Braunstein et al. result that fails in real QM: the fact that I wanted for my paper with Guy fails as well.  As a counterexample, consider the 2-qubit measurement that accepts the state ρ with probability Tr(Eρ), where $$ E=\frac{1}{2}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1 & 0 & 0 & 1\end{array}\right).$$ I invite you to check that this measurement, which we specified using a real matrix, accepts every product state (a|0⟩+b|1⟩)(c|0⟩+d|1⟩), where a,b,c,d are real, with the same probability, namely 1/2—just like the “measurement” that simply returns a coin flip without even looking at the state at all.  And yet the measurement can clearly be nontrivial on entangled states: for example, it always rejects $$\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}},$$ and it always accepts $$ \frac{\left|00\right\rangle-\left|11\right\rangle}{\sqrt{2}}.$$

Is it a coincidence that we used exactly the same 4×4 matrix (up to scaling) to produce a counterexample to the real-QM version of Local Tomography, and also to the real-QM version of the property I wanted for the paper with Guy?  Is anything ever a coincidence in this sort of discussion?

I claim that, looked at the right way, Local Tomography and the property I wanted are the same property, their truth in complex QM is the same truth, and their falsehood in real QM is the same falsehood.  Why?  Simply because Tr(Eρ), the probability that the measurement E accepts the mixed state ρ, is a function of two Hermitian matrices E and ρ (both of which can be either “product” or “entangled”), and—crucially—is symmetric under the interchange of E and ρ.
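For readers who prefer to let a computer accept the invitation above, here is a short numerical check (my addition, not from the post) that E accepts every real product state with probability exactly 1/2 while telling the two Bell states apart:

```python
import numpy as np

E = 0.5 * np.array([[ 1, 0, 0, -1],
                    [ 0, 1, 1,  0],
                    [ 0, 1, 1,  0],
                    [-1, 0, 0,  1]], dtype=float)

rng = np.random.default_rng(0)
for _ in range(5):
    a, b, c, d = rng.normal(size=4)
    u = np.array([a, b]) / np.hypot(a, b)   # real qubit a|0> + b|1>
    v = np.array([c, d]) / np.hypot(c, d)   # real qubit c|0> + d|1>
    psi = np.kron(u, v)                     # real product state
    print(psi @ E @ psi)                    # 0.5 every time

bell_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
bell_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)   # (|00> - |11>)/sqrt(2)
print(bell_plus  @ E @ bell_plus)    # 0.0: always rejected
print(bell_minus @ E @ bell_minus)   # 1.0: always accepted
```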
Now it’s time for another confession.  We’ve identified an elegant property of quantum mechanics that’s true but only because amplitudes are complex numbers: namely, if you know the probability that your quantum circuit accepts every product state, then you also know the probability that it accepts an arbitrary state.  Yet, despite its elegance, this property turns out to be nearly useless for “real-world applications” in quantum information and computing.  The reason for the uselessness is that, for the property to kick in, you really do need to know the probabilities on product states almost exactly—meaning (say) to 1/exp(n) accuracy for an n-qubit state.

Once again a simple example illustrates the point.  Suppose n is even, and suppose our measurement simply projects the n-qubit state onto a tensor product of n/2 Bell pairs.  Clearly, this measurement accepts every n-qubit product state with exponentially small probability, even as it accepts the entangled state  $$\left(\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}}\right)^{\otimes n/2}$$ with probability 1.  But this implies that noticing the nontriviality on entangled states would require knowing the acceptance probabilities on product states to exponential accuracy.

In a sense, then, I come back full circle to my original puzzlement: why should Local Tomography, or (alternatively) the-determination-of-a-circuit’s-behavior-on-arbitrary-states-from-its-behavior-on-product-states, have been important principles for Nature’s laws to satisfy?  Especially given that, in practice, the exponential accuracy required makes it difficult or impossible to exploit these principles anyway?  How could we have known a-priori that these principles would be important—if indeed they are important, and are not just mathematical spandrels?

But, while I remain less than 100% satisfied about “why the complex numbers? why not just the reals?,” there’s one conclusion that my recent circling-back to these questions has made me fully confident about.  Namely: quantum mechanics over the quaternions is a flaming garbage fire, which would’ve been rejected at an extremely early stage of God and the angels’ deliberations about how to construct our universe.

In the literature, when the question of “why not quaternionic amplitudes?” is discussed at all, you’ll typically read things about how the parameter-counting doesn’t quite work out (just like it doesn’t for real QM), or how the tensor product of quaternionic Hermitian matrices need not be Hermitian.  In this paper by McKague, you’ll read that the CHSH game is winnable with probability 1 in quaternionic QM, while in this paper by Fernandez and Schneeberger, you’ll read that the non-commutativity of the quaternions introduces an order-dependence even for spacelike-separated operations.

But none of that does justice to the enormity of the problem.  To put it bluntly: unless something clever is done to fix it, quaternionic QM allows superluminal signaling.  This is easy to demonstrate: suppose Alice holds a qubit in the state |1⟩, while Bob holds a qubit in the state |+⟩ (yes, this will work even for unentangled states!)  Also, let $$U=\left(\begin{array}[c]{cc}1 & 0\\0 & j\end{array}\right) ,~~~V=\left(\begin{array}[c]{cc}1 & 0\\0& i\end{array}\right).$$ We can calculate that, if Alice applies U to her qubit and then Bob applies V to his qubit, Bob will be left with the state $$ \frac{j \left|0\right\rangle + k \left|1\right\rangle}{\sqrt{2}}.$$ By contrast, if Alice decided to apply U only after Bob applied V, Bob would be left with the state  $$ \frac{j \left|0\right\rangle - k \left|1\right\rangle}{\sqrt{2}}.$$ But Bob can distinguish these two states with certainty, for example by applying the unitary $$ \frac{1}{\sqrt{2}}\left(\begin{array}[c]{cc}j & k\\k & j\end{array}\right). $$ Therefore Alice communicated a bit to Bob.
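The order-dependence at the heart of this argument is easy to exhibit concretely. In the sketch below (my addition, not from the post), quaternion amplitudes are represented as 2×2 complex matrices ($1 \mapsto I$, $i \mapsto i\sigma_z$, $j \mapsto i\sigma_y$, $k \mapsto i\sigma_x$), and the joint state of Alice's and Bob's qubits visibly depends on which of the two local operations is applied first:

```python
import numpy as np

one = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])               # quaternion i
qj = np.array([[0, 1], [-1, 0]], dtype=complex)  # quaternion j
qk = qi @ qj                                     # k = i*j
assert np.allclose(qj @ qi, -qk)                 # non-commutativity: j*i = -k

# |1>_A (x) |+>_B: quaternion amplitudes for |00>, |01>, |10>, |11>
s = np.sqrt(0.5)
psi = [0 * one, 0 * one, s * one, s * one]

def alice_U(st):   # U = diag(1, j) on Alice's qubit (first index)
    return [st[0], st[1], qj @ st[2], qj @ st[3]]

def bob_V(st):     # V = diag(1, i) on Bob's qubit (second index)
    return [st[0], qi @ st[1], st[2], qi @ st[3]]

uv = bob_V(alice_U(psi))   # Alice first: |11> amplitude (i*j)/sqrt(2) =  k/sqrt(2)
vu = alice_U(bob_V(psi))   # Bob first:   |11> amplitude (j*i)/sqrt(2) = -k/sqrt(2)
print(np.allclose(uv[3], -vu[3]))   # True: the joint state depends on the order
```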
I’m aware that there’s a whole literature on quaternionic QM, including for example a book by Adler.  Would anyone who knows that literature be kind enough to enlighten us on how it proposes to escape the signaling problem?  Regardless of the answer, though, it seems worth knowing that the “naïve” version of quaternionic QM—i.e., the version that gets invoked in quantum information discussions like the ones I mentioned above—is just immediately blasted to smithereens by the signaling problem, without the need for any subtle considerations like the ones that differentiate real from complex QM.

Update (Dec. 20): In response to this post, Stephen Adler was kind enough to email me with further details about his quaternionic QM proposal, and to allow me to share them here. Briefly, Adler completely agrees that quaternionic QM inevitably leads to superluminal signaling—but in his proposal, the surprising and nontrivial part is that quaternionic QM would reduce to standard, complex QM at large distances. In particular, the strength of a superluminal signal would fall off exponentially with distance, quickly becoming negligible beyond the Planck or grand unification scales. Despite this, Adler says that he eventually abandoned his proposal for quaternionic QM, since he was unable to make specific particle physics ideas work out (but the quaternionic QM proposal then influenced his later work).

Unrelated Update (Dec. 18): Probably many of you have already seen it, and/or already know what it covers, but the NYT profile of Donald Knuth (entitled “The Yoda of Silicon Valley”) is enjoyable and nicely written.

The NP genie
Tuesday, December 11th, 2018

Hi from the Q2B conference!
Entanglement is a mysterious quantum phenomenon that is widely, but mistakenly, described as capable of transmitting information over vast distances faster than the speed of light. It has proved very popular with science writers, philosophers of science, and many scientists who hope to use the mystery to deny some of the basic concepts underlying quantum physics. Many of them try to deny indeterminism, ontological chance. Entanglement depends on two quantum properties that are simply impossible in "classical" physics. One is called nonlocality. We shall argue that Albert Einstein first caught a glimpse of nonlocality as early as 1905.
He made a clear public statement about it at the 1927 Solvay conference, but was misunderstood by Niels Bohr and ignored by most physicists until 1935. The other is nonseparability, which Einstein was first to see, even as he attacked the idea, just as he had reacted to his discovery of indeterminism in 1916. A "weakness in the theory," he called chance. In the 1935 Einstein-Podolsky-Rosen paper, Einstein extended nonlocality beyond the relation between a particle and its wave function. It was now extended from one particle to another with which it had interacted. Erwin Schrödinger called them "entangled." Each of these might be considered a mystery in its own right, but fortunately information physics (and the information interpretation of quantum mechanics) can explain them both, with no equations, in a way that should be understandable to the lay person. This may not be good news for the science writers and publishers who turn out so many titles each year claiming that quantum physics implies that there are multiple parallel universes, that we can travel backwards in time, that things can be in two places at the same time, that we can teleport material instantly from one place to another, and of course that we can send signals faster than the speed of light. A couple of these somewhat weird claims about measurements of entangled particles can be illustrated and explained, as we shall see. One deep philosophical claim is that the minds of physicists are manipulating "quantum reality," that there is nothing "really" there until we look at it. The half-truth is that our "free choice" as to which property to measure in an experiment can create a value of a property that did not exist before the experiment. But we cannot force the value to be specific, for example +1 or -1. That is determined by "Nature's choice," ontological randomness ("chance"), which was discovered in the emission of photons by Einstein in 1916.

Einstein's Discovery of Nonlocality and Nonseparability

Einstein was the first to see nonlocal behavior in quantum phenomena. He may have seen it as early as 1905 in the photoelectric effect, the same year he published his special theory of relativity. But it was perfectly clear to him 22 years later (ten years after his general theory of relativity and his explanation of how quanta of light are randomly emitted and absorbed by atoms), when he described nonlocality with a diagram on the blackboard at the fifth Solvay conference of physicists in Belgium in 1927. In his contribution to the 1949 Schilpp volume on Einstein, Niels Bohr gave us a picture of what Einstein drew on that blackboard.

At the general discussion in Como, we all missed the presence of Einstein, but soon after, in October 1927, I had the opportunity to meet him in Brussels at the Fifth Physical Conference of the Solvay Institute, which was devoted to the theme "Electrons and Photons."

Note that they wanted Einstein's reaction to their work, but actually took little interest in Einstein's concern about the nonlocal implications of quantum mechanics.

At the Solvay meetings, Einstein had from their beginning been a most prominent figure, and several of us came to the conference with great anticipations to learn his reaction to the latest stage of the development which, to our view, went far in clarifying the problems which he had himself from the outset elicited so ingeniously.
During the discussions, where the whole subject was reviewed by contributions from many sides and where also the arguments mentioned in the preceding pages were again presented, Einstein expressed, however, a deep concern over the extent to which a causal account in space and time was abandoned in quantum mechanics. To illustrate his attitude, Einstein referred at one of the sessions to the simple example, illustrated by Fig. 1, of a particle (electron or photon) penetrating through a hole or a narrow slit in a diaphragm placed at some distance before a photographic plate.

[Fig. 1: a photon passes through a slit.]

On account of the diffraction of the wave connected with the motion of the particle and indicated in the figure by the thin lines, it is under such conditions not possible to predict with certainty at what point the electron will arrive at the photographic plate, but only to calculate the probability that, in an experiment, the electron will be found within any given region of the plate.

The "nonlocal" effect at point (B) is just the probability of an electron being found at point (B) going to zero instantly (as if an action at a distance) the moment an electron is found at point A.

The apparent difficulty, in this description, which Einstein felt so acutely, is the fact that, if in the experiment the electron is recorded at one point A of the plate, then it is out of the question of ever observing an effect of this electron at another point (B), although the laws of ordinary wave propagation offer no room for a correlation between two such events.

Bohr is telling us that in 1927 Einstein saw instantaneous "correlations" of events widely separated ("as if actions at a distance"), which exactly describes today's perfect "nonlocal" correlations of widely separated entangled particles. Then in 1935, Einstein, Boris Podolsky, and Nathan Rosen proposed a thought experiment (known by their initials as EPR) to exhibit internal contradictions in the new quantum physics. They hoped to show that quantum theory could not describe certain intuitive "elements of reality" and thus was either incomplete or, as they might have hoped, demonstrably incorrect. Einstein and his colleagues Erwin Schrödinger, Max Planck, and others hoped for a return to deterministic physics, and the elimination of mysterious quantum phenomena like superposition of states and "collapse" of the wave function. EPR continues to fascinate determinist philosophers of science hoping to prove that quantum indeterminacy (ontological randomness) does not exist. Beyond the problem of nonlocality, the EPR thought experiment introduced the problem of "nonseparability." This mysterious phenomenon appears to transfer something physical faster than the speed of light. What happens actually is merely instantaneous (simultaneous) knowledge of a distant particle's properties by measurement of a local particle that interacted with the distant particle sometime in the past. The 1935 EPR paper was based on an earlier question of Einstein's about two particles fired in opposite directions from a central source with equal velocities. He imagined them starting at t₀ some distance apart and approaching one another with equal high velocities. Then for a short time interval from t₁ to t₁ + Δt the particles are in contact with one another. Einstein described this situation to Léon Rosenfeld in 1933. Shortly before he left Germany to emigrate to America, Einstein attended a lecture on quantum electrodynamics by Léon Rosenfeld.
Keep in mind that Rosenfeld was perhaps the most dogged defender of the Copenhagen Interpretation, which maintains that a particle has no position until it is measured. After the talk, Einstein asked Rosenfeld, “What do you think of this situation?” Suppose two particles are set in motion towards each other with the same, very large, momentum, and they interact with each other for a very short time when they pass at known positions. Consider now an observer who gets hold of one of the particles, far away from the region of interaction, and measures its momentum: then, from the conditions of the experiment, he will obviously be able to deduce the momentum of the other particle. If, however, he chooses to measure the position of the first particle, he will be able to tell where the other particle is. We can diagram a simple case of Einstein’s question as follows. Recall that it was Einstein who discovered in 1924 the identical nature, indistinguishability, and interchangeability of some quantum particles. He found that identical particles are not independent, altering their quantum statistics. After the particles interact at t₁, quantum mechanics describes them with a single two-particle wave function that is not the product of independent single-particle wave functions. In the case of electrons, which are indistinguishable interchangeable particles, it is not proper to say electron 1 goes this way and electron 2 that way. (Nevertheless, it is convenient to label the particles, as we do in the illustration.) Einstein then asked Rosenfeld, “How can the final state of the second particle be influenced by a measurement performed on the first after all interaction has ceased between them?” This was the germ of the EPR paradox, and ultimately the problem of two-particle entanglement. Why does Einstein question Rosenfeld and describe this as an “influence,” suggesting an “action-at-a-distance”? It is only paradoxical in the context of Rosenfeld’s Copenhagen Interpretation, since the second particle is not itself measured and yet we know something about its properties, which the Copenhagen Interpretation says we cannot know without an explicit measurement. Einstein was clearly correct to tell Rosenfeld that at a later time t₂, a measurement of one particle's position would instantly establish the position of the other particle - without measuring it. Einstein obviously used conservation of linear momentum implicitly to calculate (and know) the position of the second particle. Two years later, after EPR, Schrödinger described two such particles as becoming "entangled" (verschränkt) at their first interaction, so "nonlocal" phenomena are also known as "quantum entanglement." Although conservation laws are rarely cited as the explanation, they are the physical reason that entangled particles always produce correlated results for all properties. If the results were not always correlated, the implied violation of a fundamental conservation law would cause a much bigger controversy than entanglement itself, as puzzling as that is. This idea of something measured in one place "influencing" measurements far away challenged what Einstein thought of as "local reality." It came to be known as "nonlocality." Einstein called it a "spukhafte Fernwirkung" or "spooky action at a distance." We prefer to describe this phenomenon as "knowledge at a distance." No action has been performed on the distant particle simply because we learn about its position.
Note that this assumes the distant particle has not been disturbed by an interaction with the environment. In the year following the Einstein-Podolsky-Rosen paper, Erwin Schrödinger looked more carefully at Einstein's "separability" assumption (Trennungsprinzip) that an entangled system can be separated enough to be regarded as two systems with independent wave functions:

Years ago I pointed out that when two systems separate far enough to make it possible to experiment on one of them without interfering with the other, they are bound to pass, during the process of separation, through stages which were beyond the range of quantum mechanics as it stood then. For it seems hard to imagine a complete separation, whilst the systems are still so close to each other, that, from the classical point of view, their interaction could still be described as an unretarded actio in distans. And ordinary quantum mechanics, on account of its thoroughly unrelativistic character, really only deals with the actio in distans case. The whole system (comprising in our case both systems) has to be small enough to be able to neglect the time that light takes to travel across the system, compared with such periods of the system as are essentially involved in the changes that take place... It seems worth noticing that the paradox could be avoided by a very simple assumption, namely if the situation after separating were described by the expansion [ψ(x,y) = Σ aₖ gₖ(x) fₖ(y), as assumed in EPR], but with the additional statement that the knowledge of the phase relations between the complex constants aₖ has been entirely lost in consequence of the process of separation.

When some interaction, like a measurement, causes a separation, the two-particle wave function Ψ₁₂ collapses and the system decoheres into the product Ψ₁Ψ₂; the particles lose their "influence" on one another, but not the possibility of acquiring information about the second system by (nonlocal) measurements on the first.

This would mean that not only the parts, but the whole system, would be in the situation of a mixture, not of a pure state. It would not preclude the possibility of determining the state of the first system by suitable measurements in the second one or vice versa. But it would utterly eliminate the experimenter's influence on the state of that system which he does not touch.

Schrödinger says that the entangled system may become disentangled (Einstein's separation) and yet some perfect correlations between later measurements might remain. Note that the entangled system could simply decohere as a result of interactions with the environment, as proposed by decoherence theorists. The perfectly correlated results of Bell-inequality experiments might nevertheless be preserved, depending on the interaction. Schrödinger tells us that the two-particle wave function Ψ₁₂ will be separated into the product of single-particle wave functions Ψ₁ and Ψ₂ by a measurement of either particle, for example, by either Alice's or Bob's measurements in the case of Bell's Theorem. As we saw, Einstein had objected to nonlocal phenomena as early as the Solvay Conference of 1927, when he criticized the collapse of the wave function as "instantaneous-action-at-a-distance" that prevents the wave from "acting at more than one place on the screen."
The simultaneous events at points A and B in Einstein's 1927 Figure 1 above are the same kind of nonlocality as the two entangled particles acquiring perfectly correlated properties while in a spacelike separation that he suggested to Rosenfeld in 1933, and which Podolsky and Rosen developed into the EPR paradox in 1935. Einstein's 1927 concern was based on the idea that the light wave might contain some kind of ponderable energy. At that time Schrödinger thought it might be distributed electricity. In these cases the instantaneous "collapse" of the wave function might violate Einstein's principle of relativity, a concern he first expressed in 1909. When we recognize that the wave function is only pure information about the probability of finding a particle (or particles) somewhere, we see that there is no matter or energy traveling faster than the speed of light.

Einstein's criticism somewhat resembles the criticisms by Descartes and others of Newton's theory of gravitation. Newton's opponents charged that his theory was "action at a distance" and instantaneous. Einstein's own theory of general relativity shows that gravitational influences travel at the speed of light and are mediated by a gravitational field that can be described as curved space-time.

When a probability function collapses to unity in one place and zero elsewhere, nothing physical is moving from one place to the other. When the nose of one horse crosses the finish line, its probability of winning goes to certainty, and the finite probabilities of the other horses, including the one in the rear, instantaneously drop to zero. This happens faster than the speed of light, since the last horse is in a "space-like" separation. But it does not violate relativity.

The first practical and workable experiments to test the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR) were suggested by David Bohm in 1952. Instead of measuring linear momentum, Bohm proposed using two electrons that are prepared in an initial state of known total spin. Momentum and position are continuous variables; spin is discrete. Bohm argued that measurements of discrete variables would be more precise. Bohm also proposed that local "hidden variables" might be needed to explain the correlations. Here is Bohm's description:

We consider a molecule of total spin zero consisting of two atoms, each of spin one-half. The wave function of the system is therefore ψ = (1/√2) [ ψ+(1) ψ-(2) - ψ-(1) ψ+(2) ] where ψ+(1) refers to the wave function of the atomic state in which one particle (A) has spin +ℏ/2, etc. The two atoms are then separated by a method that does not influence the total spin. After they have separated enough so that they cease to interact, any desired component of the spin of the first particle (A) is measured. Then, because the total spin is still zero, it can immediately be concluded that the same component of the spin of the other particle (B) is opposite to that of A.

Note that when Bohm says "because the total spin is still zero, it can immediately be concluded that the same component of the spin of the other particle (B) is opposite to that of A," he is implicitly using the conservation of total spin.

In 1964, John Bell put limits on the "hidden variables" that might restore a deterministic physics, in the form of what he called an inequality, the violation of which would confirm standard quantum mechanics. Here is Bell's description.
As with Bohm, conservation is not mentioned explicitly, but the argument involves spin components measured in the same direction:

With the example advocated by Bohm and Aharonov, the EPR argument is the following. Consider a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions. Measurements can be made, say by Stern-Gerlach magnets, on selected components of the spins σ1 and σ2. If measurement of the component σ1·a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2·a must yield the value −1 and vice versa. Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other. Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state.

("Pre-determination" is too strong a term: the first measurement just "determines" the later measurement. We shall see below that the second measurement is synchronous with the "first" in a "special" frame.)

Just like Bohm, Bell is implicitly using the conservation of total spin. If one electron spin is 1/2 in the up direction and the other is spin down, or −1/2, the total spin is zero. The underlying physical law of importance here is not conservation of linear momentum (which Einstein used); Bohm and Bell use conservation of angular momentum (or spin). If electron 1 is prepared with spin down and electron 2 with spin up, the total angular momentum is zero. This is called the singlet state.

Bohm and Bell agree that quantum theory describes the two electrons as in a superposition of spin up ( + ) and spin down ( - ) states,

| ψ > = (1/√2) | + - > − (1/√2) | - + >         (1)

The principles of quantum mechanics say that the prepared system is in a linear combination (or superposition) of these two states, and can provide only the probabilities of finding the entangled system in either the | + - > state or the | - + > state. The 1/√2 coefficients of the probability amplitude for each term, when squared, give us the probabilities (1/2) that the system will be found in the state | + - > or in the state | - + >. The actual outcome is random (Paul Dirac called it "Nature's choice"). But the individual electron spin outcomes are not individually and separately random, because the particles are not independent. One is always up and the other down, as the conservation law requires.

Should measurements ever show both spins in the same state, either | + + > or | - - >, that would violate the conservation of angular momentum. Quantum mechanics does not include such terms in the wave function, so they are not predicted, and they are never observed.

EPR tests can be done more easily with polarized photons than with electrons, which require complex magnetic fields. The first of these was done in 1972 by Stuart Freedman and John Clauser at UC Berkeley. They used oppositely polarized photons (one with spin = +1, the other spin = −1) coming from a central source. Again, the total photon spin of zero is conserved.
Their data, in agreement with quantum mechanics, violated Bell's inequalities to high statistical accuracy, thus providing strong evidence against local hidden-variable theories. If hidden variables exist, they must be non-local, said Bell. For more on superposition of states and the physics of photons, see the Dirac 3-polarizers experiment.

John Clauser, Michael Horne, Abner Shimony, and Richard Holt (known collectively as CHSH) and later Alain Aspect did more sophisticated tests. The outputs of the polarization analyzers were fed to a coincidence detector that records the instantaneous measurements, described as + -, - +, + +, and - -. The first two ( + - and - + ) conserve the spin angular momentum and are the only types ever observed in these nonlocality/entanglement tests. With the exception of some of Holt's early results that were found to be erroneous, no evidence has so far been found of any failure of standard quantum mechanics. And as experimental accuracy has improved by orders of magnitude, quantum physics has correspondingly been confirmed to one part in 10¹⁸, and the transfer speed of the probability information between particles has a lower limit of 10⁶ times the speed of light. There has been no evidence for local "hidden variables." Nevertheless, experimenters continue to look for possible "loopholes" in the experimental results, such as detector inefficiencies that might be hiding results favorable to Einstein's picture of "local reality."

Nicolas Gisin and his colleagues have extended the polarized photon tests of EPR and the Bell inequalities to a separation of 18 kilometers near Geneva. They continue to find 100% correlation and no evidence of the "hidden variables" sought after by Einstein and David Bohm.

An interesting use of the special theory of relativity was proposed by Gisin's colleagues Antoine Suarez and Valerio Scarani, who used the idea of hyperplanes of simultaneity. Back in the 1960s, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved by considering two experimenters, A and B, moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind; he called it the Andromeda Paradox. Suarez and Scarani showed that for some relative speeds between the two observers A and B, observer A could "see" the measurement of observer B to be in his future, and vice versa. Because the two experiments have a "spacelike" separation (neither is inside the causal light cone of the other), each observer thinks he does his own measurement before the other.

Gisin tested the limits on this effect by moving mirrors in the path to the birefringent crystals and showed that, like all other Bell experiments, the "before-before" suggestion of Suarez and Scarani did nothing to invalidate quantum mechanics. These experiments put a lower limit on the speed with which the probability information "collapses," estimating it as at least thousands - perhaps millions - of times the speed of light, and showed empirically that probability collapses are essentially instantaneous.

Despite all his experimental tests verifying quantum physics, including the "reality" of nonlocality and entanglement, Gisin continues to explore the EPR paradox, considering the possibility that signals are coming to the entangled particles from "outside space-time."
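The quantity these experiments test can be stated in a few lines. For the spin-½ singlet, standard quantum mechanics predicts the correlation E(a,b) = −cos(a − b) between spin components measured along analyzer directions a and b, and the CHSH combination of four such correlations cannot exceed 2 in any local hidden-variable model. A sketch of the textbook arithmetic (the formula and angles are the standard ones, not data from the experiments above):

    import numpy as np

    def E(a, b):
        # Textbook quantum correlation of singlet spin measurements
        # along analyzer angles a and b (radians).
        return -np.cos(a - b)

    # Standard CHSH test angles.
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
    print(S)   # 2.828... = 2*sqrt(2), exceeding the local-realist bound of 2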
How Information Physics Explains Nonlocality, Nonseparability, and Entanglement

Information physics starts with the fact that measurements bring new stable information into existence. In EPR the information in the prepared state of the two particles includes the fact that the total linear momentum and the total angular momentum are zero. New information requires an irreversible process that also increases the entropy more than enough to compensate for the information increase, to satisfy the second law of thermodynamics. It is this moment of irreversibility and the creation of new observable information that is the "cut" or "Schnitt" described by Werner Heisenberg and John von Neumann in the famous problem of measurement.

Note that the new observable information does not require a "conscious observer," as Eugene Wigner and some other scientists thought. The information is ontological (really in the world) and not merely epistemic (in the mind). Without new information, there would be nothing for the observers to observe.

Initially Prepared Information Plus Conservation Laws

Conservation laws are the consequence of extremely deep properties of nature that arise from simple considerations of symmetry. We regard these laws as "cosmological principles." Physical laws do not depend on the absolute place and time of experiments, nor on their particular direction in space. Conservation of linear momentum depends on the translation invariance of physical systems, conservation of energy on the independence of time, and conservation of angular momentum on the invariance under rotations.

Recall that the EPR experiment starts with two electrons (or photons) prepared in an entangled state that is a superposition of pure two-particle states, each of which conserves the total angular momentum and, of course, conserves the linear momentum, as in Einstein's original EPR example. This information about the linear and angular momenta is established by the initial state preparation (a measurement). Quantum mechanics describes the probability amplitude wave function Ψ12 of the two-particle system as in a superposition of two-particle states. It is not a product of single-particle states, and there is no information about the identical, indistinguishable electrons traveling along distinguishable paths. With slightly different notation, we can write equation (1) as

Ψ12 = (1/√2) | 1+2- > − (1/√2) | 1-2+ >         (2)

The probability amplitude wave function Ψ12 travels away from the source (at the speed of light or less). Let's assume that at t0 observer A finds an electron (e1) with spin up. At the time of this "first" measurement, by observer A or B, new information comes into existence telling us that the wave function Ψ12 has "collapsed" into the state | 1+2- > (or into | 1-2+ >). Just as in the two-slit experiment, probabilities have now become certainties. If the first measurement finds that a particular spin component of electron 1 is up, then the same spin component of the entangled electron 2 must be down to conserve angular momentum. And conservation of linear momentum tells us that at t0 the second electron is equidistant from the source in the opposite direction.

As with any wave-function "collapse," the probability amplitude information changes (it does not "travel" anywhere instantly). Nothing really "collapses."
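That "jump" can be written out as an ordinary projection of the state vector. A small sketch, assuming the basis ordering | + + >, | + - >, | - + >, | - - > (our convention) and the singlet of equation (2):

    import numpy as np

    # Spin states of the entangled pair, equation (2), built from the
    # single-particle vectors |+> and |-> with the Kronecker (tensor) product.
    plus = np.array([1.0, 0.0])    # |+>  (spin up)
    minus = np.array([0.0, 1.0])   # |->  (spin down)
    psi12 = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)

    # Born-rule probabilities before any measurement: only +- and -+ occur.
    print(dict(zip(['++', '+-', '-+', '--'], np.round(psi12**2, 3))))
    # {'++': 0.0, '+-': 0.5, '-+': 0.5, '--': 0.0}

    # Observer A finds particle 1 spin-up: apply the projector |+><+| (x) I
    # to the two-particle state and renormalize. This is the "jump" into |1+2->.
    P_up1 = np.kron(np.outer(plus, plus), np.eye(2))
    collapsed = P_up1 @ psi12
    collapsed = collapsed / np.linalg.norm(collapsed)

    print(np.allclose(collapsed, np.kron(plus, minus)))   # True: particle 2
    # is now spin-down with certainty, though it was never touched.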
Unlike the two-slit experiment, where the collapse goes to a specific point in 3-dimensional configuration space, the "collapse" here is a "jump" or "projection" into one of the two possible 6-dimensional two-particle quantum states | + - > or | - + >. This makes "visualization" (Schrödinger's Anschaulichkeit) difficult or impossible, but the parallel with the collapse in the two-slit case provides an intuitive insight of sorts. It is what Einstein saw in 1927 and again in 1933. If the measurement finds an electron (call it electron 1) as spin-up, then at that moment of new information creation, the two-particle wave function collapses to the state | + - > and electron 2 "jumps" into a spin-down state with probability unity (certainty). The result of observer B's measurement at a later time t1 is therefore determined to be spin down.

Notice that Einstein's intuition that the result seems already "determined" or "fixed" before the second measurement is in fact correct. The result is determined by the law of conservation of angular momentum. But as with the distinction between determinism and pre-determinism in the free-will debates, the measurement by observer B was not pre-determined before observer A's measurement. It was simply determined by her measurement.

Why do so few accounts of entanglement mention conservation laws? Although Einstein mentioned conservation in the original EPR paper, it is noticeably absent from later work. Bohm and Bell are obviously using it without an explicit mention. A prominent exception is Eugene Wigner, writing on the problem of measurement in 1963:

If a measurement of the momentum of one of the particles is carried out — the possibility of this is never questioned — and gives the result p, the state vector of the other particle suddenly becomes a (slightly damped) plane wave with the momentum -p. This statement is synonymous with the statement that a measurement of the momentum of the second particle would give the result -p, as follows from the conservation law for linear momentum. The same conclusion can be arrived at also by a formal calculation of the possible results of a joint measurement of the momenta of the two particles.

Writing a few years after Bohm, and one year before Bell, Wigner explicitly describes Einstein's conservation of momentum example, as well as the conservation of angular momentum (spin) that explains the perfect correlations between angular momentum (spin) components measured in the same direction:

One can go even further: instead of measuring the linear momentum of one particle, one can measure its angular momentum about a fixed axis. If this measurement yields the value mℏ, the state vector of the other particle suddenly becomes a cylindrical wave for which the same component of the angular momentum is -mℏ. This statement is again synonymous with the statement that a measurement of the said component of the angular momentum of the second particle certainly would give the value -mℏ. This can be inferred again from the conservation law of the angular momentum (which is zero for the two particles together) or by means of a formal analysis.

Visualizing Entanglement and Nonlocality

Schrödinger said that his "Wave Mechanics" provided more "visualizability" (Anschaulichkeit) than the Copenhagen school and its "damned quantum jumps," as he called them. He was right.
But we must focus on the probability amplitude wave function of the prepared two-particle state, and not attempt to describe the paths or locations of independent particles - at least until after some measurement has been made. We must also keep in mind the conservation laws that Einstein used to discover nonlocal behavior in the first place. Then we can see that the "mystery" of nonlocality is primarily the same mystery as the single-particle collapse of the wave function. As Richard Feynman said, there is only one mystery in quantum mechanics (the collapse of probability and the consequent statistical outcomes):

We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by "explaining" how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics.

In his 1935 paper, Schrödinger described the two particles in EPR as "entangled" in English, and verschränkt in German, which means something like cross-linked; it describes someone standing with arms crossed. In the time evolution of an entangled two-particle state according to the Schrödinger equation, we can visualize it - as we visualize the single-particle wave function - as collapsing when a measurement is made. The discontinuous "jump" is also described as the "reduction of the wave packet." This is apt in the two-particle case, where the superposition of | + - > and | - + > states is "projected" or "reduced" to one of these states, and then further reduced to the product of independent one-particle states | + > and | - >.

In the two-particle case (instead of just one particle making an appearance), when either particle is measured we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source, and its other properties such as spin. One can picture the two particles simultaneously acquiring their opposite spins at the moment either is measured.

How Mysterious Is Entanglement?

Some commentators say that nonlocality and entanglement are a "second revolution" in quantum mechanics, "the greatest mystery in physics," or "science's strangest phenomenon," and that quantum physics has been "reborn." They usually quote Erwin Schrödinger as saying, "I consider [entanglement] not as one, but as the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought."

Schrödinger knew that his two-particle wave function Ψ12 cannot have the same simple interpretation as the single particle, which can be visualized in ordinary 3-dimensional configuration space. And he is right that entanglement exhibits a richer form of the "action-at-a-distance" and nonlocality that Einstein had already identified in the "collapse" of the single-particle wave function. But the main difference is that two particles acquire new properties instead of one, and they do it instantaneously (at faster than light speeds), just as the single-particle wave function changes everywhere in the case of a single-particle measurement. Nonlocality and entanglement are thus another manifestation of Richard Feynman's "only" mystery in the two-slit experiment.

Is There an Asymmetry Here?
Here we must explain the asymmetry that Einstein and Schrödinger introduced into a perfectly symmetric situation, making entanglement such a mystery. Every follower of their early thinking introduces this false asymmetry. The classic EPR idea is completely symmetric about the origin of the state preparation. Einstein introduced the mistaken idea of measuring one particle "first" and then asking how it influences subsequent measurements of the "second" particle. By contrast, Schrödinger's two-particle wave function "collapses" at all positions in an instant of time. Both particles then appear, in a space-like separation. The perfectly symmetric picture shows that neither Alice nor Bob can in any way influence the other's experiment, as can best be seen in what we can call a special frame.

There is a special frame in which the collapse of the two-particle wave function is best visualized. It is not a preferred frame in the special relativistic sense (e.g., an inertial frame). But observers in all other frames in relative motion along the experiment axis will see one of the measurements before the other. Relativity contributes confusion to what is going on.

Almost every presentation of the EPR paradox begins with something like "Alice observes one particle..." and concludes with the question "How does the second particle get the information needed so that Bob's measurements correlate perfectly with Alice's?" There is a fundamental asymmetry in this framing of the EPR experiment. It is a surprise that Einstein, who was so good at seeing deep symmetries, did not consider how to remove the asymmetry. Even more puzzling, why did he introduce it? And why do almost all subsequent scientists accept it without question?

Consider this reframing: Alice's measurement collapses the two-particle wave function. The two indistinguishable particles simultaneously appear at locations in a space-like separation. The frame of reference in which the source of the two entangled particles and the two experimenters are at rest is a special frame in the following sense. As Einstein knew very well, there are frames of reference moving with respect to the laboratory frame of the two observers in which the time order of the events can be reversed. In some moving frames Alice measures first, but in others Bob measures first.

If there is a special frame of reference (not a preferred frame in the relativistic sense), surely it is the one in which the origin of the two entangled particles is at rest. Assuming that Alice and Bob are also at rest in this special frame and equidistant from the origin, we arrive at the simple picture in which any measurement that causes the two-particle wave function to collapse makes both particles appear simultaneously at determinate places with fully correlated properties (just those that are needed to conserve energy, momentum, angular momentum, and spin).

In the two-particle case (instead of just one particle making an appearance), when either particle is measured, we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the entangling interaction, and all other properties such as spin. It's just "knowledge-at-a-distance."

No "Hidden Variables," but Perhaps "Hidden Constants?"

Although we find no need for "hidden variables," whether local or non-local, we might say that the conservation laws give us "hidden constants."
Conservation of a particular property is often described as a "constant of the motion." These constants might be viewed as "local," in that they travel along with the particles at all times, or as "global," in that they are a property of the two-particle probability amplitude wave function Ψ12 as it spreads out in space. This agrees with Bohm, and especially with Bell, who says that the spin of particle 2 is "predetermined" to be found up if particle 1 is measured to be down.

But recall that the Copenhagen Interpretation says we cannot know a spin property until it is measured. So some claim that the spins are in an unknown combination of spin down and spin up until the measurements. It is this that suggests the possibility that both spins might be found in the same direction, violating conservation. Although electron spins in this situation are never found experimentally in the same direction, the Copenhagen view gave rise to the idea of a hidden variable as some sort of signal that could travel to particle 2 after the measurement of particle 1, causing it to change its spin to be opposite that of particle 1. What sort of signal might this be? And what mechanism exists in a bare electron that could cause it to change a property like its spin without an external force of some kind?

Clearly, Wigner's explicit view, and the implicit claims of Bohm and Bell that the electron spins were prepared (entangled) in opposite states, are the simplest and clearest explanations of the entanglement mystery. Despite accepting that a particular value of some "observables" can only be known by a measurement (knowledge is an epistemological problem), Einstein asked whether the particle actually (really, ontologically) has a path and position, and even other properties, before we measure it. His answer was yes. So Einstein would likely agree with Wigner, Bohm, and Bell in assuming that the two particles have opposite spins from the time of their entangling interaction.

Picture, then, the two electrons prepared, one in a spin-up and the other in a spin-down state. They remain in those states no matter how far they separate, provided neither interacts with anything else until the measurements at A and B. Two "hidden constants" of the motion, one spin up, one down, completely explain the fact of perfect correlations of opposing spins. The fact that "Nature's" initial choice of up-down versus down-up is quantum random explains why the resulting bit strings can be used in quantum encryption.

Principle Theories and Constructivist Theories

In his 1933 essay, "On the Method of Theoretical Physics," Albert Einstein argued that the greatest physical theories would be built on "principles," not on constructions derived from physical experience. His theory of special relativity was based on the principle of relativity - that the laws of physics are the same in all inertial frames - along with the constant velocity of light in all frames. Our explanation of entanglement as the result of "hidden constants" of the motion is based on conservation laws, which, as Emmy Noether showed, are based on still deeper principles of symmetry. This explanation is, of course, also based solidly on the empirical fact that electron spins are always found in opposite directions.
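A toy sketch of that "hidden constants" picture, for measurements along the shared preparation axis only (the oblique-angle statistics, where Bell's inequality does its work, are outside this sketch's scope):

    import numpy as np

    # Each pair leaves the source carrying opposite, fixed spin values along
    # the preparation axis, chosen at random ("Nature's choice") at the
    # entangling interaction.
    rng = np.random.default_rng(1)

    def make_pair():
        s = rng.choice([+1, -1])   # random initial choice: up-down or down-up
        return s, -s               # the two "hidden constants" of the motion

    pairs = [make_pair() for _ in range(100_000)]

    # Alice's results alone look like fair coin flips...
    print(np.mean([a for a, b in pairs]))        # ~0.0
    # ...yet every pair is perfectly anti-correlated, as conservation requires.
    print(all(a == -b for a, b in pairs))        # True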
References

Erwin Schrödinger, "Discussion of Probability Relations between Separated Systems" (the entanglement paper), Proceedings of the Cambridge Philosophical Society, 1935, vol. 31, issue 4, pp. 555-563

David Bohm, "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden' Variables. I"

David Bohm and Yakir Aharonov, "Discussion of Experimental Proof for the Paradox of Einstein, Rosen, and Podolsky"

John Bell, "On the Einstein-Podolsky-Rosen Paradox"

Albert Einstein, "On the Method of Theoretical Physics," The Herbert Spencer Lecture, Oxford, June 10, 1933; in Ideas and Opinions, Bonanza Books, 1954, pp. 270-276; original in Mein Weltbild, Amsterdam, 1934 (PDF)
Hydrogen

Name, symbol, number: hydrogen, H, 1 (preceded by none, followed by helium)
Chemical series: nonmetals
Group, period, block: 1, 1, s
Standard atomic weight: 1.00794(7) g·mol−1
Electron configuration: 1s1
Electrons per shell: 1

Physical properties
Density (0 °C, 101.325 kPa): 0.08988 g/L
Melting point: 14.01 K (−259.14 °C, −434.45 °F)
Boiling point: 20.28 K (−252.87 °C, −423.17 °F)
Triple point: 13.8033 K (−259 °C), 7.042 kPa
Critical point: 32.97 K, 1.293 MPa
Heat of fusion (H2): 0.117 kJ·mol−1
Heat of vaporization (H2): 0.904 kJ·mol−1
Heat capacity (25 °C, H2): 28.836 J·mol−1·K−1

Atomic properties
Crystal structure: hexagonal
Oxidation states: 1, −1 (amphoteric oxide)
Electronegativity: 2.1 (Pauling scale)
Atomic radius: 25 pm (calculated: 53 pm)
Covalent radius: 37 pm
Van der Waals radius: 120 pm
Thermal conductivity (300 K): 180.5 mW·m−1·K−1
CAS registry number: 1333-74-0

Selected isotopes (main article: Isotopes of hydrogen)
1H: 99.985%, stable with 0 neutrons
2H: 0.015%, stable with 1 neutron
3H: trace, half-life 12.32 y, β− decay (0.019 MeV) to 3He

Hydrogen (pronounced /ˈhaɪdrədʒən/) is the chemical element represented by the symbol H, with atomic number 1. At standard temperature and pressure it is a colorless, odorless, tasteless, nonmetallic, highly flammable diatomic gas (H2). With an atomic mass of 1.00794 amu, hydrogen is the lightest element.

Hydrogen is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass.[1] Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth and is industrially produced from hydrocarbons such as methane, after which most elemental hydrogen is used "captively" (meaning locally, at the production site), with the largest markets about equally divided between fossil fuel upgrading (e.g., hydrocracking) and ammonia production (mostly for the fertilizer market). Hydrogen may be produced from water by electrolysis, but this process is presently significantly more expensive commercially than hydrogen production from natural gas.[2]

The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons. In ionic compounds it can take on either a positive charge (becoming a cation composed of a bare proton) or a negative charge (becoming an anion known as a hydride). Hydrogen can form compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules. As the only neutral atom for which the Schrödinger equation can be solved analytically, the study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics.

The solubility and characteristics of hydrogen with various metals are very important in metallurgy (as many metals can suffer hydrogen embrittlement) and in developing safe ways to store it for use as a fuel. Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals[3] and can be dissolved in both crystalline and amorphous metals.[4] Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice.[5]

Hydrogen gas is highly flammable and will burn at concentrations as low as 4% H2 in air.
The enthalpy of combustion for hydrogen is −286 kJ/mol; it burns according to the following balanced equation:

2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ per mole of H2)

When mixed with oxygen across a wide range of proportions, hydrogen explodes upon ignition. Hydrogen burns violently in air and ignites automatically at a temperature of 560 °C.[3] Pure hydrogen-oxygen flames burn in the ultraviolet range and are nearly invisible to the naked eye, as illustrated by the faintness of the flame from the main Space Shuttle engines (as opposed to the easily visible flames from the shuttle boosters). Thus it is difficult to detect visually whether a hydrogen leak is burning.

The Hindenburg zeppelin is an infamous case of hydrogen combustion (pictured), although the tragedy was due mainly to combustible materials in the skin of the zeppelin, which were also responsible for the coloring of the flames.[6] Another characteristic of hydrogen fires is that the flames tend to ascend rapidly with the gas in air, as illustrated by the Hindenburg flames, causing less damage than hydrocarbon fires. For example, two-thirds of the Hindenburg passengers survived the fire, and many of the deaths which occurred were from falling or from diesel fuel burns.[7]

Electron energy levels

Main article: Hydrogen atom

The ground-state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 92 nm. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit around the Sun. However, the electromagnetic force attracts electrons and protons to one another, while planets and celestial objects are attracted to each other by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.

A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation or the equivalent Feynman path integral formulation to calculate the probability density of the electron around the proton. Treating the electron as a matter wave reproduces chemical results such as the shape of the hydrogen atom more naturally than the particle-based Bohr model, although the energy and spectral results are the same. Modeling the system fully using the reduced mass of nucleus and electron (as one would do in the two-body problem in celestial mechanics) yields an even better formula for the hydrogen spectra, and also the correct spectral shifts for the isotopes deuterium and tritium. Very small adjustments in energy levels in the hydrogen atom, which correspond to actual spectral effects, may be determined by using a full quantum mechanical theory which corrects for the effects of special relativity (see Dirac equation), and by accounting for quantum effects arising from the production of virtual particles in the vacuum and as a result of electric fields (see quantum electrodynamics).

In the hydrogen atom, the electronic ground-state energy level is split into hyperfine structure levels because of magnetic effects of the quantum mechanical spin of the electron and proton. The energy of the atom when the proton and electron spins are aligned is higher than when they are not aligned.
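To connect these energy figures to the radiation they imply, here is a quick numerical check in Python. The hyperfine splitting value is a standard literature figure (about 5.9 × 10−6 eV), an assumption not stated in the text:

    # Photon wavelength from energy: lambda = h*c / E, with h*c = 1239.84 eV*nm.
    HC_EV_NM = 1239.84

    def wavelength_nm(energy_ev):
        return HC_EV_NM / energy_ev

    # Ground-state binding energy, 13.6 eV:
    print(round(wavelength_nm(13.6), 1))        # 91.2 nm, the ~92 nm UV photon

    # Hyperfine splitting of the ground state (assumed ~5.9e-6 eV):
    print(round(wavelength_nm(5.9e-6) / 1e7))   # 21 cm, the radio line below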
The transition between these two states can occur through the emission of a photon via a magnetic dipole transition. Radio telescopes can detect the radiation produced in this process, which is used to map the distribution of hydrogen in the galaxy.

H2 reacts directly with other oxidizing elements. A violent and spontaneous reaction can occur at room temperature with chlorine and fluorine, forming the corresponding hydrogen halides: hydrogen chloride and hydrogen fluoride.

Elemental molecular forms

There are two different types of diatomic hydrogen molecules that differ by the relative spin of their nuclei.[8] In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state; in the parahydrogen form the spins are antiparallel and form a singlet. At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form".[9] The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature (see the sketch at the end of this section), but since the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The physical properties of pure parahydrogen differ slightly from those of the normal form.[10] The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene.

The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that convert to the para form very slowly.[11] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate the liquid hydrogen, leading to loss of the liquefied material. Catalysts for the ortho-para interconversion, such as iron compounds, are used during hydrogen cooling.[12]

A molecular form called protonated molecular hydrogen, or H3+, is found in the interstellar medium (ISM), where it is generated by the ionization of molecular hydrogen by cosmic rays. It has also been observed in the upper atmosphere of the planet Jupiter. This molecule is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium.[13]

Further information: Hydrogen compounds

Covalent and organic compounds

While H2 is not very reactive under standard conditions, it does form compounds with most elements. Millions of hydrocarbons are known, but they are not formed by the direct reaction of elemental hydrogen and carbon (although synthesis gas production followed by the Fischer-Tropsch process to make hydrocarbons comes close to being an exception, as this begins with coal and the elemental hydrogen is generated in situ). Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I) and chalcogens (O, S, Se); in these compounds hydrogen takes on a partial positive charge. When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of strong noncovalent bonding called hydrogen bonding, which is critical to the stability of many biological molecules.
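Returning to the ortho/para equilibrium mentioned above: the 75/25 "normal" mixture and the near-pure para form at low temperature both follow from simple spin and rotational statistics. A rough sketch, assuming the standard rigid-rotor value B/k ≈ 87.6 K for H2 (a literature figure, not given in the text):

    import math

    # Equilibrium ortho:para ratio from nuclear-spin weights and rigid-rotor
    # levels E_J = B*J*(J+1). Ortho pairs with odd J (spin weight 3), para
    # with even J (spin weight 1).
    B_OVER_K = 87.6   # rotational constant of H2 in kelvin (assumed value)

    def ortho_para_ratio(T, jmax=10):
        def z(js, spin_weight):
            return sum(spin_weight * (2 * j + 1) *
                       math.exp(-B_OVER_K * j * (j + 1) / T) for j in js)
        return z(range(1, jmax, 2), 3) / z(range(0, jmax, 2), 1)

    print(round(ortho_para_ratio(300.0), 2))   # ~3: the 75% ortho / 25% para mix
    print(round(ortho_para_ratio(20.0), 4))    # ~0.0014: almost pure para when cold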
Hydrogen also forms compounds with less electronegative elements, such as the metals and metalloids, in which it takes on a partial negative charge. These compounds are often known as hydrides.

Hydrogen forms a vast array of compounds with carbon. Because of their general association with living things, these compounds came to be called organic compounds; the study of their properties is known as organic chemistry, and their study in the context of living organisms is known as biochemistry. By some definitions, "organic" compounds are only required to contain carbon (a classic historical example is urea). However, most of them also contain hydrogen, and since it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. (This latter definition is not perfect, however, as under it urea would not be included as an organic compound.)

In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminum complexes, as well as in clustered carboranes.[14]

Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. To chemists, the term "hydride" usually implies that the H atom has acquired a negative or anionic character, denoted H−. The existence of the hydride anion, suggested by G. N. Lewis in 1916 for group I and II salt-like hydrides, was demonstrated by Moers in 1920 with the electrolysis of molten lithium hydride (LiH), which produced a stoichiometric quantity of hydrogen at the anode.[15] For hydrides other than those of group I and II metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group II hydrides is BeH2, which is polymeric. In lithium aluminum hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, there are over 100 binary borane hydrides known, but only one binary aluminum hydride.[16] Binary indium hydride has not yet been identified, although larger complexes exist.[17]

"Protons" and acids

Oxidation of H2 formally gives the proton, H+. This species is central to the discussion of acids, though the term proton is used loosely to refer to positively charged or cationic hydrogen, denoted H+. A bare proton H+ cannot exist in solution because of its strong tendency to attach itself to atoms or molecules with electrons. To avoid the convenient fiction of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain the hydronium ion (H3O+), organized into clusters such as H9O4+.[18] Other oxonium ions are found when water is in solution with other solvents.[19]

Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the triatomic hydrogen cation.[20]

Main article: Isotopes of hydrogen

Hydrogen has three naturally occurring isotopes, denoted 1H, ²H, and ³H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.[21][22]

• ¹H, known as protium, is the most common hydrogen isotope, with one proton and no neutrons in its nucleus.

• ²H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus.
Deuterium comprises 0.0026-0.0184% (by mole fraction or atom fraction) of hydrogen samples on Earth, with the lower number tending to be found in samples of hydrogen gas and the higher enrichments (0.015%, or 150 ppm) typical of ocean water. Deuterium is not radioactive and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.

• ³H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years (see the decay sketch below).[14] Small amounts of tritium occur naturally because of the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests. It is used in nuclear fusion reactions, as a tracer in isotope geochemistry, and in specialized self-powered lighting devices. Tritium was once routinely used in chemical and biological labeling experiments as a radiolabel (this has become less common).

Hydrogen is the only element that has different names for its isotopes in common use today. (During the early study of radioactivity, various heavy radioactive isotopes were given names, but such names are no longer used.) The symbols D and T (instead of ²H and ³H) are sometimes used for deuterium and tritium, but the corresponding symbol P is already in use for phosphorus and thus is not available for protium. IUPAC states that while this use is common, it is not preferred.

Natural occurrence

Hydrogen is the most abundant element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms.[23] This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through proton-proton reaction nuclear fusion.

Throughout the universe, hydrogen is mostly found in the atomic and plasma states, whose properties are quite different from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the interstellar medium. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the Universe up to redshift z = 4.[24]

Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2 (for data see the table above). However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. Although H atoms and H2 molecules are abundant in interstellar space, they are difficult to generate, concentrate, and purify on Earth.
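As a quick check on the tritium figures above, here is the decay arithmetic (the 50-year duration is our illustrative choice):

    # Tritium decay from the 12.32-year half-life quoted above.
    HALF_LIFE_Y = 12.32

    def fraction_remaining(years):
        return 0.5 ** (years / HALF_LIFE_Y)

    print(fraction_remaining(12.32))            # 0.5: one half-life
    print(round(fraction_remaining(50.0), 3))   # 0.06: why tritium lights and
                                                # labels fade within decades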
Still, hydrogen is the third most abundant element on the Earth's surface.[25] Most of the Earth's hydrogen is in the form of chemical compounds such as hydrocarbons and water.[14] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus. Methane is a hydrogen source of increasing importance.

Discovery of H2

Hydrogen gas, H2, was first artificially produced and formally described by T. von Hohenheim (also known as Paracelsus, 1493-1541) via the mixing of metals with strong acids. He was unaware that the flammable gas produced by this chemical reaction was a new chemical element. In 1671, Robert Boyle rediscovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.[26] In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by identifying the gas from a metal-acid reaction as "inflammable air" and further finding that the gas produces water when burned. Cavendish had stumbled on hydrogen when experimenting with acids and mercury. Although he wrongly assumed that hydrogen was a liberated component of the mercury rather than of the acid, he was still able to describe accurately several key properties of hydrogen. He is usually given credit for its discovery as an element. In 1783, Antoine Lavoisier gave the element the name hydrogen when he (with Laplace) reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier's name for the gas won out.

One of the first uses of H2 was for balloons, and later airships. The H2 was obtained by reacting sulfuric acid with metallic iron. Infamously, H2 was used in the Hindenburg airship, which was destroyed in a midair fire. The highly flammable hydrogen (H2) was later replaced for airships and most balloons by the unreactive helium (He).

Applications

Large quantities of H2 are needed in the petroleum and chemical industries. The largest application of H2 is for the processing ("upgrading") of fossil fuels and in the production of ammonia. The key consumers of H2 in the petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking.[28] H2 has several other important uses. H2 is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing agent for metallic ores.

Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding. H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies. Since H2 is lighter than air, having a little more than 1/15th of the density of air, it was once widely used as a lifting agent in balloons and airships. However, this use was curtailed after the Hindenburg disaster erroneously convinced the public that the gas was too dangerous for this purpose. Hydrogen is still regularly used for the inflation of weather balloons.

In a more recent application, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection.
Applications can be found in the automotive, aircraft, consumer goods, medical device, and chemical industries. Hydrogen is an authorized food additive (E 949) that allows food package leak testing, among other anti-oxidizing uses.[29]

Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions. Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects. Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as a radiation source in luminous paints.

Energy carrier

Main article: Hydrogen economy

Hydrogen is not an energy source, except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development. The sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve on earth. Elemental hydrogen from solar, biological, or electrical sources costs more in energy to make than is obtained by burning it. Hydrogen may be obtained from fossil sources (such as methane) for less energy than required to make it, but these sources are unsustainable and are also themselves direct energy sources (and are rightly regarded as the basic source of the energy in the hydrogen obtained from them).

Molecular hydrogen has been widely discussed in the context of energy, as a possible carrier of energy on an economy-wide scale. A theoretical advantage of using H2 as an energy carrier is the localization and concentration of the environmentally unwelcome aspects of hydrogen manufacture from fossil fuel energy sources. For example, carbon capture and storage (CO2 sequestration) could be conducted at the point of H2 production from methane. Hydrogen used in transportation would burn cleanly, without carbon emissions. However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial.[30] In addition, the energy density of both liquid hydrogen and hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources.

Laboratory syntheses

In the laboratory, H2 is usually prepared by the reaction of dilute acids on metals such as zinc:

Zn + 2 H+ → Zn2+ + H2

Aluminum produces H2 upon treatment with acids, but also with base:

2 Al + 6 H2O → 2 Al(OH)3 + 3 H2

The electrolysis of water is a simple method of producing hydrogen, although the resulting hydrogen necessarily has less energy content than was required to produce it. A low-voltage current is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals. (Iron, for instance, would oxidize and thus decrease the amount of oxygen given off.) The theoretical maximum efficiency (electricity used vs. energetic value of hydrogen produced) is between 80 and 94% (Bellona Report on Hydrogen).

In 2007, it was discovered that an alloy of aluminium and gallium in pellet form, added to water, could be used to generate hydrogen.[31] The process also creates alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used.
This potentially has important implications for a hydrogen economy, since hydrogen can be produced on-site and does not need to be transported.

Industrial syntheses

Hydrogen can be prepared in several different ways, but the economically most important processes involve the removal of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[32] At high temperatures (700-1100 °C; 1,300-2,000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2:

CH4 + H2O → CO + 3 H2

This reaction is favored at low pressures but is nonetheless conducted at high pressures (20 atm; 600 inHg), since high-pressure H2 is the most marketable product. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon:

CH4 → C + 2 H2

Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen from steam reforming can be recovered from the carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This reaction is also a common industrial source of carbon dioxide:[32]

CO + H2O → CO2 + H2

Other important methods for H2 production include the partial oxidation of hydrocarbons:

CH4 + ½ O2 → CO + 2 H2

and the coal reaction, which can serve as a prelude to the shift reaction above:[32]

C + H2O → CO + H2

Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia (the world's fifth most produced industrial compound), hydrogen is generated from natural gas. Hydrogen is also produced in usable quantities as a co-product of the major petrochemical processes of steam cracking and reforming. Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.

Biological syntheses

Other rarer but mechanistically interesting routes to H2 production also exist in nature. Nitrogenase produces approximately one equivalent of H2 for each equivalent of N2 reduced to ammonia. Some phosphatases reduce phosphite to H2.

Etymology and usage

The name hydrogen (Latin: hydrogenium) is from Ancient Greek ὕδωρ (hydor), "water," and -γενής (-genes), "forming" (from γείνομαι (geinomai), "to beget or sire").[36] The word "hydrogen" has several different meanings:

1. the name of an element.
2. an atom, sometimes called "H dot," that is abundant in space but essentially absent on Earth, because it dimerizes.
3. a diatomic molecule that occurs naturally in trace amounts in the Earth's atmosphere; chemists increasingly refer to H2 as dihydrogen,[37] or the hydrogen molecule, to distinguish it from atomic hydrogen and from hydrogen found in other compounds.
4. the atomic constituent within all organic compounds, water, and many other chemical compounds.

The elemental forms of hydrogen should not be confused with hydrogen as it appears in chemical compounds.

References

1. ^ Hydrogen in the Universe, NASA Website. URL accessed on 2 June 2006.
2. ^ Hydrogen Basics - Production. Florida Solar Energy Center.
3. ^ Takeshita T, Wallace WE, Craig RS. (1974). Hydrogen solubility in 1:5 compounds between yttrium or thorium and nickel or cobalt. Inorg Chem 13(9):2282.
4. ^ Kirchheim R, Mutschele T, Kieninger W. (1988). Hydrogen in amorphous and nanocrystalline metals. Mater. Sci. Eng.
99:457-462.
5. ^ Kirchheim R. (1988). Hydrogen solubility and diffusivity in defective and amorphous metals. Prog. Mater. Sci. 32(4):262-325.
6. ^ Dziadecki, John (2005). Hindenburg Hydrogen Fire. Retrieved on 2007-01-16.
7. ^ The Hindenburg Disaster. Swiss Hydrogen Association. Retrieved on 2007-01-16.
8. ^ Universal Industrial Gases, Inc. - Hydrogen (H2) Applications and Uses. Retrieved on September 15, 2005.
9. ^ Tikhonov VI, Volkov AA. (2002). Separation of water into its ortho and para isomers. Science 296(5577):2363.
10. ^ NASA Glenn Research Center Glenn Safety Manual. Ch. 6 - Hydrogen. Document GRC-MQSA.001, March 2006. [1]
11. ^ Milenko YY, Sibileva RM, Strzhemechny MA. (1997). Natural ortho-para conversion rate in liquid and gaseous hydrogen. J Low Temp Phys 107(1-2):77-92.
12. ^ Svadlenak RE, Scott AB. (1957). The conversion of ortho- to parahydrogen on iron oxide-zinc oxide catalysts. J Am Chem Soc 79(20):5385-5388.
13. ^ H3+ Resource Center. Universities of Illinois and Chicago. Retrieved on 2007-02-09.
14. ^ a b c Miessler GL, Tarr DA. (2004). Inorganic Chemistry, 3rd ed. Pearson Prentice Hall: Upper Saddle River, NJ, USA.
15. ^ Moers K. (1920). Z. Anorg. Allgem. Chem. 113:191.
16. ^ Downs AJ, Pulham CR. (1994). The hydrides of aluminium, gallium, indium, and thallium: a re-evaluation. Chem Soc Rev 23:175-83.
17. ^ Hibbs DE, Jones C, Smithies NA. (1999). A remarkably stable indium trihydride complex: synthesis and characterization of [InH3{P(C6H11)3}]. Chem Commun 185-6.
18. ^ Okumura M, Yeh LI, Myers JD, Lee YT. (1990). Infrared spectra of the solvated hydronium ion: vibrational predissociation spectroscopy of mass-selected H3O+·(H2O)n·(H2)m.
19. ^ Perdoncin G, Scorrano G. (1977). Protonation equilibria in water at several temperatures of alcohols, ethers, acetone, dimethyl sulfide, and dimethyl sulfoxide. J Am Chem Soc 99(21):6983-6986.
20. ^ Carrington A, McNab IR. (1989). The infrared predissociation spectrum of triatomic hydrogen cation (H3+). Accounts of Chemical Research 22:218-22.
21. ^ Gurov YB, Aleshkin DV, Berh MN, Lapushkin SV, Morokhov PV, Pechkurov VA, Poroshin NO, Sandukovsky VG, Tel'kushev MV, Chernyshev BA, Tschurenkova TD. (2004). Spectroscopy of superheavy hydrogen isotopes in stopped-pion absorption by nuclei. Physics of Atomic Nuclei 68(3):491-497.
22. ^ Korsheninnikov AA, et al. (2003). Experimental evidence for the existence of 7H and for a specific structure of 8He. Phys Rev Lett 90, 082501.
23. ^ Jefferson Lab - Hydrogen. Retrieved on September 15, 2005.
24. ^ Surveys for z > 3 Damped Lyα Absorption Systems: The Evolution of Neutral Gas. Retrieved on October 13, 2006.
25. ^ "Basic Research Needs for the Hydrogen Economy." Argonne National Laboratory, U.S. Department of Energy, Office of Science Laboratory. 15 May 2003. [2]
26. ^ Webelements - Hydrogen historical information. Retrieved on September 15, 2005.
27. ^ Berman R, Cooke AH, Hill RW. (1956). Cryogenics. Ann. Rev. Phys. Chem. 7:1-20.
28. ^ Los Alamos National Laboratory - Hydrogen. Retrieved on September 15, 2005.
29. ^ additives
30. ^ See Romm, Joseph (2004). The Hype about Hydrogen: Fact and Fiction in the Race to Save the Climate. New York: Island Press.
31. ^ New process generates hydrogen from aluminum alloy to run engines, fuel cells.
32. ^ a b c Oxtoby DW, Gillis HP, Nachtrieb NH. (2002). Principles of Modern Chemistry, 5th ed. Thomson Brooks/Cole.
33. ^ Cammack R, Frey M, Robson R. (2001). Hydrogen as a Fuel: Learning from Nature. Taylor & Francis: London.
34. Kruse O, Rupprecht J, Bader KP, Thomas-Hall S, Schenk PM, Finazzi G, Hankamer B. (2005). Improved photobiological H2 production in engineered green algal cells. J Biol Chem 280(40):34170–7.
35. United States Department of Energy FY2005 Progress Report. IV.E.6 Hydrogen from Water in a Novel Recombinant Oxygen-Tolerant Cyanobacteria System. HO Smith, Xu Q. Accessed 16 August 2006.
36. LSJ: "of the father, to beget; rarely of the mother, to give birth."
37. Kubas, G. J. (2001). Metal Dihydrogen and σ-Bond Complexes. New York: Kluwer Academic/Plenum Publishers.

Further reading

• (1989). "Chart of the Nuclides", 14th ed. General Electric Company.
• Ferreira-Aparicio, P.; Benito, M. J.; Sanz, J. L. (2005). "New Trends in Reforming Technologies: from Hydrogen Industrial Plants to Multifuel Microreformers". Catalysis Reviews 47:491–588.
• Krebs, Robert E. (1998). The History and Use of Our Earth's Chemical Elements: A Reference Guide. Westport, Conn.: Greenwood Press. ISBN 0-313-30123-9.
• Newton, David E. (1994). The Chemical Elements. New York, NY: Franklin Watts. ISBN 0-531-12501-7.
• Rigden, John S. (2002). Hydrogen: The Essential Element. Cambridge, MA: Harvard University Press.
• Romm, Joseph J. (2004). The Hype about Hydrogen: Fact and Fiction in the Race to Save the Climate. Island Press. ISBN 1-55963-703-X. Author interview at Global Public Media.
• Stwertka, Albert (2002). A Guide to the Elements. New York, NY: Oxford University Press. ISBN 0-19-515027-9.

This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Hydrogen". A list of authors is available in Wikipedia.
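As a rough numeric illustration of the reforming chemistry described above, the following minimal sketch combines steam reforming with the water gas shift into the overall reaction CH4 + 2 H2O → CO2 + 4 H2 and computes the ideal hydrogen yield per kilogram of methane (complete conversion is assumed here, which real plants do not achieve):

```python
# Minimal sketch: ideal H2 yield when steam reforming (CH4 + H2O -> CO + 3 H2)
# is followed by the water gas shift (CO + H2O -> CO2 + H2), giving the
# overall reaction CH4 + 2 H2O -> CO2 + 4 H2. Assumes complete conversion.
M_CH4 = 16.04   # molar mass of methane, g/mol
M_H2 = 2.016    # molar mass of hydrogen, g/mol

feed_kg = 1.0                        # 1 kg of methane feed
mol_CH4 = feed_kg * 1000 / M_CH4     # moles of CH4 in the feed
mol_H2 = 4 * mol_CH4                 # 4 mol H2 per mol CH4 overall
kg_H2 = mol_H2 * M_H2 / 1000

print(f"Ideal yield: {kg_H2:.2f} kg H2 per kg CH4")   # about 0.50 kg
```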
Monday, January 16, 2012

Consciousness, Causality, and Quantum Physics

by David Pratt, from the JSE Website

Quantum theory is open to different interpretations, and this paper reviews some of the points of contention. The standard interpretation of quantum physics assumes that the quantum world is characterized by absolute indeterminism and that quantum systems exist objectively only when they are being measured or observed. David Bohm's ontological interpretation of quantum theory rejects both these assumptions. Bohm's theory that quantum events are partly determined by subtler forces operating at deeper levels of reality ties in with John Eccles' theory that our minds exist outside the material world and interact with our brains at the quantum level. Paranormal phenomena indicate that our minds can communicate with other minds and affect distant physical systems by nonordinary means. Whether such phenomena can be adequately explained in terms of nonlocality and the quantum vacuum, or whether they involve superphysical forces and states of matter as yet unknown to science, is still an open question, and one which merits further experimental study.

Quantum theory is generally regarded as one of the most successful scientific theories ever formulated. But while the mathematical description of the quantum world allows the probabilities of experimental results to be calculated with a high degree of accuracy, there is no consensus on what it means in conceptual terms. Some of the issues involved are explored below.

Quantum uncertainty

According to the uncertainty principle, the position and momentum of a subatomic particle cannot be measured simultaneously with an accuracy greater than that set by Planck's constant. This is because in any measurement a particle must interact with at least one photon, or quantum of energy, which acts both like a particle and like a wave, and disturbs it in an unpredictable and uncontrollable manner. An accurate measurement of the position of an orbiting electron by means of a microscope, for example, requires the use of light of short wavelengths, with the result that a large but unpredictable momentum is transferred to the electron. An accurate measurement of the electron's momentum, on the other hand, requires light quanta of very low momentum (and therefore long wavelength), which leads to a large angle of diffraction in the lens and a poor definition of the position.

According to the conventional interpretation of quantum physics, however, not only is it impossible for us to measure a particle's position and momentum simultaneously with equal precision; a particle does not possess well-defined properties when it is not interacting with a measuring instrument. Furthermore, the uncertainty principle implies that a particle can never be at rest, but is subject to constant fluctuations even when no measurement is taking place, and these fluctuations are assumed to have no causes at all. In other words, the quantum world is believed to be characterized by absolute indeterminism, intrinsic ambiguity, and irreducible lawlessness. As the late physicist David Bohm (1984, p. 87) put it: "it is assumed that in any particular experiment, the precise result that will be obtained is completely arbitrary in the sense that it has no relationship whatever to anything else that exists in the world or that ever has existed." Bohm (ibid., p.
95) took the view that the abandonment of causality had been too hasty: "it is quite possible that while the quantum theory, and with it the indeterminacy principle, are valid to a very high degree of approximation in a certain domain, they both cease to have relevance in new domains below that in which the current theory is applicable. Thus, the conclusion that there is no deeper level of causally determined motion is just a piece of circular reasoning, since it will follow only if we assume beforehand that no such level exists." Most physicists, however, are content to accept the assumption of absolute chance. We shall return to this issue later in connection with free will.

Collapsing the wave function

A quantum system is represented mathematically by a wave function, which is derived from Schrödinger's equation. The wave function can be used to calculate the probability of finding a particle at any particular point in space. When a measurement is made, the particle is of course found in only one place, but if the wave function is assumed to provide a complete and literal description of the state of a quantum system - as it is in the conventional interpretation - it would mean that in between measurements the particle dissolves into a "superposition of probability waves" and is potentially present in many different places at once. Then, when the next measurement is made, this wave packet is supposed to instantaneously "collapse," in some random and mysterious manner, into a localized particle again. This sudden and discontinuous "collapse" violates the Schrödinger equation, and is not further explained in the conventional interpretation.

Since the measuring device that is supposed to collapse a particle's wave function is itself made up of subatomic particles, it seems that its own wave function would have to be collapsed by another measuring device (which might be the eye and brain of a human observer), which would in turn need to be collapsed by a further measuring device, and so on, leading to an infinite regress. In fact, the standard interpretation of quantum theory implies that all the macroscopic objects we see around us exist in an objective, unambiguous state only when they are being measured or observed. Schrödinger devised a famous thought-experiment to expose the absurd implications of this interpretation. A cat is placed in a box containing a radioactive substance, so that there is a fifty-fifty chance of an atom decaying in one hour. If an atom decays, it triggers the release of a poison gas, which kills the cat. After one hour the cat is supposedly both dead and alive (and everything in between) until someone opens the box and instantly collapses its wave function into a dead or alive cat.

Various solutions to the "measurement problem" associated with wave-function collapse have been proposed. Some physicists maintain that the classical or macro-world does not suffer from quantum ambiguity because it can store information and is subject to an "arrow of time", whereas the quantum or micro-world is alleged to be unable to store information and time-reversible (Pagels, 1983). A more extravagant approach is the many-worlds hypothesis, which claims that the universe splits each time a measurement (or measurement-like interaction) takes place, so that all the possibilities represented by the wave function (e.g. a dead cat and a living cat) exist objectively but in different universes.
Our own consciousness, too, is supposed to be constantly splitting into different selves, which inhabit these proliferating, non-communicating worlds.

Other theorists speculate that it is consciousness that collapses the wave function and thereby creates reality. In this view, a subatomic particle does not assume definite properties when it interacts with a measuring device, but only when the reading of the measuring device is registered in the mind of an observer (which may of course be long after the measurement has taken place). According to the most extreme, anthropocentric version of this theory, only selfconscious beings such as ourselves can collapse wave functions. This means that the whole universe must have existed originally as "potentia" in some transcendental realm of quantum probabilities until selfconscious beings evolved and collapsed themselves and the rest of their branch of reality into the material world, and that objects remain in a state of actuality only so long as they are being observed by humans (Goswami, 1993). Other theorists, however, believe that nonselfconscious entities, including cats and possibly even electrons, may be able to collapse their own wave functions (Herbert, 1993).

The theory of wave-function collapse (or state-vector collapse, as it is sometimes called) raises the question of how the "probability waves" that the wave function is thought to represent can collapse into a particle if they are no more than abstract mathematical constructs. Since the very idea of wave packets spreading out and collapsing is not based on hard experimental evidence but only on a particular interpretation of the wave equation, it is worth taking a look at one of the main alternative interpretations, that of David Bohm and his associates, which provides an intelligible account of what may be taking place at the quantum level.

The implicate order

Bohm's ontological interpretation of quantum physics rejects the assumption that the wave function gives the most complete description of reality possible, and thereby avoids the need to introduce the ill-defined and unsatisfactory notion of wave-function collapse (and all the paradoxes that go with it). Instead, it assumes the real existence of particles and fields: particles have a complex inner structure and are always accompanied by a quantum wave field; they are acted upon not only by classical electromagnetic forces but also by a subtler force, the quantum potential, determined by their quantum field, which obeys Schrödinger's equation (Bohm & Hiley, 1993; Bohm & Peat, 1989; Hiley & Peat, 1991).

The quantum potential carries information from the whole environment and provides direct, nonlocal connections between quantum systems. It guides particles in the same way that radio waves guide a ship on automatic pilot -- not by its intensity but by its form. It is extremely sensitive and complex, so that particle trajectories appear chaotic. It corresponds to what Bohm calls the implicate order, which can be thought of as a vast ocean of energy on which the physical, or explicate, world is just a ripple. Bohm points out that the existence of an energy pool of this kind is recognized, but given little consideration, by standard quantum theory, which postulates a universal quantum field -- the quantum vacuum or zero-point field -- underlying the material world. Very little is known about the quantum vacuum at present, but its energy density is estimated to be an astronomical 10^108 J/cm³ (Forward, 1996, pp. 328–37).
In his treatment of quantum field theory, Bohm proposes that the quantum field (the implicate order) is subject to the formative and organizing influence of a superquantum potential, which expresses the activity of a superimplicate order. The superquantum potential causes waves to converge and diverge again and again, producing a kind of average particle-like behavior. The apparently separate forms that we see around us are therefore only relatively stable and independent patterns, generated and sustained by a ceaseless underlying movement of enfoldment and unfoldment, with particles constantly dissolving into the implicate order and then recrystallizing. This process takes place incessantly, and with incredible rapidity, and is not dependent upon a measurement being made.

In Bohm's model, then, the quantum world exists even when it is not being observed and measured. He rejects the positivist view that something that cannot be measured or known precisely cannot be said to exist. In other words, he does not confuse epistemology with ontology, the map with the territory. For Bohm, the probabilities calculated from the wave function indicate the chances of a particle being at different positions regardless of whether a measurement is made, whereas in the conventional interpretation they indicate the chances of a particle coming into existence at different positions when a measurement is made. The universe is constantly defining itself through its ceaseless interactions -- of which measurement is only a particular instance -- and absurd situations such as dead-and-alive cats therefore cannot arise.

Thus, although Bohm rejects the view that human consciousness brings quantum systems into existence, and does not believe that our minds normally have a significant effect on the outcome of a measurement (except in the sense that we choose the experimental setup), his interpretation opens the way for the operation of deeper, subtler, more mindlike levels of reality. He argues that consciousness is rooted deep in the implicate order, and is therefore present to some degree in all material forms. He suggests that there may be an infinite series of implicate orders, each having both a matter aspect and a consciousness aspect: "everything material is also mental and everything mental is also material, but there are many more infinitely subtle levels of matter than we are aware of" (Weber, 1990, p. 151). The concept of the implicate domain could be seen as an extended form of materialism, but, he says, "it could equally well be called idealism, spirit, consciousness. The separation of the two -- matter and spirit -- is an abstraction. The ground is always one." (Weber, 1990, p. 101)

Mind and free will

Quantum indeterminism is clearly open to interpretation: it either means hidden (to us) causes, or a complete absence of causes. The position that some events "just happen" for no reason at all is impossible to prove, for our inability to identify a cause does not necessarily mean that there is no cause. The notion of absolute chance implies that quantum systems can act absolutely spontaneously, totally isolated from, and uninfluenced by, anything else in the universe. The opposing standpoint is that all systems are continuously participating in an intricate network of causal interactions and interconnections at many different levels.
Individual quantum systems certainly behave unpredictably, but if they were not subject to any causal factors whatsoever, it would be difficult to understand why their collective behavior displays statistical regularities. The position that everything has a cause, or rather many causes, does not necessarily imply that all events, including our own acts and choices, are rigidly predetermined by purely physical processes -- a standpoint sometimes called "hard determinism" (Thornton, 1989). The indeterminism at the quantum level provides an opening for creativity and free will. But if this indeterminism is interpreted to mean absolute chance, it would mean that our choices and actions just "pop up" in a totally random and arbitrary way, in which case they could hardly be said to be our choices and the expression of our own free will. Alternatively, quantum indeterminism could be interpreted as causation from subtler, nonphysical levels, so that our acts of free will are caused -- but by our own selfconscious minds. From this point of view -- sometimes called "soft determinism" -- free will involves active, selfconscious self-determination.

According to orthodox scientific materialism, mental states are identical with brain states; our thoughts and feelings, and our sense of self, are generated by electrochemical activity in the brain. This would mean either that one part of the brain activates another part, which then activates another part, etc., or that a particular region of the brain is activated spontaneously, without any cause, and it is hard to see how either alternative would provide a basis for a conscious self and free will. Francis Crick (1994), for example, who believes that consciousness is basically a pack of neurons, says that the main seat of free will is probably in or near a part of the cerebral cortex known as the anterior cingulate sulcus, but he implies that our feeling of being free is largely, if not entirely, an illusion.

Those who reduce consciousness to a by-product of the brain disagree on the relevance of the quantum-mechanical aspects of neural networks: for example, Francis Crick, the late Roger Sperry (1994), and Daniel Dennett (1991) tend to ignore quantum physics, while Stuart Hameroff (1994) believes that consciousness arises from quantum coherence in microtubules within the brain's neurons. Some researchers see a connection between consciousness and the quantum vacuum: for example, Charles Laughlin (1996) argues that the neural structures that mediate consciousness may interact nonlocally with the vacuum (or quantum sea), while Edgar Mitchell (1996) believes that both matter and consciousness arise out of the energy potential of the vacuum.

Neuroscientist Sir John Eccles dismisses the materialistic standpoint as a "superstition", and advocates dualist interactionism: he argues that there is a mental world in addition to the material world, and that our mind or self acts on the brain (particularly the supplementary motor area of the neocortex) at the quantum level by increasing the probability of the firing of selected neurons (Eccles, 1994; Giroldini, 1991). He argues that the mind is not only nonphysical but absolutely nonmaterial and nonsubstantial. However, if it were not associated with any form of energy-substance whatsoever, it would be a pure abstraction and therefore unable to exert any influence on the physical world.
This objection also applies to antireductionists who shun the word "dualist" and describe matter and consciousness as complementary or dyadic aspects of reality, yet deny consciousness any energetic or substantial nature, thereby implying that it is fundamentally different from matter and in fact a mere abstraction. An alternative position is that which is echoed in many mystical and spiritual traditions: that physical matter is just one "octave" in an infinite spectrum of matter-energy, or consciousness-substance, and that just as the physical world is largely organized and coordinated by inner worlds (astral, mental, and spiritual), so the physical body is largely energized and controlled by subtler bodies or energy-fields, including an astral model-body and a mind or soul (see Purucker, 1973). According to this view, nature in general, and all the entities that compose it, are formed and organized mainly from within outwards, from deeper levels of their constitution. This inner guidance is sometimes automatic and passive, giving rise to our automatic bodily functions and habitual and instinctual behavior, and to the regular, lawlike operations of nature in general, and sometimes it is active and self-conscious, as in our acts of intention and volition. A physical system subjected to such subtler influences is not so much acted upon from without as guided from within.

As well as influencing our own brains and bodies, our minds also appear to be able to affect other minds and bodies and other physical objects at a distance, as seen in paranormal phenomena. It was David Bohm and one of his supporters, John Bell of CERN, who laid most of the theoretical groundwork for the EPR experiments performed by Alain Aspect in 1982 (the original thought-experiment was proposed by Einstein, Podolsky, and Rosen in 1935). These experiments demonstrated that if two quantum systems interact and then move apart, their behavior is correlated in a way that cannot be explained in terms of signals traveling between them at or slower than the speed of light. This phenomenon is known as nonlocality, and is open to two main interpretations:

• either it involves unmediated, instantaneous action at a distance
• or it involves faster-than-light signaling

If nonlocal correlations are literally instantaneous, they would effectively be noncausal; if two events occur absolutely simultaneously, "cause" and "effect" would be indistinguishable, and one of the events could not be said to cause the other through the transfer of force or energy, for no such transfer could take place infinitely fast. There would therefore be no causal transmission mechanism to be explained, and any investigations would be confined to the conditions that allow correlated events to occur at different places. It is interesting to note that light and other electromagnetic effects were also once thought to be transmitted instantaneously, until observational evidence proved otherwise.

The hypothesis that nonlocal connections are absolutely instantaneous is impossible to verify, as it would require two perfectly simultaneous measurements, which would demand an infinite degree of accuracy. However, as David Bohm and Basil Hiley (1993, pp. 293–4, 347) have pointed out, it could be experimentally falsified.
For if nonlocal connections are propagated not at infinite speeds but at speeds greater than that of light through a "quantum ether" -- a subquantum domain where current quantum theory and relativity theory break down -- then the correlations predicted by quantum theory would vanish if measurements were made in periods shorter than those required for the transmission of quantum connections between particles. Such experiments are beyond the capabilities of present technology but might be possible in the future. If superluminal interactions exist, they would be "nonlocal" only in the sense of nonphysical.

Nonlocality has been invoked as an explanation for telepathy and clairvoyance, though some investigators believe that they might involve a deeper level of nonlocality, or what Bohm calls "super-nonlocality" (similar perhaps to Sheldrake's "morphic resonance" (1989)). As already pointed out, if nonlocality is interpreted to mean instantaneous connectedness, it would imply that information could be "received" at a distance at exactly the same moment as it is generated, without undergoing any form of transmission. At most, one could then try to understand the conditions that allow the instant appearance of information. The alternative position is that information -- which is basically a pattern of energy -- always takes time to travel from its source to another location, that information is stored at some paraphysical level, and that we can access this information, or exchange information with other minds, if the necessary conditions of "sympathetic resonance" exist.

As with EPR, the hypothesis that telepathy is absolutely instantaneous is unprovable, but it might be possible to devise experiments that could falsify it. For if ESP phenomena do involve subtler forms of energy traveling at finite but perhaps superluminal speeds through superphysical realms, it might be possible to detect a delay between transmission and reception, and also some weakening of the effect over very long distances, though it is already evident that any attenuation must be far less than that experienced by electromagnetic energy, which is subject to the inverse-square law.

As for precognition, the third main category of ESP, one possible explanation is that it involves direct, "nonlocal" access to the actual future. Alternatively, it may involve clairvoyant perception of a probable future scenario that is beginning to take shape on the basis of current tendencies and intentions, in accordance with the traditional idea that coming events cast their shadows before them. Bohm says that such foreshadowing takes place "deep in the implicate order" (Talbot, 1992, p. 212) -- which some mystical traditions would call the astral or akashic realms.

Psychokinesis and the unseen world

Micro-psychokinesis involves the influence of consciousness on atomic particles. In certain micro-PK experiments conducted by Helmut Schmidt, groups of subjects were typically able to alter the probabilities of quantum events from 50% to between 51 and 52%, and a few individuals managed over 54% (Broughton, 1991, p. 177). Experiments at the PEAR lab at Princeton University have yielded a smaller shift of 1 part in 10,000 (Jahn & Dunne, 1987). Some researchers have invoked the theory of the collapse of wave functions by consciousness in order to explain such effects.
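These reported shifts are tiny, and a simple binomial estimate shows why such experiments need very large trial counts. A minimal sketch (the 3-sigma criterion and the trial numbers it yields are illustrative, not figures taken from the studies cited):

```python
import math

# Minimal sketch: how many binary trials are needed before a shift from
# p0 = 0.5 to p1 stands out from chance at about 3 standard deviations?
def trials_needed(p1, p0=0.5, z=3.0):
    # z-test for a proportion: solve (p1 - p0) = z * sqrt(p0*(1-p0)/n) for n
    return math.ceil(z**2 * p0 * (1 - p0) / (p1 - p0)**2)

for p1 in (0.51, 0.52, 0.54, 0.50005):   # 0.50005 ~ a "1 part in 10,000" shift
    print(f"p = {p1}: about {trials_needed(p1):,} trials for 3-sigma")
```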
It is argued that in micro-PK, in contrast to ordinary perception, the observing subject helps to specify what the outcome of the collapse of the wave function will be, perhaps by some sort of informational process (Broughton, 1991, pp. 177–81). Eccles follows a similar approach in explaining how our minds act on our own brains. However, the concept of wave-function collapse is not essential to explaining mind-matter interaction. We could equally well adopt the standpoint that subatomic particles are ceaselessly flickering into and out of physical existence, and that the outcome of the process is modifiable by our will -- a psychic force.

Macro-PK involves the movement of stable, normally unmoving objects by mental effort. Related phenomena include poltergeist activity, materializations and dematerializations, teleportation, and levitation. Although an impressive amount of evidence for such phenomena has been gathered by investigators over the past one hundred and fifty years (Inglis, 1984, 1992; Milton, 1994), macro-PK is a taboo area, and attracts little interest, despite -- or perhaps because of -- its potential to overthrow the current materialistic paradigm and revolutionize science. Such phenomena clearly involve far more than altering the probabilistic behavior of atomic particles, and could be regarded as evidence for forces, states of matter, and nonphysical living entities currently unknown to science. Confirmation that such things exist would provide a further indication that within the all-embracing unity of nature there is endless diversity.

The possible existence of subtler planes interpenetrating the physical plane is at any rate open to investigation (see Tiller, 1993), and this is more than can be said for the hypothetical extra dimensions postulated by superstring theory, which are said to be curled up in an area a billion-trillion-trillionth of a centimeter across and therefore completely inaccessible, or the hypothetical "baby universes" and "bubble universes" postulated by some cosmologists, which are said to exist in some equally inaccessible "dimension".

The hypothesis of superphysical realms does not seem to be favored by many researchers. Edgar Mitchell (1996), for example, believes that all psychic phenomena involve nonlocal resonance between the brain and the quantum vacuum, and consequent access to holographic, nonlocal information. In his view, this hypothesis could explain not only PK and ESP, but also out-of-body and near-death experiences, visions and apparitions, and evidence usually cited in favor of a reincarnating soul. He admits that this theory is speculative, unvalidated, and may require new physics.

Further experimental studies of consciousness-related phenomena, both normal and paranormal, will hopefully allow the merits of the various contending theories to be tested. Such investigations could deepen our knowledge of the workings of both the quantum realm and our minds, and the relationship between them, and indicate whether the quantum vacuum really is the bottom level of all existence, or whether there are deeper realms of nature waiting to be explored.

References

• Bohm, D. (1984). Causality and Chance in Modern Physics. London: Routledge & Kegan Paul. First published in 1957.
• Bohm, D. & Hiley, B.J. (1993). The Undivided Universe: An ontological interpretation of quantum theory. London and New York: Routledge.
• Bohm, D. & Peat, F.D. (1989). Science, Order & Creativity. London: Routledge.
• Broughton, R.S. (1991).
Parapsychology: The Controversial Science. New York: Ballantine Books.
• Crick, F. (1994). The Astonishing Hypothesis: The Scientific Search for the Soul. London: Simon & Schuster.
• Dennett, D.C. (1991). Consciousness Explained. London: Allen Lane/Penguin.
• Eccles, J.C. (1994). How the Self Controls Its Brain. Berlin: Springer-Verlag.
• Forward, R.L. (1996). Mass Modification Experiment Definition Study. Journal of Scientific Exploration, 10:3, 325.
• Giroldini, W. (1991). Eccles's Model of Mind-Brain Interaction and Psychokinesis: A Preliminary Study. Journal of Scientific Exploration, 5:2, pp. 145–61.
• Goswami, A. with Reed, R.E. & Goswami, M. (1993). The Self-Aware Universe: How consciousness creates the material world. New York: Tarcher/Putnam.
• Hameroff, S.R. (1994). Quantum coherence in microtubules: A neural basis for emergent consciousness? Journal of Consciousness Studies, 1:1, 91.
• Herbert, N. (1993). Elemental Mind: Human Consciousness and the New Physics. New York: Dutton.
• Hiley, B.J. & Peat, F.D. (eds.) (1991). Quantum Implications: Essays in honour of David Bohm. London and New York: Routledge.
• Inglis, B. (1984). Science and Parascience: A history of the paranormal, 1914–1939. London: Hodder and Stoughton.
• Inglis, B. (1992). Natural and Supernatural: A History of the Paranormal from the Earliest Times to 1914. Bridport/Lindfield: Prism/Unity. First published in 1977.
• Jahn, R.G. & Dunne, B.J. (1987). Margins of Reality: The Role of Consciousness in the Physical World. New York: Harcourt Brace.
• Laughlin, C.D. (1996). Archetypes, Neurognosis and the Quantum Sea. Journal of Scientific Exploration, 10:3, 375.
• Milton, R. (1994). Forbidden Science: Suppressed research that could change our lives. London: Fourth Estate.
• Mitchell, E. with Williams, D. (1996). The Way of the Explorer: An Apollo Astronaut's Journey Through the Material and Mystical Worlds. New York: Putnam.
• Pagels, H.R. (1983). The Cosmic Code: Quantum Physics as the Language of Nature. New York: Bantam.
• Purucker, G. de (1973). The Esoteric Tradition. Pasadena, California: Theosophical University Press. 2nd ed. first published in 1940.
• Sheldrake, R. (1989). The Presence of the Past: Morphic Resonance and the Habits of Nature. New York: Vintage.
• Sperry, R.W. (1994). Holding Course Amid Shifting Paradigms. In New Metaphysical Foundations of Modern Science, edited by W. Harman with J. Clark. Sausalito, California: Institute of Noetic Sciences.
• Talbot, M. (1992). The Holographic Universe. New York: HarperPerennial.
• Thornton, M. (1989). Do We Have Free Will? Bristol: Bristol Classical Press.
• Tiller, W.A. (1993). What Are Subtle Energies? Journal of Scientific Exploration, 7:3, 293.
• Weber, R. (1990). Dialogues with Scientists and Sages: The Search for Unity. London: Arkana.
Emission spectrum

[Image: Emission spectrum of a metal halide lamp.]

An emitted photon carries energy $E_{\text{photon}} = h\nu$, where $E_{\text{photon}}$ is the energy of the photon, $\nu$ is its frequency, and $h$ is Planck's constant. It follows that only photons having certain energies are emitted by the atom. The principle of the atomic emission spectrum explains the varied colors in neon signs, as well as chemical flame test results (described below).

[Image: Emission spectrum of hydrogen] [Image: Emission spectrum of iron]

Emission spectroscopy

[Image: Schematic diagram of spontaneous emission]

Emission lines from hot gases were first discovered by Ångström, and the technique was further developed by David Alter, Gustav Kirchhoff and Robert Bunsen. See the history of spectroscopy for details.

Emission coefficient

The emission coefficient is a coefficient in the power output per unit time of an electromagnetic source, a calculated value in physics. The emission coefficient of a gas varies with the wavelength of the light. It has units of ms−3sr−1.[1] It is also used as a measure of environmental emissions (by mass) per MWh of electricity generated; see: Emission factor.

Spontaneous emission

A warm body emitting photons has a monochromatic emission coefficient relating to its temperature and total power radiation. This is sometimes called the second "Einstein coefficient", and can be deduced from quantum mechanical theory.

Energy spectrum

An energy spectrum is a distribution of energy among a large assemblage of particles. It is a statistical representation of the wave energy as a function of the wave frequency, and an empirical estimator of the spectral function. For any given value of energy, it determines how many of the particles have that much energy. The particles may be atoms, photons or a flux of elementary particles.

The Schrödinger equation and a set of boundary conditions form an eigenvalue problem. A possible value $E$ is called an eigenenergy. A non-zero solution of the wave function is called an eigenenergy state, or simply an eigenstate. The set of eigenvalues $\{E_j\}$ is called the energy spectrum of the particle.

The electromagnetic spectrum can also be represented as the distribution of electromagnetic radiation according to energy. The relationship among the wavelength (usually denoted $\lambda$), the frequency (usually denoted $\nu$), and the energy $E$ is

$$E = h\nu = \frac{hc}{\lambda}$$

where $c$ is the speed of light and $h$ is Planck's constant. (A short numeric sketch follows the reference list below.)

An example of an energy spectrum in the physical domain is ocean waves breaking on the shore. For any given interval of time it can be observed that some of the waves are larger than others. Plotting the number of waves against the amplitude (height) for the interval will yield the energy spectrum of the set.[2]

Optical spectroscopy and astrophysics application

Energy spectra are often used in astrophysical spectroscopy. Some modern spectrophotometers, such as the Perkin Elmer 950, include an energy scan option. This is additionally useful in cases where a reference cell is not practical or when absorbance/transmittance is off-scale.[2][3]

References

1. Carroll, Bradley W. (2007). An Introduction to Modern Astrophysics. CA, USA: Pearson Education. p. 256. ISBN 0-8053-0402-9.
2. Solar Energy Spectrum, Integrated Energy, Wavelengths of Light Colors and Visual Response of Eye.
3. Allen, C.W. Astrophysical Quantities, 3rd edition, 1973, pp. 109, 172.
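As a numeric companion to the energy relations above, here is a minimal sketch computing hydrogen Balmer-series lines from the Rydberg formula (rounded constants; standard textbook values, not tied to any source cited above):

```python
# Minimal sketch: photon energy E = h*nu = h*c/lambda, applied to the
# hydrogen Balmer series via the Rydberg formula 1/lambda = R*(1/2^2 - 1/n^2).
h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s
R = 1.097e7        # Rydberg constant, 1/m
eV = 1.602e-19     # joules per electronvolt

for n in (3, 4, 5):                     # transitions n -> 2
    lam = 1 / (R * (1/2**2 - 1/n**2))   # wavelength in metres
    E = h * c / lam                     # photon energy in joules
    print(f"n={n} -> 2: lambda = {lam*1e9:.1f} nm, E = {E/eV:.2f} eV")
# n=3 gives ~656 nm (the red H-alpha line), n=4 ~486 nm, n=5 ~434 nm
```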
Sensors 2012, 12(5), 6049–6074; doi:10.3390/s120506049
ISSN 1424-8220, Molecular Diversity Preservation International (MDPI)

Review

Sensing with Superconducting Point Contacts

Argo Nurbawono 1 and Chun Zhang 1,2,*

1 Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore
2 Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore
* Author to whom correspondence should be addressed.

Received: 14 March 2012; revised: 6 April 2012; accepted: 20 April 2012; published: 10 May 2012.
© 2012 by the authors; licensee MDPI, Basel, Switzerland.

Abstract: Superconducting point contacts have been used for measuring magnetic polarizations, identifying magnetic impurities, probing electronic structures, and even resolving the vibrational modes of small molecules. Because the energy scale of the subgap structures in the supercurrent is intrinsically small, being set by the size of the superconducting energy gap, superconductors provide ultrahigh sensitivities for high resolution spectroscopies. The so-called Andreev reflection process between a normal metal and a superconductor carries complex and rich information which, when fully exploited, can serve as a powerful sensor. In this review, we discuss recent experimental and theoretical developments in supercurrent transport through superconducting point contacts and their relevance to sensing applications, and we highlight current issues and potentials. A true utilization of methods based on Andreev reflection analysis opens up possibilities for a new class of ultrasensitive sensors.

Keywords: point contact spectroscopy; superconductivity; Andreev reflections

Introduction

Since the discovery of superconductivity over a hundred years ago [1], superconductors have been utilized for various sensing applications. Superconducting quantum interference devices (SQUIDs), for example, are ubiquitous as ultrasensitive magnetic sensors, such as in magnetic resonance imaging (MRI) in medical applications, thanks to the Josephson effects [2]. Less common applications are point contact Andreev reflection (PCAR) spectroscopies, which are still largely confined to laboratory demonstrations and theoretical studies. This is due to the non-trivial Andreev physics involved in supercurrent transport through point contacts (PCs), which requires rigorous theoretical treatment in order to decipher the underlying physics and therefore to interpret experimental results correctly.

PCs can be fabricated by various methods, for example using a sharp, needle-like metallic probe with a chemically etched tip, which is then pressed onto another metallic surface using a combination of a piezoelectric actuator and a differential screw mechanism [3]. A combination of reactive ion etching (RIE) and electron beam machining is also common to produce nanobridges [4], which are basically nanoholes drilled through a thin insulator. Another common technique is the micro-controlled break junction (MCBJ) [5], which is basically a metallic nanocontact produced by electron beam machining that can be broken up to produce an atomic gap. This gap can be precisely adjusted using a piezoelectric actuator. The contact sizes range from a few nanometers down to a single atom, and therefore the transport through these PCs is mainly ballistic, i.e., in the Sharvin limit [6], where the constriction or contact size is much smaller than the elastic mean free path of the electrons. Over the past decade there have been two very significant landmarks in the applications of PCAR spectroscopies.
The first one is the measurement of magnetic polarization [3,7], which utilizes the fact that the Andreev process is suppressed when a supercurrent flows from a superconductor into a magnetic normal metal. The degree of polarization can be precisely measured by fitting the entire differential conductance with an appropriate model based on a semiclassical theory, which will be discussed in detail later in this review. This method has spurred new experimental and theoretical developments in magnetic polarization measurements, partly because the PCAR method is easier and more flexible compared to older methods such as spin-dependent tunneling in planar junctions [8] and spin-resolved photoemission spectroscopy [9].

The second significant landmark is the experimental determination of the individual quantum transmission channels of a superconducting single-atom contact [10–12], utilizing a microscopic Hamiltonian model and the nonequilibrium Green's function technique to fit the current-voltage curves. This was the first time that the details of quantum conduction channels had ever been resolved experimentally, after the concept was first proposed more than fifty years ago by Landauer [13,14]. Since then, the microscopic Hamiltonian theory has become the mainstream in the subsequent development of superconducting quantum transport. Many experiments followed this pioneering work, discussing various other aspects such as contact materials other than niobium [15,16], effects of diffusivity [17], ferromagnetic interfaces [18], hydrogen adsorption [19], and structural deformation effects [20]. There are also other more recent exciting experimental developments, such as the work of Ji et al. [21] and Marchenkov et al. [22], which we will briefly discuss in the section on experimental surveys.

In order to have a meaningful physical understanding of the PCAR physics, we shall also present a detailed discussion of the theoretical aspects in both semiclassical and quantum pictures. The theoretical discussion in this review is divided into two parts. The first part is a summary of the semiclassical treatment based on the famous Blonder–Tinkham–Klapwijk (BTK) theory [23] and its relevant extensions for PCAR magnetic polarization measurements. The second part is the so-called quantum Hamiltonian theory, where we adopt the nonequilibrium Green's function method, which is regarded as the most rigorous quantum perturbative technique for dealing with nonequilibrium problems [24]. This formalism fits the atomic point contacts where the conduction consists of only a few quantum channels. We derive the supercurrent based on the Bardeen–Cooper–Schrieffer (BCS) model Hamiltonian [25], and highlight some applications of the theory, such as resolving the individual quantum channels of a superconducting MCBJ [10], and studying quantum dots coupled to superconducting leads under external radiation [26].

Experimental Surveys

Magnetic Polarization Measurements

The technique of PCAR spectroscopy has been used for measuring the polarization of ferromagnetic materials [3,7,27], mainly driven by the need to find suitable materials for spintronic devices [28,29]. The PCAR method provides easier and more flexible measurements compared to conventional spin tunneling using planar junctions [8] or spin-resolved photoemission spectroscopy [9].
Unlike the planar junction method, PCAR does not need the application of large magnetic fields of several teslas, and there are no constraints in terms of thin film fabrication, which imposes severe limitations on the types of materials that can be tested. Also, PCAR offers better energy resolution compared to the photoemission method, which is typically limited to ∼1 meV resolution. The PC and the sample are immersed in a liquid helium bath to keep the temperature below the transition temperature Tc. The positioning and adjustment of the PC employ standard piezoelectric actuators for achieving ideal ballistic contacts. Some care must be taken to prevent excessive pressure on the tip, as this may change the electronic properties of the materials and hence the spin polarizations [30]. The current is usually obtained using standard AC lock-in techniques at a few kHz frequency.

The PCAR method is based on the fact that the current through the PC differs when the tip is superconducting compared to when it is in the normal state. In particular, it exploits the behaviour of the conductance at very low bias, where the current is most dependent on the polarization P of the ferromagnet. At low bias, electrons enter the gap through the Andreev reflection (AR) mechanism, which produces a hole that travels in the opposite direction for every electron that enters the gap. The net charge of 2e that moves as supercurrent results in the doubling of conductance, i.e., G_NS/G_NN = 2. This ratio is called the normalized conductance. When the normal metal is a ferromagnet with perfect polarization, i.e., P = 1, the probability for the electron to make a pair with another electron of opposite spin is virtually zero, and therefore AR is completely suppressed at the interface, as illustrated in Figure 1(a). This leads to zero conductance, i.e., G_NS/G_NN = 0. A simple linear interpolation between these two extremes gives G_NS/G_NN = 2(1 − P), and based on this ballistic assumption, Upadhyay et al. [7] and Soulen et al. [3,30] independently made the first PCAR magnetic polarization measurements, though the idea of deducing spin polarization from conductance had already been proposed by de Jong et al. [27]. The theoretical normalized conductance for different polarizations can be seen in Figure 1(b). They fit the entire normalized differential conductance curves for Co, Ni, and some compound ferromagnets, as well as Cu.

Of course, this ballistic assumption is insufficient, and the effects of diffusivity, impurities and surface properties at the contact must be incorporated in order to make better fits to the experimental curves. Mazin et al. [31] and Strijkers et al. [32] proposed a straightforward extension to the BTK theory, which then became a more standard method for polarization measurements with PCAR. As scattering suppresses AR at low bias and creates sharp peaks in the conductance at eV = ±Δ, a careful account of the diffusive transport is necessary to obtain a more reliable estimate in polarization measurements: suppression of AR may be misinterpreted as an overestimate of the polarization if scattering is not properly accounted for. A different parameterization of the BTK coefficients was then proposed and used for spin polarization measurements in half-metallic CrO2 [33]. The modified BTK versions of Mazin and Strijkers are fairly similar, and a comparison for the CrO2 system reveals only a 0.02 difference in the measured polarization, which is about the accuracy of the PCAR method [34].
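In the ideal ballistic limit just described, P can be read directly off the normalized zero-bias conductance. A minimal sketch of that inversion (illustrative values; real analyses fit the full modified BTK curve, with barrier strength and temperature as additional parameters):

```python
# Minimal sketch of the ballistic PCAR relation G_NS/G_NN = 2*(1 - P)
# at zero bias, inverted to estimate the polarization P of the ferromagnet.
def polarization_from_g(g_zero_bias):
    # g = 2(1 - P)  =>  P = 1 - g/2; valid only for an ideal ballistic contact (Z = 0)
    return 1.0 - g_zero_bias / 2.0

for g in (2.0, 1.3, 0.9, 0.0):   # illustrative normalized zero-bias conductances
    print(f"G_NS/G_NN = {g:.1f}  ->  P = {polarization_from_g(g):.2f}")
# g = 2 gives P = 0 (ordinary metal); g = 0 gives P = 1 (half-metal)
```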
These details shall be discussed in the theoretical sections. The model also incorporates proximity effects, which can reduce the effective gap of superconductors. Hundreds of related works on PCAR magnetic measurements have appeared following these main experimental and theoretical achievements. For instance, Pérez-Willard et al. [35] performed PCAR measurements on an Al/Co contact fabricated with the RIE method [4] and analyzed the dependence of the conductance on temperature and magnetic field. Temperature, as predicted by the extended BTK model, reduces the effective superconducting gap, and they still found nice agreement with the theory apart from temperatures close to Tc. Application of a magnetic field parallel to the insulating layer also modifies the Andreev spectra: magnetic fields reduce the height of the two maxima around the gap, and the transition to normal conductance at the threshold field was abrupt. Panguluri et al. [36] performed PCAR measurements on MnAs epitaxial films grown on [011] GaAs using Pb and Sn point contacts. They also performed a phonon spectra analysis (d²I/dV²) of the contacts and concluded that smaller contact diameters are necessary to achieve truly ballistic transport; to obtain reliable PCAR measurements, contact sizes around 10 nm or smaller are generally preferable. PCAR can also be used to measure spin diffusion lengths. For example, Geresdi et al. and others [37,38] used PCAR to measure spin relaxation in Pt thin films grown on top of a ferromagnetic Co layer, whereby the temperature dependence was investigated and various sources of the spin relaxation in Pt were identified.

The widespread use of the BTK theory extension for PCAR spectroscopy has been questioned by Xia et al. [39], who argued that realistic interface conditions must be considered if PCAR measurements are to be valid at all. From theoretical work on giant magnetoresistance it is generally known that reflection processes at the interface between nonmagnetic and ferromagnetic materials are strongly spin dependent [40], yet the model used in PCAR experiments never introduced spin-dependent scattering at the interface. Xia et al. found that failing to take a spin-dependent scattering potential into account results in poor fits for Pb/Co systems. Grein et al. [41] recently proposed a spin-active scattering model of PCAR spectra, which includes spin filtering and spin mixing effects. They found that the shape of the interface potential has important consequences for the spin mixing effects, which probably makes it necessary to reconsider the general validity of some PCAR measurements once again.

Individual Quantum Channel Measurements

The second important landmark in the applications of the PCAR method is the determination of the individual transmission coefficients of an atomic point contact (APC) [10,11], often called a quantum point contact (QPC). A typical APC consists of only a small number of eigenchannels, and each of them is characterized by a transmission coefficient τn. Each eigenchannel contributes to the conductance by G0τn, where G0 is the quantum of conductance, G0 = 2e²/h. The total conductance of an APC is thus given by [13,14]

$$G = \frac{2e^2}{h}\sum_n \tau_n$$
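As a concrete reading of this sum, here is a minimal sketch (the three channel transmissions are illustrative numbers, not fitted values from the experiments discussed below):

```python
# Minimal sketch of the Landauer conductance G = (2e^2/h) * sum_n tau_n.
G0 = 7.748e-5   # conductance quantum 2e^2/h in siemens

def conductance(taus):
    return G0 * sum(taus)

taus = [0.68, 0.22, 0.10]   # illustrative transmissions of three channels
G = conductance(taus)
print(f"G = {G:.3e} S = {G/G0:.2f} G0")
# Several partially open channels can add up to ~1 G0, which is why the
# normal-state conductance alone cannot reveal the individual tau_n.
```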
The quantitative information on individual conductance channels has been inaccessible through normal conductance measurements, but for superconducting systems this can be extracted due to the sensitivity of the so-called sub-gap structures (SGS) of the superconductor at low bias to small changes of each conductance channels. The SGS originates from multiple Andreev reflection (MAR) [42] between two superconductors and the centre normal (vacuum) region, which we shall discuss in detail later in the theory section. This presumably resolves the old question that whether a quantum conductance in the measurements actually corresponds to a number of partially open channels, instead of just one channel. Scheer et al. [10] demonstrated using a superconducting Aluminium APC fabricated with MCBJ method, and fitted the time averaged current with the theoretical model based on the quantum Hamiltonian theory [43]. They found that a single Al atomic contact actually corresponds to three partially open eigenchannels, which exactly correspond to the number of the valence orbitals as illustrated in Figure 2. This conclusion is further verified also for Pb and Nb APCs [12]. The study is very fundamental to our understanding in the science of molecular electronics and mesoscopic transport in general. The total current can be analyzed from the independent current contribution of each channels, i.e., I ( V ) = n I n ( V , τ n ) = 2 e h T ( E , V ) [ f L ( E ) f R ( E ) ] d Efrom which the individual τn can be deduced, the so-called “PIN code” of the eigenchannels. We shall later discuss the derivation of the transmission terms using quantum Hamiltonian model. Excellent quantitative agreements with the experimental data provide a strong justification for the validity of the subsequent developing theory of superconducting quantum transport. Magnetic Impurities Measurements PCAR spectroscopy has also been used to detect and identify magnetic impurities on superconducting surfaces. Yazdani et al. [44] used gold scanning tunneling microscope (STM) tip to study excitations from magnetic adatoms of Mn and Gd on superconducting Nb substrate. Atoms such as Cr, Mn and Gd have been found to reduce the transition temperature Tc of Nb films, and magnetic impurities in general reduce superconducting order parameter and lead to quasiparticle excitations within the superconducting gap [45,46]. Excitations from the magnetic impurities were confirmed by Yazdani et al. by comparing them with non-magnetic adatoms such as Ag, which showed almost featureless conductance across the entire bias. Ji et al. [21] performed an improved experiment with both the STM tip and the substrate made from superconducting materials Nb and Pb respectively. Unlike Yazdani's work where a quantitative analysis for adatom identifications had been hindered by poor energy resolutions, Ji et al. made very significant improvements due to the existence of MAR between the two superconductors which provides high resolution SGS in the conductance, as illustrated in Figure 3. More symmetric SGS structures which are resolved up to 0.1 meV can clearly be seen in the conductance measurements. They claimed that the method can potentially be used to unambiguously detect magnetic adatoms on a superconducting surface, because these spectra are unique fingerprints of the spin states of adatoms, as a result of complex interactions between Andreev bound states (ABS) process and the electronic properties of the adatoms. 
They also performed similar measurements on dimers of Mn and Cr. Ji et al. used a thin superconducting Pb film deposited on clean Si(111), up to 20 monolayers thick. The superconducting gap of the Pb thin film was found to be 1.30 meV, while that of the Nb STM tip was between 1.44 and 1.52 meV. The effective energy gap of the system turned out to be around 3.0 meV, as can be seen in Figure 3(b) for a clean Pb surface. Different numbers of peaks with varying intensities were observed for different adatoms. Ji et al. suggested that these correspond to individual angular momentum channels, though this still requires further investigation. The electron transport process between the STM tip and the adatoms clearly involves only a few quantum channels, and the interactions of the ABS with the spin impurities need to be modeled microscopically in order to fit and interpret the experimental data. Apart from the interface issues, which are always tricky, first principles calculations of the adatoms combined with a suitable model of the superconductors could possibly enable unambiguous identification of magnetic adatoms.

Vibrational Mode Measurements

Excitations of vibrational modes by traversing electrons have been observed in metallic electrodes attached to nanostructures and molecules such as carbon nanotubes [47,48], hydrogen molecules [49], organic molecules [50,51], gold atomic chains [52], and fullerenes [53]. When a vibrational mode resonates with the bias energy, the conductance can either be enhanced or suppressed by the vibrations. The vibrational energy of the nth mode is given by ħωn, and the bias at which this takes place is Vn = ħωn/e. Thus in such systems, vibrational modes can be detected directly from current measurements alone, and to determine the actual modes one must combine this with standard first principles calculations in order to model the complete vibrating molecule.

A recent application of PCAR is the study of the vibrational modes of a suspended Nb dimer conducted by Marchenkov et al. [22], as illustrated in Figure 4. The dimer was fabricated with the MCBJ technique, and from a previous study based on density functional theory (DFT) calculations and conductance measurements it was confirmed that the configuration at the tip before the break-up was a Nb dimer, where the symmetric or asymmetric position of the dimer across the gap corresponds to high or low conductance respectively [54,55]. Though in this particular setup the dimer is made of the same atoms as the leads, the idea is still applicable to other types of molecules probed with a similar technique. This would enable us to study the vibrational modes of a truly isolated molecule, unlike the ensemble behaviour seen in conventional IR, UV or NMR spectroscopies [56,57]. The measurements were performed at various temperatures from well below Tc up to 12 K. Resonances for high conductance configurations (the dimer symmetric between the leads) were analysed, which appeared both inside and outside the SGS. Particularly for resonances outside the SGS, the so-called over-the-gap structure (OGS), they observed more symmetric and persistent patterns throughout different temperatures, until these diminished as T > Tc. Unlike the usual SGS, which originate from MAR, the OGS do not change position with bias as the temperature varies. The OGS is not governed by MAR; rather, Marchenkov et al.
suggested that the OGS originates from the atomic scale structural and dynamical properties of the dimer, which resonate with the Josephson current oscillations. The exact shapes, amplitudes and widths of these features correspond to different vibronic and electronic coupling regimes. The time dependent electromagnetic fields of the Josephson oscillations resonate with the vibrational eigenmodes of the Nb dimer. Further, they compared the frequencies with ab initio calculations based on DFT and found nice agreement for three different modes of vibration: longitudinal, transverse and wagging. The method offers new physics that can be used to study the dynamical properties of small molecules in general.

Theoretical Surveys

At the heart of the supercurrent transport mechanism is the so-called Andreev reflection (AR) process, which can take place when a superconductor is in contact with a normal metal [58]. In the superconductor the quasiparticles form pairs of opposite spins, commonly known as Cooper pairs [59]. For a normal electron to move into the superconductor, it needs to make a pair with another electron of opposite spin. At bias higher than the superconducting gap energy, denoted as Δ, the electron enters as a quasielectron which relaxes into the Cooper pair condensate over a charge relaxation distance. At bias eV < Δ, the superconducting gap prevents direct transfer of single electron states, and as a result a hole is reflected back at the interface in order to create a Cooper pair in the superconductor, resulting in the doubling of the conductance discussed in Section 2.1.

When two superconductors are separated by a normal region, a series of electron and hole reflection processes take place, called multiple Andreev reflections (MAR) [42]. An illustration can be made with a simple diagram, as in Figure 5, where a normal region is sandwiched between two superconductors with identical energy gaps and a small bias eV < Δ is applied across the superconductors. The current oscillates across the junction with a frequency proportional to the bias, ω = 2eV/ħ, known as the AC Josephson frequency, and the MAR process creates SGS in the IV curves. To illustrate the MAR process, we can use the following argument: initially an electron at the interface between N and S on the left is accelerated by the external field toward the right, but is unable to enter due to the energy gap. This results in the reflection of a hole moving back to the left. The charge of 2e (one from the electron, the other from the hole moving in the opposite direction) increases the supercurrent. The process is repeated until the particle gains sufficient energy to overcome the gap. Octavio et al. [42] explain, using an extension of the BTK model [23], the SGS in the supercurrent behaviour when the bias is comparable to or smaller than Δ. Many researchers have suggested that the SGS are basically current singularities that take place at biases V = 2Δ/en, where n = 1, 2, 3, …. However, the details of the SGS also involve subtle aspects that are still missing from the semi-empirical approaches, such as delicate interface properties. An entirely first principles microscopic theory would be needed to quantitatively model the nature of the interface. A successful quantum theory that can do so would enable PCAR to be used as a reliable sensor with ultrahigh sensitivity, since the SGS provide sub-milli-electronvolt energy resolution.
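Tabulating these singularity positions is straightforward. A minimal sketch, using an illustrative gap value close to that of aluminium:

```python
# Minimal sketch: subgap-structure (SGS) positions V_n = 2*Delta/(e*n)
# from multiple Andreev reflections between two identical superconductors.
delta_meV = 0.18   # illustrative gap, roughly that of aluminium

for n in range(1, 6):
    v = 2 * delta_meV / n   # bias in mV, since e cancels when Delta is in meV
    print(f"n = {n}: V = {v:.3f} mV")
# The series 2*Delta/e, Delta/e, 2*Delta/3e, ... accumulates toward zero bias,
# which is why SGS fits are so sensitive to the individual channel transmissions.
```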
The BTK Theory

Now we shall summarize the derivation of the phenomenological treatment of transport through a normal-superconducting (NS) interface in the famous BTK theory [23]. First, let us discuss some elementary results of the Bogoliubov de Gennes equation, from which the BTK theory is derived. Readers who are not familiar with superconductivity can consult some well known references [59].

The Bogoliubov de Gennes Equation

The Bogoliubov de Gennes equation [60] describes quasiparticles of electrons and holes in superconductors, analogous to the way the Schrödinger equation describes electrons and holes in normal solids. Using the standard two-state basis of electron-like and hole-like states, we can write the wave function as

$$\psi(x,t) = \begin{pmatrix} f(x,t) \\ g(x,t) \end{pmatrix}$$

and the Bogoliubov de Gennes equation reads

$$i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = \begin{pmatrix} H(x) & \Delta(x) \\ \Delta^*(x) & -H^*(x) \end{pmatrix} \psi(x,t)$$

where

$$H(x) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - E_F$$

Δ(x) is the spatially dependent superconducting energy gap (or quasiparticle coupling) and E_F is the Fermi energy. The mathematical structure of the equation implies time-reversed dynamics of the holes compared to that of the electron quasiparticles. For the simplest scenario, where Δ(x) = Δ and V(x) = 0, we have an eigenfunction solution of the form

$$\psi(x,t) = \begin{pmatrix} u \\ v \end{pmatrix} \exp\left[i(kx - \omega t)\right]$$

which gives the eigenenergy

$$E^2 = \left(\frac{\hbar^2 k^2}{2m} - E_F\right)^2 + \Delta^2$$

and a sketch of this energy can be seen in Figure 6 for a normal metal (Δ = 0) and a superconductor (Δ > 0). The positive solution of the energy refers to the electron quasiparticles and the negative one to hole quasiparticles. A superconducting energy gap opens whenever Δ > 0, and it is typically of the order of 1 meV for elemental (low Tc) superconductors, while E_F is several eV in magnitude. Another useful quantity is the density of states (DOS), which follows from elementary solid state physics,

$$\rho(k)\,dk = \frac{V}{(2\pi)^3}\,4\pi k^2\,dk$$

and a simple expression for the ratio of the DOS in the superconducting state to that in the normal state is easily derived. Assuming equal Fermi energies in N and S, (E_F)_N = (E_F)_S, and in the limit of an energy range small compared to the Fermi energy, we have

$$\frac{\rho_S(E)}{\rho_N(E)} \equiv \rho(E) = \frac{E}{\sqrt{E^2 - \Delta^2}}$$

for E > Δ and zero otherwise.

Deriving Supercurrent in the BTK Theory

The original BTK theory solves the scattering conditions to obtain reflection and transmission probabilities at the interface between a normal metal and a superconductor using the simplest possible assumptions. First, BTK theory assumes equal Fermi energies in the normal metal and the superconductor. Second, the superconducting gap Δ(x) is assumed to be spatially independent. In reality, when a superconductor is in contact with a normal metal there will be proximity effects [60] due to the diffusion of Cooper pairs into the metal, which reduces the effective gap at the superconductor interface. Proximity effects require a spatially dependent Δ over a certain length scale around the interface; in the BTK theory, however, we shall neglect these effects and assume a sudden change of Δ. Third, we shall neglect interactions in both the superconductor and the metal, i.e., V(x) = 0 deep inside the conductors, and in the vicinity of x → 0 we model a simple interface scattering potential V(x) = Hδ(x), where H is the strength of the scattering potential.
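Before proceeding, the two elementary results above, the gapped dispersion and the DOS ratio, can be checked numerically. A minimal Python sketch (energies in units of Δ; purely illustrative):

    import numpy as np

    delta = 1.0
    # BdG dispersion: E(k)^2 = xi^2 + Delta^2, with xi = hbar^2 k^2/(2m) - E_F
    xi = np.linspace(-5.0, 5.0, 11)
    E = np.sqrt(xi**2 + delta**2)
    print("minimum quasiparticle energy:", E.min())   # -> 1.0, the gap edge

    def dos_ratio(E, delta=1.0):
        """rho(E) = E/sqrt(E^2 - Delta^2) for E > Delta, zero otherwise."""
        E = np.asarray(E, dtype=float)
        out = np.zeros_like(E)
        above = E > delta
        out[above] = E[above] / np.sqrt(E[above]**2 - delta**2)
        return out

    print(dos_ratio([0.5, 1.1, 2.0]))  # zero inside the gap, peaked at the gap edge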
Such a simple (but unrealistic) delta-function scattering potential allows analytical spatial solutions for the wave function as follows:

$$\psi_N(x) = \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{i(k_F + k_N)x} + a \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{i(k_F - k_N)x} + b \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{-i(k_F + k_N)x}$$

$$\psi_S(x) = c \begin{pmatrix} u \\ v \end{pmatrix} e^{i(k_F + k_S)x} + d \begin{pmatrix} v \\ u \end{pmatrix} e^{-i(k_F - k_S)x}$$

The wave numbers k_N and k_S are measured from the Fermi wave number k_F. Referring to Figure 6, the incident electron e has probability of unity, and it can experience Andreev reflection (a) or normal reflection (b) at the interface. The transmission can take the form of electron-like (c) or hole-like (d) quasiparticles in the superconductor. Boundary conditions at the interface give

$$\psi_N(0) = \psi_S(0) = \psi(0)$$

$$\psi_S'(0) - \psi_N'(0) = \frac{2mH}{\hbar^2}\,\psi(0)$$

This allows for the solution of the coefficients and therefore the probabilities A = |a|², B = |b|², etc. The expressions for A and B are listed in Table 1, while the transmission probabilities C and D can be calculated from conservation of probability, C + D = 1 − A − B; we do not actually need their explicit expressions in order to derive the current later. The dimensionless quantity Z, defined as

$$Z^2 = \frac{mH^2}{2\hbar^2 E_F}$$

is often called the barrier strength, representing the strength of the scattering potential Hδ(x). Consider first energies less than the gap energy, i.e., |E| < Δ. The incident electrons cannot enter the superconductor as quasiparticles, therefore A + B = 1. If Z = 0, all electrons are Andreev reflected (A = 1, B = 0), while for Z > 0 some electrons are normally reflected (A < 1, B > 0). For comparison we also need the normal-normal (NN) interface, obtained by letting Δ → 0 or ρ → 1. The transmission, evaluated as 1 − (A + B), is then

$$T = \frac{1}{1 + Z^2}$$

which is the standard result for delta potential scattering, while the Andreev reflection probability at the Fermi energy is

$$A = \left[\frac{1}{1 + 2Z^2}\right]^2$$

which is roughly the square of the normal transmission. This reflects the fact that the AR process requires the simultaneous transmission of two independent electrons. Knowing the probabilities A and B, we are ready to calculate the current, which can be deduced from either the left (normal metal) or the right (superconductor) side of the interface. Let us consider the normal metal side: in an energy interval δE there is a current contribution to the right from the incident electrons, a contribution from AR, which reflects holes to the left (i.e., current to the right), and the normal reflection, which contributes current to the left. Summing these up we have

$$\delta I(E) = e\,\mathcal{A}\,v(E)\,\rho(E)\left[1 + A(E) - B(E)\right] f(E)\,\delta E$$

where e is the electronic charge, 𝒜 is the point contact cross-sectional area, v(E) is the electron velocity, ρ(E) is the DOS, and f(E) is the Fermi-Dirac distribution function. There is an equivalent current flowing to the left from the superconductor, but with a Fermi-Dirac distribution shifted by the applied bias,

$$\delta I(E) = e\,\mathcal{A}\,v(E)\,\rho(E)\left[1 + A(E) - B(E)\right] f(E - eV)\,\delta E$$

and the total current can be written as

$$I = e\,\mathcal{A} \int v(E)\,\rho(E)\left[1 + A(E) - B(E)\right]\left[f(E - eV) - f(E)\right] dE$$

The integration is dominated by a small energy region around the Fermi level, since the term f(E − eV) − f(E) vanishes at large energies.
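The coefficients A and B that enter this integral (the Table 1 expressions, quoted at the end of this review) are simple enough to code directly. A minimal Python sketch, which also checks the limits just discussed:

    import numpy as np

    def btk_AB(E, delta=1.0, Z=0.5):
        """Andreev (A) and normal (B) reflection probabilities, per Table 1."""
        E = abs(E)
        if E < delta:
            A = delta**2 / (E**2 + (delta**2 - E**2) * (1 + 2 * Z**2)**2)
            B = 1.0 - A                      # no transmission below the gap
        else:
            rho = E / np.sqrt(E**2 - delta**2)
            denom = (rho + 1 + 2 * Z**2)**2
            A = (rho**2 - 1) / denom
            B = 4 * Z**2 * (1 + Z**2) / denom
        return A, B

    print(btk_AB(0.0, Z=0.0))       # (1.0, 0.0): perfect Andreev reflection
    print(btk_AB(0.0, Z=1.0))       # A = 1/(1+2Z^2)^2 = 1/9 at the Fermi energy
    A, B = btk_AB(50.0, Z=0.5)      # far above the gap ...
    print(1 - B, 1 / (1 + 0.5**2))  # ... 1 - B approaches T = 1/(1+Z^2)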
In practice eV ∼ Δ ≪ E_F, and thus the velocity and DOS in the current integral can be taken as constants:

$$I = e\,\mathcal{A}\,v\,\rho \int \left[1 + A(E) - B(E)\right]\left[f(E - eV) - f(E)\right] dE$$

The conductance, defined as G = dI/dV, can be derived for both the NN and NS systems, giving the conductance ratio of the NS to the NN system as

$$\frac{G_{NS}}{G_{NN}} = (1 + Z^2) \int \left[1 + A(E) - B(E)\right] f'(E - eV)\,dE$$

which is the main result of the celebrated BTK theory. Here f′(E) refers to the derivative of f(E) with respect to energy. To calculate the current through SNS systems, Octavio et al. combined two BTK formulations and used the result to explain MAR effects in SNS junctions [42]. Interested readers can refer to the original paper for details.

In order to extend the BTK theory to measure the spin polarization of ferromagnets, Mazin et al. [31] and Strijkers et al. [32] proposed that the current I is a superposition of a fully polarized current PI and a fully unpolarized current (1 − P)I. The unpolarized current can be calculated using the standard BTK theory, while the polarized current is calculated with modified expressions à and B̃ for the reflection probabilities. The modified coefficients are determined as follows. The fully polarized current consists of one electron spin species only, so there is no Andreev reflection, i.e., à = 0 and B̃ + C̃ + D̃ = 1. At small energies, |E| < Δ, there is no transmission, implying B̃ = 1 [32]. For |E| > Δ, B̃ can be determined by assuming that the ratio between normally reflected and transmitted electrons is independent of the polarization, in other words

$$\frac{\tilde B}{\tilde C + \tilde D} = \frac{B}{C + D}$$

which subsequently gives

$$\tilde B = \frac{B}{1 - A}$$

Complete tabulations of à and B̃ can be found in the original paper by Strijkers et al. [32]. Mazin et al. proposed a slightly different approach in which, for electrons with energy above the superconducting gap, the Andreev reflected holes are described as spatially decaying evanescent waves with finite probability but carrying no current. This difference turns out to be a minor issue, as the two approaches differ only by a negligible amount when used to interpret experiments [34]. The conductance ratio for the spin polarized system is hence given by

$$\frac{G_{NS}}{G_{NN}} = P(1 + Z^2) \int \left[1 + \tilde A(E) - \tilde B(E)\right] f'(E - eV)\,dE + (1 - P)(1 + Z^2) \int \left[1 + A(E) - B(E)\right] f'(E - eV)\,dE$$

In the metallic limit of perfect contact there is perfect transparency (Z = 0), and the normalized zero-bias conductance ratio is simply 2(1 − P), as stated earlier in the section on the experimental surveys. (A minimal numerical sketch of this formula is given at the end of this subsection.)

Quantum Hamiltonian Theory

In this section we summarize a model based on quantum Hamiltonian theory, whose origin can be traced back to the early work of Bardeen, who proposed a microscopic Hamiltonian approach to tunneling junction problems [61]. We adopt the nonequilibrium Green's function (NEGF) formalism to formulate the relevant physical quantities. NEGF is a big topic in its own right, and readers who are not familiar with the formalism are recommended to browse reference [24], and perhaps some many-body texts such as references [62] and [63]. Historical accounts of the development of the theory for superconducting resonant tunneling systems can be found in the well known references [43,64-70], and readers interested in the details should consult the original papers. In particular, we shall illustrate in detail the method of Sun et al. [67] for the supercurrent formulation. The quantum Hamiltonian theory is based on the Bardeen-Cooper-Schrieffer (BCS) model [25], and it still has free adjustable parameters, such as the tunneling Hamiltonian and the properties of the leads.
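As promised above, here is a minimal sketch of the modified (spin-polarized) BTK conductance. Assumptions: zero temperature, so that −f′(E − eV) acts as a delta function at E = eV and the curve can be evaluated point by point; Ã = 0 everywhere; B̃ = 1 below the gap and B/(1 − A) above it; and it reuses btk_AB() from the previous sketch:

    def g_normalized(eV, delta=1.0, Z=0.0, P=0.0):
        """Zero-temperature normalized PCAR conductance, modified BTK model."""
        A, B = btk_AB(eV, delta, Z)
        if abs(eV) < delta:
            A_t, B_t = 0.0, 1.0            # fully polarized: no AR, total reflection
        else:
            A_t, B_t = 0.0, B / (1.0 - A)  # Strijkers et al. prescription
        return (1 + Z**2) * (P * (1 + A_t - B_t) + (1 - P) * (1 + A - B))

    # Clean metallic limit: the zero-bias conductance ratio is 2(1 - P).
    for P in (0.0, 0.5, 1.0):
        print(P, g_normalized(0.0, Z=0.0, P=P))   # -> 2.0, 1.0, 0.0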
In order to have a truly first-principles method which takes into account the real atomic structure of the device, the theory of superconductivity needs to be combined, for example, with density functional theory (DFT). Fortunately such formalisms are already under development [71,72], and combining them with NEGF would enable first-principles calculations of superconducting transport. This is perhaps a future endeavor for researchers in the field.

Model Hamiltonian and Current Derivation

In quantum Hamiltonian theory, a system with two metallic leads can be represented by two independent Hamiltonians, H_L and H_R, together with a weak tunneling Hamiltonian between the leads, H_T, that represents the coupling by which electrons are transferred from one lead to another. To model the experimental systems described in Sections 2.3 and 2.4, where quantum point contacts are used to probe magnetic impurities or molecules, we can add an intermediate centre region where electrons transit before they tunnel to the next lead. This can be thought of as a quantum dot, represented by a Hamiltonian H_C. For a vacuum region between the leads, such as in Section 2.2, we do not need H_C. The schematic for the system is shown in Figure 7. The Hamiltonian of the whole system can be written as

$$H(t) = H_L + H_T(t) + H_C + H_R$$

where [43]

$$H_L + H_R = \sum_{k,\sigma,\alpha=L,R} \varepsilon_{k\alpha\sigma}\, a^\dagger_{k\alpha\sigma} a_{k\alpha\sigma} + \sum_{k,\alpha=L,R} \left[ \Delta_{k\alpha}\, a^\dagger_{k\alpha\uparrow} a^\dagger_{-k\alpha\downarrow} + \mathrm{H.c.} \right]$$

$$H_C = \sum_{i,\sigma} \varepsilon_{i\sigma}\, c^\dagger_{i\sigma} c_{i\sigma} + \text{interaction terms}$$

$$H_T(t) = \sum_{k,i,\sigma,\alpha=L,R} t_{k\alpha i}\, e^{i(\phi_\alpha + 2eV_\alpha t)}\, a^\dagger_{k\alpha\sigma} c_{i\sigma} + \mathrm{H.c.}$$

The leads are governed by the mean-field BCS theory [59]. The momentum index k refers to the leads, and the index i (or j) refers to the quantum dot, which contains discrete energy levels ε. σ is the spin index, V_α is the chemical potential shift due to the bias across the junction, and φ_α is the superconducting phase of each lead. The operators a (a†) annihilate (create) particles in their respective leads, while the operators c (c†) do the same for the quantum dot. The time-dependent phase is a consequence of the AC Josephson effect at finite bias, and it is incorporated into the tunneling terms following a gauge transformation suggested by Rogovin et al. [73]. For superconducting systems governed by the BCS Hamiltonian, we construct Green's functions in 2 × 2 Nambu (spinor) space [74], similar to the previous construction for Bogoliubov de Gennes; this is due to the anomalous terms in the potential, which contain two operators with opposite spins and momenta. The Nambu representation provides the consistent and convenient form of Green's function required for the evaluation of the equations of motion and for perturbation theory.
The spinor operators are defined as

$$\alpha_k = \begin{pmatrix} a_{k\uparrow} \\ a^\dagger_{-k\downarrow} \end{pmatrix}, \qquad \alpha^\dagger_k = \left( a^\dagger_{k\uparrow},\; a_{-k\downarrow} \right)$$

For example, we can calculate the (retarded) free propagator g^r for the mean-field BCS model as

$$g^r(k,t,t') = -i\theta(t - t')\,\langle \{\alpha_k(t), \alpha^\dagger_k(t')\} \rangle = -i\theta(t - t') \begin{pmatrix} \langle\{a_{k\uparrow}(t), a^\dagger_{k\uparrow}(t')\}\rangle & \langle\{a_{k\uparrow}(t), a_{-k\downarrow}(t')\}\rangle \\ \langle\{a^\dagger_{-k\downarrow}(t), a^\dagger_{k\uparrow}(t')\}\rangle & \langle\{a^\dagger_{-k\downarrow}(t), a_{-k\downarrow}(t')\}\rangle \end{pmatrix}$$

Evaluation of this term gives [67,68]

$$\sum_k g^r(k,t,t') = -i\theta(t - t') \int d\varepsilon\; \rho_N\, \beta(\varepsilon)\, e^{-i\varepsilon(t - t')} \begin{pmatrix} 1 & -\Delta/\varepsilon \\ -\Delta/\varepsilon & 1 \end{pmatrix}$$

where ρ_N is the normal density of states and β(ε) is a complex term related to the BCS DOS, defined as

$$\beta(\varepsilon) = \frac{|\varepsilon|}{\sqrt{\varepsilon^2 - \Delta^2}}\,\theta(|\varepsilon| - \Delta) + \frac{\varepsilon}{i\sqrt{\Delta^2 - \varepsilon^2}}\,\theta(\Delta - |\varepsilon|)$$

Another useful free propagator is the lesser propagator, given by

$$\sum_k g^<(k,t,t') = i \int d\varepsilon\; \rho_N\, f(\varepsilon)\, \mathrm{Re}[\beta(\varepsilon)]\, e^{-i\varepsilon(t - t')} \begin{pmatrix} 1 & -\Delta/\varepsilon \\ -\Delta/\varepsilon & 1 \end{pmatrix}$$

The time-dependent supercurrent across the junction can be derived from the expectation value of the time derivative of the number operator in either lead, say the left one for convenience:

$$I(t) = -e\langle \dot N_L \rangle = \frac{ie}{\hbar}\langle [N_L(t), H(t)] \rangle = \frac{2e}{\hbar}\, \mathrm{Re} \sum_{i,k} \mathrm{Tr}\left\{ G^<_{i,Lk}(t,t)\, t_{Li}(t)\, \sigma_z \right\}$$

The term G^<_{i,Lk}(t,t′) is the lesser Green's function, defined as

$$G^<_{j,Lk}(t,t_1) = i \begin{pmatrix} \langle a^\dagger_{Lk\uparrow}(t_1)\, c_{j\uparrow}(t) \rangle & \langle a_{-Lk\downarrow}(t_1)\, c_{j\uparrow}(t) \rangle \\ \langle a^\dagger_{Lk\uparrow}(t_1)\, c^\dagger_{j\downarrow}(t) \rangle & \langle a_{-Lk\downarrow}(t_1)\, c^\dagger_{j\downarrow}(t) \rangle \end{pmatrix}$$

and t_{Li}(t) is the tunneling matrix, given by

$$t_{Lj}(t) = \begin{pmatrix} t_{Lj}\, e^{i(\phi_L + 2eV_L t)} & 0 \\ 0 & -t^*_{Lj}\, e^{-i(\phi_L + 2eV_L t)} \end{pmatrix}$$

The term σ_z is the Pauli matrix

$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

The next step is to express the current in terms of the free propagators of the leads and the Green's function of the quantum dot. This is done through the NEGF procedure, where the corresponding time-ordered Green's function for G^<_{i,Lk} is evaluated with the NEGF time contour integral, followed by Langreth's analytical continuation. This gives

$$G^<_{j,Lk}(t,t') = \sum_i \int dt''\, \left( G^r_{ji}(t,t'')\, t_{Li}(t'')\, g^<_{Lk}(t'' - t') + G^<_{ji}(t,t'')\, t_{Li}(t'')\, g^a_{Lk}(t'' - t') \right)$$

where the quantum dot's Green's functions are given by

$$G^r_{ij}(t,t_1) = -i\theta(t - t_1) \begin{pmatrix} \langle\{c_{i\uparrow}(t), c^\dagger_{j\uparrow}(t_1)\}\rangle & \langle\{c_{i\uparrow}(t), c_{j\downarrow}(t_1)\}\rangle \\ \langle\{c^\dagger_{i\downarrow}(t), c^\dagger_{j\uparrow}(t_1)\}\rangle & \langle\{c^\dagger_{i\downarrow}(t), c_{j\downarrow}(t_1)\}\rangle \end{pmatrix}$$

$$G^<_{ij}(t,t_1) = i \begin{pmatrix} \langle c^\dagger_{j\uparrow}(t_1)\, c_{i\uparrow}(t) \rangle & \langle c_{j\downarrow}(t_1)\, c_{i\uparrow}(t) \rangle \\ \langle c^\dagger_{j\uparrow}(t_1)\, c^\dagger_{i\downarrow}(t) \rangle & \langle c_{j\downarrow}(t_1)\, c^\dagger_{i\downarrow}(t) \rangle \end{pmatrix}$$

We can then substitute these into G^< and write out the current equation. For simplicity, in the current example we include only one localized level in the quantum dot, i.e., transport is through a single eigenchannel. Using the expressions for the BCS free propagators above and rearranging terms, we obtain

$$I(t) = \frac{2e}{\hbar}\, \mathrm{Im} \int_{-\infty}^{t} dt_1 \int \frac{d\varepsilon}{2\pi}\, e^{i\varepsilon(t - t_1)}\, \mathrm{Tr}\left\{ \left[ \mathrm{Re}(\beta_L(\varepsilon))\, f_L(\varepsilon)\, G^r(t,t_1) + \beta_L(\varepsilon)\, G^<(t,t_1) \right] \Gamma_L\, \tilde\Sigma_L(\varepsilon)\, \sigma_z \right\}$$

where Σ̃_{L/R}(ε) is a product term arising from the rearrangement, defined as

$$\tilde\Sigma_{L/R}(\varepsilon) = \begin{pmatrix} e^{ieV_{L/R}(t_1 - t)} & -\frac{\Delta_{L/R}}{\varepsilon}\, e^{-i(\phi_{L/R} + eV_{L/R}(t_1 + t))} \\ -\frac{\Delta_{L/R}}{\varepsilon}\, e^{i(\phi_{L/R} + eV_{L/R}(t_1 + t))} & e^{-ieV_{L/R}(t_1 - t)} \end{pmatrix}$$

The term Γ_L is the linewidth matrix function, a product of the interlevel tunneling matrices and the normal density of states ρ_N,

$$\Gamma_{L;ij}(t,t_1) = 2\pi\, t_{Li}(t)\, t^*_{Lj}(t_1)\, \rho_{LN}$$

which is a constant in the case of a single-level quantum dot. Now, in order to solve G^{r/<}_{ij} we need to be more specific about the actual form of the interactions in Equation (27) for the quantum dot.
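Before specializing the dot, note that the β(ε) factor defined above is a direct transcription away from code. A small Python sketch (illustrative, Δ = 1):

    import numpy as np

    def beta(eps, delta=1.0):
        """Complex BCS factor: real BCS-DOS ratio outside the gap,
        purely imaginary (evanescent) inside the gap."""
        if abs(eps) > delta:
            return abs(eps) / np.sqrt(eps**2 - delta**2)
        # eps / (i*sqrt(Delta^2 - eps^2)) = -1j * eps / sqrt(Delta^2 - eps^2)
        return -1j * eps / np.sqrt(delta**2 - eps**2)

    print(beta(2.0))    # real, 2/sqrt(3): the BCS DOS enhancement
    print(beta(0.5))    # purely imaginary, so Re(beta) = 0 inside the gap

The vanishing of Re β inside the gap is what removes the single-quasiparticle contribution to the lesser propagator there.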
For illustration we can use the simplest case, where the quantum dot is non-interacting, which enables exact evaluation of G^{r/<}_{ij}. This corresponds to larger quantum dots, where charge screening is sufficiently strong that the interactions can be accounted for as an overall self-consistent potential. In such simple cases we can use the Dyson and Keldysh equations by first computing the corresponding self-energies. The self-energies can be calculated easily from the equations of motion, which take the same form as in the resonant tunneling model [66,67],

$$\Sigma^{r/<}_{L/R;ij}(t,t_1) = t^*_{L/R,i}(t) \left( \sum_k g^{r/<}_{L/R,k}(t,t_1) \right) t_{L/R,j}(t_1)$$

and using the BCS free propagators stated above we can easily obtain their explicit forms.

Time Averaged Current and Fourier Transformations

The Josephson current through an SNS QPC oscillates at very high frequency, typically in the terahertz range, which makes time-resolved quantities difficult to compare with experiments. A more convenient approach is to work with time-averaged quantities derived from Fourier transformation with respect to the correct intrinsic frequencies of the system. All dynamical quantities can be expanded as harmonics of the fundamental frequency ω = 2eV/ħ, i.e.,

$$I(t) = \sum_n I_n\, e^{in\omega t}$$

The time-averaged current is simply the zeroth order term I_0 (a toy numerical check of this harmonic bookkeeping is given at the end of this subsection). Due to the two-time correlations in the Green's functions, we require a transformation that accounts for both times in a consistent manner; this is done through a so-called double Fourier transform of the Green's functions,

$$G_{mn}(\varepsilon) = \frac{1}{2\pi} \int_{-T/2}^{T/2} dt_1\, e^{i(\varepsilon + n\omega)t_1} \int_{-T/2}^{T/2} dt\, e^{-i(\varepsilon + m\omega)t}\, G(t,t_1)$$

The retarded Green's function is calculated with the Dyson equation in Fourier-transformed form; the matrices here live in Fourier space and Nambu space, and for a multilevel system the full matrix is the tensor product of all three, i.e., [m, n] ⊗ [i, j] ⊗ [2 × 2]. The retarded function is obtained by straightforward inversion of the whole matrix, and the lesser function from the Keldysh equation with the entire composite matrices substituted:

$$G^r(\varepsilon) = \left[ g^r(\varepsilon)^{-1} - \left( \Sigma^r_L(\varepsilon) + \Sigma^r_R(\varepsilon) \right) \right]^{-1}$$

$$G^<(\varepsilon) = G^r(\varepsilon) \left( \Sigma^<_L(\varepsilon) + \Sigma^<_R(\varepsilon) \right) G^a(\varepsilon)$$

The advanced function is obtained from the retarded function by G^a = [G^r]^†, and the time-averaged current can then be expressed through the zeroth-order component of the Fourier transform:

$$I_0 = \frac{e}{\pi\hbar}\, \mathrm{Im} \int d\varepsilon\; \mathrm{Tr}\left\{ \left[ f_L(\varepsilon)\, \mathrm{Re}(\beta(\varepsilon))\, G^r_{00}(\varepsilon) + \tfrac{1}{2}\beta(\varepsilon)\, G^<_{00}(\varepsilon) \right] \Gamma_L(\varepsilon)\, \sigma_z \right\}$$

A sample plot of the time-averaged current and differential conductance (dI/dV) for a single-level quantum dot in an SNS QPC can be seen in Figure 8. Notice the rich SGS at small bias due to MAR, compared with the fairly featureless behaviour at higher bias eV > 2Δ. The quantum Hamiltonian theory enables us to incorporate more physics into the quantum dot. For example, to describe magnetic interactions of the impurities one may consider a model for H_C of the following form,

$$H_C = \sum_{i,\sigma} \varepsilon_{i\sigma}\, c^\dagger_{i\sigma} c_{i\sigma} + \sum_{i \neq j} U_{i,j}\, n_i n_j$$

or other suitable forms of interaction. With this, the underlying physics of MAR oscillating across a magnetic impurity can be studied, and general interactions can also be computed with first-principles methods. For such interacting systems the Green's function may be calculated perturbatively or with other methods; examples of such work are given by Avishai et al. [75] and Pala et al. [76].
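As the toy check promised above (this is only the Fourier bookkeeping, not the full NEGF calculation): build a periodic current with known harmonics and recover them, including the measurable DC component I_0, by projection over one period.

    import numpy as np

    w = 2 * np.pi                 # stands in for the Josephson frequency 2eV/hbar
    T = 2 * np.pi / w
    t = np.linspace(0.0, T, 4096, endpoint=False)

    I_n_true = {0: 0.7, 1: 0.2 - 0.1j, -1: 0.2 + 0.1j}  # Hermitian pairs -> real I(t)
    I_t = sum(c * np.exp(1j * n * w * t) for n, c in I_n_true.items()).real

    for n in (-1, 0, 1):
        I_n = np.mean(I_t * np.exp(-1j * n * w * t))    # (1/T) integral over a period
        print(n, np.round(I_n, 6))                      # recovers 0.7 at n = 0, etc.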
For a vacuum region between the superconducting leads we do not include H_C, and the resulting model is slightly simpler. The model Hamiltonian used in this case [43] is similar to Equation (25), but without the quantum dot:

$$H(t) = H_L + H_R + H_T(t)$$

where

$$H_T(t) = \sum_\sigma \left[ t\, e^{i(\phi_0 + 2eVt)}\, a^\dagger_{L\sigma} a_{R\sigma} + t^*\, e^{-i(\phi_0 + 2eVt)}\, a^\dagger_{R\sigma} a_{L\sigma} \right]$$

The tunneling Hamiltonian directly couples the left and right leads. For a single-eigenchannel system the hopping term t is just a constant, and the phase and bias are the differences between the left and right leads, i.e., φ0 = φL − φR and eV = μL − μR. The equation for the current can then be re-derived using the same procedure as explained in the preceding sections. Excellent quantitative agreement with the experimental data provides strong justification for the validity of the microscopic model in the quantum Hamiltonian theory.

Shapiro Effects and External Radiations

Another interesting application of the quantum Hamiltonian theory is the study of interactions with external electromagnetic radiation. The frequency range of interest in this case is the microwave region, owing to the intrinsic energy scale of typical superconducting energy gaps. The interplay between the AC Josephson effect in superconducting junctions under finite bias and the external radiation exhibits the phenomenon known as the Shapiro effect in the supercurrent behaviour [77]. Cuevas et al. [78] proposed that the effects of external radiation of frequency ωr can, to some extent, be modeled as an effective time-dependent voltage, Vac cos ωrt, superimposed on the bias that drives the existing AC Josephson oscillation. The total effective bias can be written as V(t) = V + Vac cos ωrt, and the time-dependent phase in the tunneling Hamiltonian becomes

$$\phi(t) = \phi_0 + \omega t + \alpha \cos \omega_r t$$

where α is a measure of the coupling strength to the external radiation. The Fourier series expansion of the current then takes the form

$$I(t) = \sum_{m,n} I_{nm}\, \exp\left[ i\left( n\phi_0 + n\omega t + m\omega_r t \right) \right]$$

For a superconducting QPC system with a featureless barrier, i.e., a vacuum region between two superconducting leads, Cuevas et al. managed to compute the supercurrent numerically with the use of Bessel basis functions. They found that the Shapiro effects take place at bias V = (m/n)ħωr/2e, where m and n are integers (the first few step positions are tabulated in the short sketch at the end of this subsection). The effects of the external radiation are basically current singularities that are distinct from the fundamental SGS of the QPC, since each singularity takes place over an infinitely short bias interval and appears as a prominent spike. Chauvin et al. [79] have confirmed this experimentally with very good agreement with the model, except in the very low bias region.

For a superconducting QPC with a quantum dot at the centre, the localized energy levels of the quantum dot exhibit further intriguing physics upon exposure to external radiation, in at least two ways. First, in the semiclassical limit the external field oscillates the entire set of localized energy levels in unison. Second, absorption and emission of photons also stimulate interlevel transitions as the electrons tunnel through the quantum dot. Both affect the MAR process inside the quantum dot and hence the supercurrent behaviour. However, in order to do a time-averaged analysis one needs to perform a multi-frequency Fourier transformation of the dynamical quantities, because the phase factor now depends on two frequencies.
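The promised sketch of the Shapiro step positions V = (m/n)ħωr/2e, in Python (the drive photon energy is illustrative, of the order of a 10 GHz microwave source; quoting it in eV lets the electron charge cancel):

    from math import gcd

    hbar_wr = 40e-6   # photon energy hbar*w_r in eV (illustrative, ~10 GHz)
    pairs = {(m, n) for m in range(1, 5) for n in range(1, 4) if gcd(m, n) == 1}
    for m, n in sorted(pairs, key=lambda p: p[0] / p[1]):
        v = (m / n) * hbar_wr / 2          # in volts, since the energy is in eV
        print(f"m/n = {m}/{n}: V = {v * 1e6:.1f} uV")

The gcd filter keeps only coprime (m, n), since for instance (2, 2) would duplicate the (1, 1) step.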
Such a multi-frequency Fourier transformation is non-trivial, particularly when the frequencies are non-commensurate, i.e., when their ratio is irrational. To slightly simplify the problem, one may replace one of the superconducting leads with a normal lead (an SNN system) and use the gauge in which the bias potential at the superconducting lead is zero, thereby eliminating the time-dependent term from the AC Josephson effect [26]. External radiation can be modeled semiclassically by adopting the typical dipole approximation [80]:

$$H_C(t) = \sum_{i,\sigma} \left[ \varepsilon_i + A\cos(\omega t) \right] c^\dagger_{i\sigma} c_{i\sigma} + \sum_{i \neq j,\sigma} B\cos(\omega t)\, c^\dagger_{i\sigma} c_{j\sigma}$$

In this case the Green's function of the quantum dot may be computed with the use of the Floquet basis [81], which was found to enable flexible modeling of quantum transitions in a multilevel quantum dot [26]. One can study the effects of localized-level oscillations by setting B = 0; a series of resonances appears due to the oscillations, and the energy spacing between these resonances is equal to the radiation energy, as can be seen in Figure 9. On the other hand, the effects of interlevel transitions can be studied by setting A = 0; the transitions were found to produce a splitting of the primary DC resonance when the radiation frequency is at the Rabi frequency. Furthermore, the split peaks were separated by an energy proportional to the interlevel hopping constant B. This provides the possibility of experimentally inferring the interlevel coupling strength from simple current measurements. In addition, the details of the quantum dot can greatly affect the transport behaviour, such as the symmetry of the quantum dot with respect to the leads [82], the relative energy difference between the localized level and the superconducting gap, electronic interactions, etc. [83]. If these additional factors are not carefully taken into account, physical deductions based on an incomplete model could lead to false conclusions.

The intrinsically small energy gap exploited in superconducting PCAR spectroscopy makes it a promising candidate for ultrasensitive sensors, making use of the AR process, which carries rich physics at the contacts. The AR process in NS systems can be used to probe the spin polarization of ferromagnetic materials with convenience and high precision compared to conventional methods. Theoretical developments in this area are mainly based on the BTK theory, which began earlier and has become a relatively mature theory for spin polarization measurements. However, some problems remain that relate to the various delicate details of the surface properties at the contacts, which have so far been treated phenomenologically. Atomic contacts such as STM tips and MCBJ have discrete eigenchannels, and the quantum Hamiltonian theory combined with NEGF enables rigorous descriptions of the complex transport properties of MAR. The method also has promising potential to be extended to a fully first-principles method by combining the existing first-principles superconductivity theory [71,72] with NEGF, a possible future research direction for anyone working in this field.

References and Notes

1. Onnes, H.K. The superconductivity of mercury. Comm. Phys. Lab. Univ. Leiden 1911, Nos. 122 and 124.
2. Josephson, B.D. Possible new effects in superconducting tunnelling. Phys.
Lett. 1962, 1, 251-253.
3. Soulen, R.J.; Byers, J.M.; Osofsky, M.S.; Nadgorny, B.; Ambrose, T.; Cheng, S.F.; Broussard, P.R.; Tanaka, C.T.; Nowak, J.; Moodera, J.S.; Barry, A.; Coey, J.M.D. Measuring the spin polarization of a metal with a superconducting point contact. Science 1998, 282, 85-88.
4. Ralls, K.S.; Buhrman, R.A.; Tiberio, R.C. Fabrication of thin-film metal nanobridges. Appl. Phys. Lett. 1989, 55, 2459-2461.
5. Muller, C.J.; van Ruitenbeek, J.M.; de Jongh, L.J. Conductance and supercurrent discontinuities in atomic-scale metallic constrictions of variable width. Phys. Rev. Lett. 1992, 69, 140-143.
6. Sharvin, Y.V. A possible method for studying Fermi surfaces. Sov. Phys. JETP 1965, 21, 655-656.
7. Upadhyay, S.K.; Palanisami, A.; Louie, R.N.; Buhrman, R.A. Probing ferromagnets with Andreev reflection. Phys. Rev. Lett. 1998, 81, 3247-3250.
8. Tedrow, P.M.; Meservey, R. Spin-polarized electron tunneling. Phys. Rep. 1994, 238, 173-243.
9. Johnson, P.D. Core Level Spectroscopies for Magnetic Phenomena; NATO Advanced Study Institute, Series B: Physics; Plenum Press: New York, NY, USA, 1995; Volume 345.
10. Scheer, E.; Joyez, P.; Esteve, D.; Urbina, C.; Devoret, M.H. Conduction channel transmissions of atomic-size aluminum contacts. Phys. Rev. Lett. 1997, 78, 3535-3538.
11. Cuevas, J.C.; Yeyati, A.L.; Martín-Rodero, A. Microscopic origin of conducting channels in metallic atomic-size contacts. Phys. Rev. Lett. 1998, 80, 1066-1069.
12. Scheer, E.; Agraït, N.; Cuevas, J.C.; Yeyati, A.L.; Ludoph, B.; Martín-Rodero, A.; Bollinger, G.R.; van Ruitenbeek, J.M.; Urbina, C. The signature of chemical valence in the electrical conduction through a single-atom contact. Nature 1998, 394, 154-157.
13. Landauer, R. Spatial variation of currents and fields due to localized scatterers in metallic conduction. IBM J. Res. Dev. 1957, 1, 223-231.
14. van Houten, H.; Beenakker, C. Quantum point contacts. Phys. Today 1996, 49, 22-27.
15. Naveh, Y.; Patel, V.; Averin, D.V.; Likharev, K.K.; Lukens, J.E. Universal distributions of transparencies in highly conductive Nb/AlOx/Nb junctions. Phys. Rev. Lett. 2000, 85, 5404-5407.
16. Ludoph, B.; van der Post, N. Multiple Andreev reflection in single-atom niobium junctions. Phys. Rev. B 2000, 61, 8561-8569.
17. Pierre, F.; Anthore, A.; Pothier, H.; Urbina, C.; Esteve, D. Multiple Andreev reflections revealed by the energy distribution of quasiparticles. Phys. Rev. Lett. 2001, 86, 1078-1081.
18. Chtchelkatchev, N.M. Transitions between π and 0 states in superconductor-ferromagnet-superconductor junctions. JETP Lett. 2004, 80, 743-747.
19. Makk, P.; Csonka, S.; Halbritter, A. Effect of hydrogen molecules on the electronic transport through atomic-sized metallic junctions in the superconducting state. Phys. Rev. B 2008, 78, 045414.
20. Böhler, T.; Edtbauer, A.; Scheer, E. Point-contact spectroscopy on aluminium atomic size contacts: longitudinal and transverse vibronic excitations. New J. Phys. 2009, 11, 013036.
21. Ji, S.-H.; Zhang, T.; Fu, Y.-S.; Chen, X.; Ma, X.-C.; Li, J.; Duan, W.-H.; Jia, J.-F.; Xue, Q.-K. High-resolution scanning tunneling spectroscopy of magnetic impurity induced bound states in the superconducting gap of Pb thin films. Phys. Rev. Lett. 2008, 100, 226801.
22. Marchenkov, A.; Dai, Z.; Donehoo, B.; Barnett, R.N.; Landman, U. Alternating current Josephson effect and resonant superconducting transport through vibrating Nb nanowires. Nature Nanotechnology 2007, 2, 481-485.
23. Blonder, G.E.; Tinkham, M.; Klapwijk, T.M. Transition from metallic to tunneling regimes in superconducting microconstrictions: excess current, charge imbalance, and supercurrent conversion. Phys. Rev. B 1982, 25, 4515-4532.
24. Haug, H.; Jauho, A.-P. Quantum Kinetics in Transport and Optics of Semiconductors; Springer: New York, NY, USA, 2008.
25. Bardeen, J.; Cooper, L.N.; Schrieffer, J.R. Theory of superconductivity. Phys. Rev. 1957, 108, 1175-1204.
26. Nurbawono, A.; Feng, Y.P.; Zhang, C. Electron tunneling through a hybrid superconducting-normal mesoscopic junction under microwave radiation. Phys. Rev. B 2010, 82, 014535.
27. de Jong, M.J.M.; Beenakker, C.W.J. Andreev reflection in ferromagnet-superconductor junctions. Phys. Rev.
Lett. 1995, 74, 1657-1660.
28. Prinz, G.A. Spin-polarized transport. Phys. Today 1995, 48, 58-63.
29. Prinz, G.A. Magnetoelectronics. Science 1998, 282, 1660-1663.
30. Woods, G.T.; Soulen, R.J.; Mazin, I.; Nadgorny, B.; Osofsky, M.S.; Sanders, J.; Srikanth, H.; Egelhoff, W.F.; Datla, R. Analysis of point-contact Andreev reflection spectra in spin polarization measurements. Phys. Rev. B 2004, 70, 054416.
31. Mazin, I.I.; Golubov, A.A.; Nadgorny, B. Probing spin polarization with Andreev reflection: a theoretical basis. J. Appl. Phys. 2001, 89, 7576-7578.
32. Strijkers, G.J.; Ji, Y.; Yang, F.Y.; Chien, C.L.; Byers, J.M. Andreev reflections at metal/superconductor point contacts: measurement and analysis. Phys. Rev. B 2001, 63, 104510.
33. Ji, Y.; Strijkers, G.J.; Yang, F.Y.; Chien, C.L.; Byers, J.M.; Anguelouch, A.; Xiao, G.; Gupta, A. Determination of the spin polarization of half-metallic CrO2 by point contact Andreev reflection. Phys. Rev. Lett. 2001, 86, 5585.
34. Ji, Y.; Strijkers, G.J.; Yang, F.Y.; Chien, C.L. Comparison of two models for spin polarization measurements by Andreev reflection. Phys. Rev. B 2001, 64, 224425.
35. Pérez-Willard, F.; Cuevas, J.C.; Sürgers, C.; Pfundstein, P.; Kopu, J.; Eschrig, M.; Löhneysen, H.V. Determining the current polarization in Al/Co nanostructured point contacts. Phys. Rev. B 2004, 69, 140502.
36. Panguluri, R.P.; Tsoi, G.; Nadgorny, B.; Chun, S.H.; Samarth, N.; Mazin, I.I. Point contact spin spectroscopy of ferromagnetic MnAs epitaxial films. Phys. Rev. B 2003, 68, 201307(R).
37. Geresdi, A.; Halbritter, A.; Tanczikó, F.; Mihály, G. Direct measurement of the spin diffusion length by Andreev spectroscopy. Appl. Phys. Lett. 2011, 98, 212507.
38. Rajanikanth, A.; Kasai, S.; Ohshima, N.; Hono, K. Spin polarization of currents in Co/Pt multilayer and Co-Pt alloy thin films. Appl. Phys. Lett. 2010, 97, 022505.
39. Xia, K.; Kelly, P.J.; Bauer, G.E.W.; Turek, I. Spin-dependent transparency of ferromagnet/superconductor interfaces. Phys. Rev. Lett. 2002, 89, 166603.
40. Gijs, M.A.M.; Bauer, G.E.W. Perpendicular giant magnetoresistance of magnetic multilayers. Advances in Physics 1997, 46, 285-445.
41. Grein, R.; Löfwander, T.; Metalidis, G.; Eschrig, M. Theory of superconductor-ferromagnet point-contact spectra: the case of strong spin polarization. Phys. Rev. B 2010, 81, 094508.
42. Octavio, M.; Tinkham, M.; Blonder, G.E.; Klapwijk, T.M. Subharmonic energy-gap structure in superconducting constrictions. Phys. Rev. B 1983, 27, 6739-6746.
43. Cuevas, J.C.; Martín-Rodero, A.; Yeyati, A.L. Hamiltonian approach to the transport properties of superconducting quantum point contacts. Phys. Rev. B 1996, 54, 7366-7379.
44. Yazdani, A.; Jones, B.A.; Lutz, C.P.; Crommie, M.F.; Eigler, D.M. Probing the local effects of magnetic impurities on superconductivity. Science 1997, 275, 1767-1770.
45. Roy, A.; Buchanan, D.S.; Holmgren, D.J.; Ginsberg, D.M. Localized magnetic moments on chromium and manganese dopant atoms in niobium and vanadium. Phys. Rev. B 1985, 31, 3003-3014.
46. Scholten, P.D.; Moulton, W.G. Effect of ion-implanted Gd on the superconducting properties of thin Nb films. Phys. Rev. B 1977, 15, 1318-1323.
47. Sazonova, V.; Yaish, Y.; Üstünel, H.; Roundy, D.; Arias, T.A.; McEuen, P.L. A tunable carbon nanotube electromechanical oscillator. Nature 2004, 431, 284-287.
48. LeRoy, B.J.; Lemay, S.G.; Kong, J.; Dekker, C. Electrical generation and absorption of phonons in carbon nanotubes. Nature 2004, 432, 371-374.
49. Smit, R.H.M.; Noat, Y.; Untiedt, C.; Lang, N.D.; van Hemert, M.C.; van Ruitenbeek, J.M. Measurement of the conductance of a hydrogen molecule. Nature 2002, 419, 906-909.
50. Stipe, B.C.; Rezaei, M.A.; Ho, W. Single molecule vibrational spectroscopy and microscopy. Science 1998, 280, 1732-1735.
51. Zhitenev, N.B.; Meng, H.; Bao, Z. Conductance of small molecular junctions. Phys. Rev. Lett. 2002, 88, 226801.
52. Agraït, N.; Untiedt, C.; Rubio-Bollinger, G.; Vieira, S. Onset of energy dissipation in ballistic atomic wires. Phys. Rev.
Lett. 2002, 88, 216803.
53. Park, H.; Park, J.; Lim, A.K.L.; Anderson, E.H.; Alivisatos, A.P.; McEuen, P.L. Nanomechanical oscillations in a single-C60 transistor. Nature 2000, 407, 57-60.
54. Marchenkov, A.; Dai, Z.; Zhang, C.; Barnett, R.N.; Landman, U. Atomic dimer shuttling and two-level conductance fluctuations in Nb nanowires. Phys. Rev. Lett. 2007, 98, 046802.
55. Dai, Z.; Marchenkov, A. Subgap structure in resistively shunted superconducting atomic point contacts. Appl. Phys. Lett. 2006, 88, 203120.
56. Harwood, L.M.; Moody, C.J. Experimental Organic Chemistry: Principles and Practice; Wiley-Blackwell: Hoboken, NJ, USA, 1989.
57. Keeler, J. Understanding NMR Spectroscopy; Wiley: Chichester, UK, 2010.
58. Andreev, A.F. Thermal conductivity of the intermediate state of superconductors. Sov. Phys. JETP 1964, 19, 1228-1231.
59. Tinkham, M. Introduction to Superconductivity; McGraw-Hill: New York, NY, USA, 1996.
60. de Gennes, P.G. Superconductivity of Metals and Alloys; Benjamin: New York, NY, USA, 1966.
61. Bardeen, J. Tunneling from a many-particle point of view. Phys. Rev. Lett. 1961, 6, 57-59.
62. Mahan, G.D. Many-Particle Physics; Plenum: New York, NY, USA, 1981.
63. Fetter, A.L.; Walecka, J.D. Quantum Theory of Many-Particle Systems; Dover: New York, NY, USA, 2003.
64. Meir, Y.; Wingreen, N.S. Landauer formula for the current through an interacting electron region. Phys. Rev. Lett. 1992, 68, 2512-2515.
65. Wingreen, N.S.; Jauho, A.-P.; Meir, Y. Time-dependent transport through a mesoscopic structure. Phys. Rev. B 1993, 48, 8487-8490.
66. Jauho, A.-P.; Wingreen, N.S.; Meir, Y. Time-dependent transport in interacting and noninteracting resonant-tunneling systems. Phys. Rev. B 1994, 50, 5528-5544.
67. Sun, Q.-F.; Guo, H.; Wang, J. Hamiltonian approach to the ac Josephson effect in superconducting-normal hybrid systems. Phys. Rev. B 2002, 65, 075315.
68. Sun, Q.-F.; Wang, J.; Lin, T.-H. Photon assisted Andreev tunneling through a mesoscopic hybrid system. Phys. Rev. B 1999, 59, 13126-13138.
69. Yeyati, A.L.; Cuevas, J.C.; López-Dávalos, A.; Martín-Rodero, A. Resonant tunneling through a small quantum dot coupled to superconducting leads. Phys. Rev. B 1997, 55, R6137-R6140.
70. Dolcini, F.; Dell'Anna, L. Multiple Andreev reflections in a quantum dot coupled to superconducting leads: effect of spin-orbit coupling. Phys. Rev. B 2008, 78, 024518.
71. Lüders, M.; Marques, M.A.L.; Lathiotakis, N.N.; Floris, A.; Profeta, G.; Fast, L.; Continenza, A.; Massidda, S.; Gross, E.K.U. Ab initio theory of superconductivity. I. Density functional formalism and approximate functionals. Phys. Rev. B 2005, 72, 024545.
72. Marques, M.A.L.; Lüders, M.; Lathiotakis, N.N.; Profeta, G.; Floris, A.; Fast, L.; Continenza, A.; Gross, E.K.U.; Massidda, S. Ab initio theory of superconductivity. II. Application to elemental metals. Phys. Rev. B 2005, 72, 024546.
73. Rogovin, D.; Scalapino, D.J. Fluctuation phenomena in tunnel junctions. Ann. Phys. 1974, 86, 1-90.
74. Nambu, Y. Quasi-particles and gauge invariance in the theory of superconductivity. Phys. Rev. 1960, 117, 648-663.
75. Avishai, Y.; Golub, A.; Zaikin, A.D. Quantum dot between two superconductors. Europhys. Lett. 2001, 54, 640-646.
76. Pala, M.; Governale, M.; König, J. Non-equilibrium Josephson and Andreev current through interacting quantum dots. New J. Phys. 2007, 9, 278.
77. Shapiro, S. Josephson currents in superconducting tunneling: the effect of microwaves and other observations. Phys. Rev. Lett. 1963, 11, 80-82.
78. Cuevas, J.C.; Heurich, J.; Martín-Rodero, A.; Levy Yeyati, A.; Schön, G. Subharmonic Shapiro steps and assisted tunneling in superconducting point contacts. Phys. Rev. Lett. 2002, 88, 157001.
79. Chauvin, M.; vom Stein, P.; Pothier, H.; Joyez, P.; Huber, M.E.; Esteve, D.; Urbina, C. Superconducting atomic contacts under microwave irradiation. Phys. Rev. Lett. 2006, 97, 067006.
80. Scully, M.O.; Zubairy, M.S. Quantum Optics; Cambridge University Press: Cambridge, UK, 1997.
81. Shirley, J.H. Solution of the Schrödinger equation with a Hamiltonian periodic in time. Phys.
Rev. 1965, 138, B979-B987.
82. Nurbawono, A.; Feng, Y.P.; Zhao, E.; Zhang, C. Differential conductance anomaly in superconducting quantum point contacts. Phys. Rev. B 2009, 80, 15.
83. Nurbawono, A.; Feng, Y.P.; Zhang, C. The roles of potential symmetry in transport properties of superconducting quantum point contacts. J. Comput. Theor. Nanosci. 2010, 7, 2448-2452.

Figures and Table

Figure 1. (a) Typical I-V curves in PCAR measurements. In the normal state (T > Tc) the current shows the typical ohmic response. After the PC becomes superconducting (T < Tc), non-magnetic systems (P = 0) show excess current due to the Andreev reflection (AR) process, while ferromagnetic systems (P = 1) show suppression of the AR process, leading to suppression of the current. (b) Normalized conductance for various polarizations in the clean metallic limit (Z = 0). The bias is in units of the superconducting energy gap.

Figure 2. Measured I-V curves for two different Al atomic point contacts having different sets of {τn}: a = {0.747, 0.168, 0.036} and b = {0.519, 0.253, 0.106}. Each τn is associated with a valence orbital of Al. The current and voltage are in reduced units; the current is normalized with respect to the total conductance measured from the slope of the I-V curve at high voltage, eV > 5Δ. Effectively exact fitting of the experimental data shows the reliability of the theoretical model based on the quantum Hamiltonian [11]. Adapted figure reproduced with kind permission from the authors [10]. Other details can be found in the original paper.

Figure 3. Detecting single-atom magnetic impurities of Mn and Cr on a Pb surface with a Nb STM tip. (a) Schematic view of the setup; (b) differential conductance (dI/dV) for a clean Pb surface; (c) for a Cr atom, where six peaks are detected; and (d) for a Mn atom, where four peaks are detected. The method proposes the use of SGS to identify atomic-size magnetic impurities on surfaces. Figures reproduced and adapted with kind permission from the authors [21].

Figure 4. Schematic view of the atomic configuration for measuring the vibrational modes of a Nb dimer fabricated with the MCBJ technique [22]. The Nb leads were adjusted with piezoelectric movements. The dimer was found to have three modes of vibration: longitudinal (along the dimer), transverse (up and down), and wagging (torsional) about its centre of mass. These modes affect the MAR tunneling process between the leads and were detected as current singularities inside and outside the superconducting gaps.

Figure 5. Multiple Andreev reflection (MAR) process in a symmetric superconductor-normal-superconductor (SNS) system with the normal region sufficiently thin to provide ballistic trajectories. The dark particles (electrons) are the antiparticles of the white particles (holes), and the reflection process repeats until they attain sufficient energy to overcome the superconducting gap Δ. The horizontal axes on the superconductor sides represent the density of states.

Figure 6. Band diagram for the N (left)-S (right) interface in the BTK model. The superconducting energy gap in reality is much smaller than the Fermi energy (Δ ≪ EF). Label e is the incident electron, a is Andreev reflection, b is normal reflection, c is electron-like transmission, and d is hole-like transmission. Figure adapted from reference [23].

Figure 7. A resonant tunneling system consisting of two superconducting leads and a quantum dot. The system is represented by the subsystem Hamiltonians H = HL + HT + HC + HR.

Figure 8. Plot of the time-averaged I-V and dI/dV curves for SNS QPC systems with a single-level quantum dot (εd = 0).
Other parameters are ΓL = ΓR = 0.5Δ and kBT = 0.1Δ. The rich subgap structure, mainly at low bias (eV < Δ), can potentially be used to identify the quantum dot's electronic structure and magnetic properties.

Figure 9. Effects of single-mode external radiation on SNN transport in the weak coupling limit. (a) Time-averaged current for a single-level quantum dot in an SNN system, showing the effect of single-level oscillations under external radiation. The radiation creates current resonances at intervals of ħω and preserves the main DC resonance at eV = 4Δ. (b) Time-averaged current for a symmetric two-level quantum dot in an SNN system, showing the effect of interlevel transitions due to the external radiation. In this case the radiation can only affect the transport when the frequency equals the energy difference between the localized levels, i.e., at the Rabi frequency ħω = (ε1 − ε2). The main DC resonance at 4Δ splits into two, and the separation between the split peaks is equal to 2B. This simple relationship provides a way to measure the interlevel coupling strength directly from simple current measurements [26].

Table 1. Coefficients A (Andreev reflection) and B (normal reflection).

E < Δ: A = Δ²/[E² + (Δ² − E²)(1 + 2Z²)²],  B = 1 − A
E > Δ: A = (ρ² − 1)/[ρ + (1 + 2Z²)]²,  B = 4Z²(1 + Z²)/[ρ + (1 + 2Z²)]²
semiclassical approximation

To some extent, quantum mechanics and quantum field theory are a deformation of classical mechanics and classical field theory, with the deformation parameterized by Planck's constant ℏ. The semiclassical approximation or quasiclassical approximation to quantum mechanics is the restriction of this deformation to just first order (or some finite order) in ℏ.

Applied to path integral quantization, the semiclassical approximation is meant to approximate the path integral

$$\int_{\phi \in \mathrm{Fields}} D\phi\; F(\phi)\, e^{iS(\phi)/\hbar}$$

by an expansion in ℏ about the critical points of the action functional S (hence the solutions of the Euler-Lagrange equations, hence the classical trajectories of the system). As usual for the path integral in physics, this often requires work to make precise, but at a heuristic level the idea is famous as the rotating phase approximation: in regions of field-space where S varies fast as measured in units of Planck's constant, the complex phases of the integrand exp(iS/ℏ) tend to cancel each other in the integral, so that substantial contributions to the integral come only from the vicinity of the critical points of S (the classical trajectories).

Notably, in the Schrödinger picture of quantum evolution, solutions to the Schrödinger equation

$$i\hbar\frac{d}{dt}\psi = \hat H\,\psi$$

(which characterizes quantum states given by wave functions ψ for Hamiltonian dynamics induced by a Hamilton operator Ĥ) are usefully considered to first (or any finite) order in ℏ. This method, known after (some of) its inventors as the WKB method, amounts to expressing the wave function in the form ψ = exp(iS/ℏ), where S is a slowly varying function, and solving the equation for S. Globally consistent such solutions to first order lead to what are called Bohr-Sommerfeld quantization conditions. For the formalization of this method in symplectic geometry/geometric quantization see at semiclassical state.

This WKB method makes sense for a more general class of wave equations. For instance, in wave optics it yields the short-wavelength limit of the geometrical optics approximation; here S is called the eikonal. Multidimensional generalizations of the WKB method appear to be rather nontrivial; they have been pioneered by V. Maslov, who introduced a topological invariant, called the Maslov index, to remove ambiguities of the naive version of the method.

Equivariant localization

Large N-limit in gauge theories

In radiation theory

• V. P. Maslov, Theory of perturbations and asymptotic methods (Russian), Izdat. Moskov. Gos. Univ. 1965.
• V. Guillemin, S. Sternberg, Geometric asymptotics, AMS 1977.
• M. F. Atiyah, Circular symmetry and stationary phase approximation, Astérisque 131 (1985) 43-59.
• N. Berline, E. Getzler, M. Vergne, Heat kernels and Dirac operators, Grundlehren 298, Springer 1992; "Text Edition" 2003.

For large N-limit compared to semiclassical expansion see
For the semiclassical method in superstring theory see
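For concreteness, here is the textbook one-dimensional version of the WKB computation just described (stationary Schrödinger equation; standard material, included only as a sketch). Substituting $\psi = e^{iS/\hbar}$ with $S = S_0 + \hbar S_1 + O(\hbar^2)$ into

$$-\frac{\hbar^2}{2m}\psi'' + V(x)\psi = E\psi$$

and collecting powers of $\hbar$ gives, at order $\hbar^0$, the eikonal (Hamilton-Jacobi) equation and, at order $\hbar^1$, the transport equation:

$$(S_0')^2 = 2m\big(E - V(x)\big) \equiv p(x)^2, \qquad S_1' = \frac{i}{2}\frac{S_0''}{S_0'}$$

so that $e^{iS_1} \propto p^{-1/2}$ and

$$\psi(x) \approx \frac{C}{\sqrt{p(x)}}\,\exp\!\left(\pm\frac{i}{\hbar}\int^x p\,dx'\right)$$

Demanding global single-valuedness of such solutions along a closed classical orbit yields the Bohr-Sommerfeld condition $\oint p\,dx = 2\pi\hbar\left(n + \tfrac{1}{2}\right)$.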
Quantum brain dynamics

In neuroscience, quantum brain dynamics (QBD) is a hypothesis to explain the function of the brain within the framework of quantum field theory. Although there are many blank areas in our understanding of brain dynamics, and especially of how it gives rise to conscious experience, only some consider quantum mechanics capable of explaining the enigma of consciousness. There is currently no experimental verification of this hypothesis; QBD is thus classified as protoscience.

Mari Jibu and Kunio Yasue (1995) were the first researchers who tried to popularize the quantum field theory of Nambu-Goldstone bosons as the one and only reliable quantum theory of fundamental macroscopic dynamics realized in the brain, with which a deeper understanding of consciousness might be obtained. The hypothesis originated with Ricciardi and Umezawa (1967) in the general framework of the spontaneous symmetry breaking formalism, and has since been developed into a quantum field theoretical framework of brain functioning called quantum brain dynamics (Jibu and Yasue 1995) and one of general biological cell functioning called quantum biodynamics (Del Giudice et al., 1986; 1988). There, Umezawa proposed a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu-Goldstone bosons characteristic of the spontaneous symmetry breaking formalism.

References

• Conte, E.; Todarello, O.; Federici, A.; Vitiello, F.; Lopane, M.; Khrennikov, A.; Zbilut, J.P. (2007). Some remarks on an experiment suggesting quantum-like behavior of cognitive entities and formulation of an abstract quantum mechanical formalism to describe cognitive entity and its dynamics. Chaos, Solitons and Fractals 31: 1076-1088.
• Del Giudice, E.; Doglia, S.; Milani, M.; Vitiello, G. (1986). Electromagnetic field and spontaneous symmetry breaking in biological matter. Nucl. Phys. B 275: 185-199.
• Del Giudice, E.; Preparata, G.; Vitiello, G. (1988). Water as a free electric dipole laser. Physical Review Letters 61: 1085-1088.
• Georgiev, D.D.; Glazebrook, J.F. (2006). Dissipationless waves for information transfer in neurobiology - some implications. Informatica 30: 221-232.
• Jibu, M.; Yasue, K. (1995). Quantum Brain Dynamics: An Introduction. John Benjamins, Amsterdam.
• Jibu, M.; Yasue, K. (1997). What is mind? Quantum field theory of evanescent photons in brain as quantum theory of consciousness. Informatica 21: 471-490.
• Ricciardi, L.M.; Umezawa, H. (1967). Brain and physics of many-body problems. Kybernetik 4: 44-48.
• Weiss, V.; Weiss, H. (2003). The golden mean as clock cycle of brain waves.
Chaos, Solitons and Fractals 18: 643-652.
Wednesday, August 31, 2016

Expanded Reproduction in an Abstract Capitalist Society

In the previous post, 'Simple Reproduction in an Abstract Capitalist Society', we looked at a basic model of how a capitalist society engaged in generalised commodity production can reproduce itself - but without growing. If you didn't read that post, now would be a good time - we use it below.

Unlike previous modes of production which were mainly focused on the production of use values, capitalism as practiced by capitalists is motivated purely by the search for surplus value (i.e. growth in capital). The production of specific use values is a matter of indifference as long as the commodities concerned can be sold in the market, thereby releasing their monetary value. Capitalists will not invest unless they think they can grow their capital, so a properly-functioning capitalist society is a growing one.

This has posed a problem for some Marxist economists. How, they argue, can capitalism grow when the workers are paid only a portion (v out of v+s) of the value they produce, and the capitalists - although they live well - need to keep most of their capital gains for further investment? This has been termed the 'underconsumption theory of capitalist crises' and was the subject of a historical dispute between Nikolai Bukharin and Rosa Luxemburg in the 1920s. Luxemburg thought that capitalism could only grow (via realising the value of an increasing mass of commodities) through vigorous expansion into new markets, and that this explained 'imperialism'. Bukharin put her right.

So here is Bukharin's model in spreadsheet form - click on image to make larger.

As before we have Department 1, making machines and raw materials, and Department 2, making consumables to keep workers and capitalists alive for another day's toil. We split the surplus value created by workers into three categories: that proportion consumed unproductively by the capitalists, (a); that proportion which is capital re-invested in machines, (δc); and that proportion invested in increased labour (δv). All of the variables here measure capital value, so that δv is increased capital allocated to wages. This could be more workers to use extra machines or raw materials, or more highly-paid (more highly-skilled and productive) workers to use more sophisticated machines.

The constraint between Departments 1 and 2 to ensure that reproduction can occur is a simple generalisation of the previous case: c2 = v1+a1 and δc2 = δv1. This equates the constant capital in Department 2 with the payments to workers and capitalists in Department 1 through the endless cycles of capitalist reproduction of the relations of production.

The model is very, very simple. It is assumed that the capitalists don't increase their consumption iteration-on-iteration .. though they probably would. Also, the incremental growth of constant and variable capital is held constant, although it would probably be increasing geometrically. These details don't invalidate the 'in principle' character of the model.

Bukharin comments: "In other words, the following grow:

• the constant capital of society,
• the consumption of the workers,
• the consumption of the capitalists (everything taken in values).

"In this connexion we will not make any further analysis of the relation in which this growth of the various above-listed values proceeds. This question needs to be treated separately.
"Here we must mention, even if only briefly, the following circumstances: along with the growth of production, the market of this production grows too, the market of means of production expands, and the consumer demand grows also (since, taken in absolute terms, the capitalists' consumption grows as well as that of the workers). "In other words, here the possibility is given of, on the one hand, an equilibrium between the various parts of the total social production and, on the other, an equilibrium between production and consumption. "In this process the equilibrium between production and consumption is for its part conditioned by the production equilibrium, i.e. the equilibrium between the various parts of the functioning capital and its various branches. "In the above analysis we neglect at first a series of highly important, specifically capitalist moments, e.g. money-circulation. "This resulted in a series of the most serious mistakes, it resulted further in the denial of the existence of contradictions within capitalism, finally a direct apology for the capitalist system, an apology which attempts – to use a Marxist word – to ‘reason away' the crises, the over-production, the mass misery and so on. ‘It must never be forgotten, that in capitalist production what matters is not the immediate use value but the exchange value, and in particular, the expansion of the surplus value.' Here, Bukharin is writing as a typical soviet Bolshevik, echoing Marx's extrapolations of the inevitable fate of capitalism. Reality was to turn out very differently, to the point where it is a genuine and profound question of Marxist analysis as to whether capitalism is indeed subject to structural crises (not just regular business cycles) which could catalyse a revolutionary dynamic towards a higher mode of production. The Marxist theory of crises will be examined here later (cf. Simon Clarke's book, "Marx's Theory of Crisis", available as a Word document here). Tuesday, August 30, 2016 Simple Reproduction in an Abstract Capitalist Society Link to online PDF Capitalism is characterised within the Marxist tradition as generalised commodity production; in Marx’s view, a correct understanding of the commodity encapsulates its fundamentals. Key is the concept of labour power and surplus value. In the following extract from Michael Heinrich’s “An Introduction to the Three Volumes of Karl Marx’s Capital” (Chapter 5, The Capitalist Process of Production), the term ‘means of production’ relates to machinery and raw materials. “With regard to the value of the newly produced commodities, the means of production and labour-power play completely different roles. “The value of the means of production consumed in the creation of a commodity constitutes part of the value of the newly produced commodity. If means of production are completely used up in the process of production, then the value of these means of production is completely transferred to the newly produced mass of commodities. “But if means of production such as tools or machines are not completely used up, then only a part of their value is transferred. If for example a particular machine has a life span of ten years, then one-tenth of its value is transferred to the mass of commodities produced within a year.  The portion of capital laid out in means of production will, under normal conditions, not change value during the production process, but a portion of its value will constitute a portion of the value of the commodities produced. 
“Marx calls this portion of capital constant capital, or c for short.

“Things are different with labour-power. The value of labour-power is not all transferred to the commodities produced. The value newly generated by the “consumption” of labour-power, that is, by labour expenditure, is what is transferred to the value of the newly created commodities.

“How much value the worker adds to the product of labour does not depend upon the value of labour-power, but upon the extent to which the labour expended counts as value-creating, abstract labour. The difference between the newly added value and the value of labour-power is the surplus value, or s.

“Or to put it differently, the newly added value is equal to the sum of the value of labour-power and surplus value. Marx calls the portion of capital used to pay wages variable capital, or v for short. This portion of capital changes value during the production process; the workers are paid with v, but produce new value in the amount of v + s.

“The value of a mass of commodities produced within a specific period of time (a day or even a year) can therefore be expressed as:

c + v + s

Here c indicates the value of the constant capital consumed, that is, the value of the raw materials and the proportionate share of the value of tools and machines, insofar as they are used.

“The valorisation of capital results solely from its variable component. The level of valorisation can therefore be measured by relating the surplus value to the variable capital: Marx calls the quantity s/v the rate of surplus value. It is simultaneously also the measure of the exploitation of labour-power.

“The rate of surplus value is usually given as a percentage. For example, if s = 40 and v = 40, then one does not speak of a rate of surplus value of 1, but rather of a rate of surplus value of 100 percent. If s = 20 and v = 40, then the rate of surplus value amounts to 50 percent.”

An exercise in Marxist economics is to show how capitalism can reproduce itself. In the most basic case, we look at an idealised steady state situation, where capitalists appropriate surplus value and consume it without re-investment. Expanded reproduction will be modelled in the next post.

The economy is divided into two departments: Department 1 is the sector which creates means of production (machines and/or raw materials); this department provides and reproduces the ‘c’ in commodity value. Department 2 produces means of consumption: food, shelter and all the other necessities for the survival and continuing existence of the workers and capitalists. It underpins the ‘v + s’ in commodity value.

For simple reproduction to occur, the following relation must hold*:

c2 = v1 + s1

This says that the value of the constant capital in Department 2 (means of consumption) must be equal to the variable + surplus value in Department 1. All other levels of capital may be chosen freely to reflect the size of the economy, the amount of constant capital and labour-power employed and the degree of exploitation**. The economy will turn over and reproduce itself provided the above relationship holds.

Here is an example spreadsheet, followed through 9 iterations. As you can see, it never changes and equivalent values (Exch) are exchanged between Department 1 and Department 2.
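The same bookkeeping can be checked in a few lines of Python. This is a minimal sketch using the classic illustrative numbers from the reproduction schemes (not the figures in the spreadsheet image), and it also covers the expanded scheme of the previous post, with the surplus split a/δc/δv held constant, as in that post's 'very, very simple' model:

    # Simple reproduction: condition c2 = v1 + s1; the economy repeats unchanged.
    c1, v1, s1 = 4000, 1000, 1000      # Department 1 (means of production)
    c2, v2, s2 = 2000, 500, 500        # Department 2 (means of consumption)
    assert c2 == v1 + s1
    for t in range(9):
        assert c1 + v1 + s1 == c1 + c2                 # Dept 1 output replaces c1, c2
        assert c2 + v2 + s2 == (v1 + s1) + (v2 + s2)   # Dept 2 output feeds everyone
    print("simple: exchange between departments each round =", v1 + s1)

    # Expanded reproduction (previous post): c2 = v1 + a1 and dc2 = dv1.
    d1 = dict(c=4000, v=1000, a=500, dc=400, dv=100)
    d2 = dict(c=1500, v=750,  a=600, dc=100, dv=50)
    assert d2['c'] == d1['v'] + d1['a'] and d2['dc'] == d1['dv']
    for t in range(5):
        for d in (d1, d2):
            d['s'] = d['a'] + d['dc'] + d['dv']        # surplus value and its three uses
            d['w'] = d['c'] + d['v'] + d['s']          # value of the department's output
        assert d1['w'] == (d1['c'] + d1['dc']) + (d2['c'] + d2['dc'])
        assert d2['w'] == sum(d['v'] + d['dv'] + d['a'] for d in (d1, d2))
        for d in (d1, d2):
            d['c'] += d['dc']                          # accumulation: capital grows
            d['v'] += d['dv']
    print("expanded: outputs have grown to", d1['w'], d2['w'])

Both loops run without tripping an assertion: the simple scheme is stationary, while the expanded one grows each round yet still clears both markets.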
Department 1 has to buy means of consumption for its own workers and capitalists (v1 + s1) from Department 2 (it makes its own constant capital c1); Department 2 has to buy its constant capital c2 from Department 1, but can feed etc. its own workers and capitalists (v2 + s2) itself. If you check the spreadsheet, it all adds up.

This proves that capitalist equilibrium (at least in this ever-so-simple model) is possible in principle; in reality capitalists make independent and non-centralised decisions, so coordination cannot be as exact as in the spreadsheet. This will eventually lead us into a theory of crises.

Things get a little more complex and interesting when we consider expanded reproduction, the typical case of a capitalist economy in growth. The subject of the next post: "Expanded Reproduction in an Abstract Capitalist Society".

* See 'Imperialism and the Accumulation of Capital', Bukharin 1925, for more details.

** 'Exploitation' is a pejorative word but should here be understood analytically. In any form of society which is economically growing, workers will receive less to spend than the value of their work-production. Otherwise, where is the infrastructure of civilisation to come from? In the case of capitalism, that 'surplus value' is appropriated privately by the capitalist. In feudalism it was appropriated mainly by the aristocracy, and in slave societies it was directly, coercively owned by the slave's master. In any society where humans work to society's benefit, there will be a social surplus product .. but it may not take the form of surplus value if labour is not commodified. Who knows whether that will ever come about?

Thursday, August 25, 2016

Proxima Centauri b

I found out about it yesterday, from Paul Gilster's blog, Centauri Dreams. It had all the right information:

The key question: observe it better with giant space telescopes (maybe a new push for the FOCAL mission using the sun as a gravitational lens) .. or send a probe? Might a country (the US or China, say?) embark upon a high-prestige, multi-decade-long programme to send such a mission? It encounters that old starflight paradox: the later launches - so much more technologically advanced - overtake the first ones.

I think it's possible that the youngest children on Earth might live to see close-in imaging of this planet .. and/or a mission that we might have figured out by then how to slow down.

Related: this sad little tale from Alastair Reynolds.

"Help Eliza, I'm in trouble!"

I'm something of a subscriber to the view: 'AI's the solution .. so what's the problem?' The problem under consideration today is that of child abuse, mentioned in this post about Internet paedophiles yesterday, and prominent in continuing revelations about abuse at Ampleforth College.

[Wikipedia: "Ampleforth College is a coeducational independent day and boarding school in the village of Ampleforth, North Yorkshire, England. It opened in 1802 as a boys' school, and is run by the Benedictine monks and lay staff of Ampleforth Abbey. Several monks and three members of the lay teaching staff molested children in their care over several decades. In 2005 Father Piers Grant-Ferris admitted 20 incidents of child abuse. This was not an isolated incident.
"The Yorkshire Post reported in 2005: "Pupils at a leading Roman Catholic school suffered decades of abuse from at least six paedophiles following a decision by former Abbot Basil Hume not to call in police at the beginning of the scandal."] Let me remind you about Eliza, the original chatbot developed by Joseph Weizenbaum. "ELIZA worked by simple parsing and substitution of key words into canned phrases. Depending upon the initial entries by the user, the illusion of a human writer could be instantly dispelled, or could continue through several interchanges. "It was sometimes so convincing that there are many anecdotes about people becoming very emotionally caught up in dealing with [ELIZA] for several minutes until the machine's true lack of understanding became apparent. "Weizenbaum's own secretary reportedly asked him to leave the room so that she and ELIZA could have a real conversation. "As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Eliza works by matching text input against a large database of templates. Each input template is linked to one or more possible output templates, with variables which can be instantiated to the substantive words from the input. Eliza might, for example, "respond to "My head hurts" with "Why do you say your head hurts?" A possible response to "My mother hates me" would be "Who else in your family hates you?" In addition to crafting a reply, Eliza could easily have updated a user-database with the information it was receiving. It's easy to see how this could be applied to helping victims of child abuse. A key design principle is that the abuser must not become aware that the child is passing on information: this rules out a tailored 'abuse app'. I suggest a special WhatsApp-connected chatbot with a widely publicised name - let's say Help!. The child contacts Help! on WhatsApp and the first thing he or she is asked to do is choose a name, say Peter, which is what will appear (instead of Help!) on their WhatsApp contacts list. I think the history of chats with Peter is going to have to vanish too, replaced with harmless confected froth. The child is typing to an Eliza-like chatbot (maybe more like IBM's Watson than Eliza) which has been trained on scripts from charities like Childline. Like Weizenbaum, we know that people of all ages are especially likely to confide in an AI agent. The database which Help! constructs is a transcript of alleged abuse. The real problem is what to do with it. No doubt it's encrypted and identity-protected but at some point someone has to assess whether this is a real or false allegation, and figure out how to proceed. But these are problems charities already have to deal with. I think they should move on the app. There's already one for carers. Wednesday, August 24, 2016 Croquet at the Bishop's Palace, Wells What a busy day today! I hoovered the house then walked across to the gym for my weight room induction session: bench-press machines and dumbbells; Clare got to mop the floor during this. Back home Clare cut my hair to something appropriate to iron-pumping and then we strolled to the Bishop's Palace for a late picnic lunch. Croquet in front of Wells Cathedral We're fed and watching the croquet intently I'm told that croquet only looks genteel; in reality it's hyper-confrontational and aggressive. What would I know? I thought they played it with flamingos. 
Blue Labour - so disappointing

I was really prepared to like Blue Labour and to that end bought this book.

Amazon link

I read about half the essays before finally choking on the conceptual equivalent of mushy bran. Here's the story from Wikipedia.

"Labour peer and London Metropolitan University academic Maurice Glasman launched Blue Labour in April 2009 at a meeting in Conway Hall, Bloomsbury. He called for "a new politics of reciprocity, mutuality and solidarity", an alternative to the post-1945 centralising approach of the Labour Party.

"The movement grew through a series of seminars held in University College, Oxford and at London Metropolitan University in the aftermath of Labour's defeat in the 2010 general election.

"Labour figures sometimes associated with the trend have criticised the New Labour administration of Tony Blair for having an uncritical view of the market economy, and that of Gordon Brown for being uncritical of both the market and the state.

"Jon Cruddas, the Labour MP for Dagenham and Rainham and the party's policy review co-ordinator, argued that New Labour's focus on 'the progressive new' resulted in the party embracing "a dystopian, destructive neoliberalism, cut loose from the traditions and history of Labour".

"Chuka Umunna, the Labour Shadow Business Secretary, believes Blue Labour "provides the seeds of national renewal".

Chuka Umunna, one of the high priests of metro-liberalism, a Blue Labour guru? Give me a break.

"Blue Labour suggests that abstract concepts of equality and internationalism have held back the Labour Party from linking with the real concerns of many voters, the concept of equality leading to an 'obsession with the postcode lottery' and internationalism ignoring fears of low-paid workers about immigration.

"Blue Labour, alternatively, emphasises the importance of democratic engagement and insists that the Labour Party should seek to reinvigorate its relationships with different communities across the nation, with an approach based on what historian Dominic Sandbrook describes as "family, faith, and flag".

The essays, highly overlapping and uniformly light on conceptual depth and rigour, feature:

• An ethical/cultural critique of liberal individualism
• An economic critique of rampant globalism
• A religious critique of secular atomisation.

So what is to be done? Way too many essays base themselves on Catholic Social Teaching (various Popes get extensive name checks), a naive idealisation of the increasingly dysfunctional German social and economic model (workers on boards, apprenticeships etc) and the communitarian history of the British labour and cooperative movements over the last couple of hundred years, going back to the Romantic tradition of John Clare (yes, he gets a name check too).

There is no analysis of just why metro-liberal politics, economics and culture have proved so hegemonic over the last few decades everywhere in the western world, and no plausible political programme for going forwards - nothing beyond tinkering at the edges (community organising, anyone?).

No, Blue Labour is a superficial nostalgia-fest built on sand.

I also bought this, which I have yet to read:

Amazon link

I have no hopes.

Update (28th August 2016). I've completed Rowenna Davis's book and it's an easier read than the essay collection above, as well as being way more insightful. It covers the 18 months stretching from the end of Gordon Brown's premiership to the Miliband brothers' election and Ed's first year.
During this time Blue Labour emerged and then became dominated by Maurice Glasman once he had been elevated (by Ed) to the Lords. Glasman emerges as an argumentative, self-willed and naive radical activist who fell out with many of his co-thinkers. Interestingly, these were mostly Oxford academics. The other major support groups for Blue Labour were community activists (Citizens UK, led by a patrician bunch of Oxford graduates) and faith groups (Catholics and Muslims predominated).

When the book ends, in 2011, Blue Labour is imploding due to Glasman's egotistical gaffes. Plainly, it recovered later, but in the age of Corbyn its profile today is invisible.

I'm left with the lasting impression that, despite the academic credentials of its founders, the Blue Labour movement is distinctly lacking both in intellectual depth (in general) and in any analysis of 21st century capitalist dynamics (specifically). It is possible that the Labour Party, as we know it, has no viable go-forward mission at all.

One small error on the Internet ...

This interesting Guardian article, "The takeover: how police ended up running a paedophile site", is discussed by Bruce Schneier. Two high-profile, security-savvy paedophiles were taken down based on the smallest of errors.

The paedophile site “... ran as a company or business,” Rouse says. Senior administrators took charge of individual boards, grouped around categories such as boys or girls, hardcore or non-nude. Users had to upload material at least every 30 days or risk exile. Each of its 45,000 accounts was ranked according to the quality of its output, with a “producer’s area” walled off to all but the most feted. At the top was one man, “effectively the CEO”. He regularly started his messages with the cheery greeting “hiyas”.

The article explains how that one idiosyncrasy was enough to identify him.

The second paedophile took exhaustive steps to cleanse his uploaded material of any identifying information.

"Access to the full suite of Huckle’s material provided the breakthrough. It was not what he photographed, but what he photographed with. Embedded in some of his images, overlooked when he swept the files of metadata, was the brand and model of his Olympus camera. A tiny clue – but enough.

"Officers exhaustively swept photography sites such as Flickr and TrekEarth for photos taken in south-east Asia using the make and model."

Following that flimsy thread was, it turned out, enough.

I've long been convinced that it's essentially impossible to stay secret on the Internet if a major intelligence agency is on your case. The article describes a fair amount of labour-intensive Internet searching by Australian police, but it's not hard to see how that could be mostly automated. And if the intelligence agency is allowed AI-based filtering of generic Internet streams, then security through obscurity doesn't really work either.

It would be interesting to know how agencies such as the NSA and GCHQ assess the Internet tradecraft of Islamic fundamentalists. Based on the levels of smartness and training we've heard about to date, I would guess that to any efficient agency with legal access to the right tools the wannabe terrorist is effectively saying, "Here's where I live. Come on in, rummage freely and stay as long as you want."

I think this explains the lack of successful attacks (touch wood) that we've seen in the UK over the last few years. It's certainly not for want of attempts. Of course, if your communications security agency is not up to speed - Hello, Belgium?
- even incompetent jihadis can still make it happen.

Tuesday, August 23, 2016

The worst army in the world? Not entirely

From The Telegraph today: "EU leaders want their own army, but can't agree on much else - five things we learned from the Renzi-Hollande-Merkel summit".

In fact the official statement seemed quite vague, but a European Army would plainly be a most ineffectual institution:

• No common language
• No common esprit de corps
• No cohesive leadership
• No common experience of combat
• No overall political master.

Would there ever be political agreement to get this 'army' to do anything involving real combat?

Look on the bright side. It's probably a way to get national armies in Europe to converge on similar doctrines, equipment and command-and-control protocols. European military cooperation will surely be necessary in the future, yet today the ability of European nations to fight effectively together (in theory within NATO) is lamentably poor.

The ideology of the 'United States of Europe' and 'no more European wars' is clearly alive, well and part of the rationale for this initiative. Although full federalism à la USA is never going to happen, the 'European Army' project should nevertheless aid national military renewal projects. In a Europe which is today pacifistic, disarmed and vaguely helpless (leaving aside the UK and France) this is to be welcomed.

At least we know that in principle the Europeans can fight proper state-on-state wars: two world wars proved that. Compare and contrast the situation with the Arabs. I guess we should be relieved.

A two-mile circular walk from Priddy

The walk goes along Dursdon Drove and the West Mendip Way. Parking at the Queen Victoria pub.

Here's the route: we walked clockwise starting from the Queen Vic pub (top left)

You could say it was hot - 32 degrees in the pub car park, and 26 on the open trail this afternoon. About an hour. Here are some pictures.

The southern part of the trail, looking east

Clare in camo-chic with wrap-shades

Strangely-hued sheep - could be a savanna shot

Your author recovering at the Queen Vic, Priddy

Monday, August 22, 2016

How will AIs become politically correct?

In my recent post, "Gloria Hunniford and the case for AI biometrics", I advocated the use of AI facial recognition systems in bank branches to check for scammers. They would be more effective than cashiers because 'AI systems don't have to be polite.'

But of course they do. Hardly a day goes by without some story appearing about an AI system which 'noticed' certain unfortunate connections and had to be tweaked. Some of these stories reflect genuine issues of training sets and algorithm-configuration; others expose the system's aspie-like tendency to blurt out uncomfortable truths. And there are plenty of them - truths which fall outside the famous Overton window.

I think it will be a very smart AI which can keep two sets of books: the accurate model of the world it generates from its deep learning, and the acceptable model which it has to use and pay homage to in public. Since the acceptable model is ideological rather than based on evidence, it's a non-trivial process to concoct the politically-correct version from the data trawled exhaustively from reality. How would an AI handle this?

Till we get AI self-deception really locked down, I see a long spell of high-pay-grade tweaking from specialists at Google, Facebook and the like, carefully guided by their in-house commissars.
Kamm, Corbyn and NATO

There's something about establishment vilification which makes a person reconsider old certainties. In today's Times, Oliver Kamm takes Jeremy Corbyn to task for his 'pacifism' (as usual with JC, nothing is really for sure). According to Kamm, what did Jeremy say?

"... asked at a leadership hustings how he would respond if a Nato ally was invaded by Russia, Mr Corbyn replied: “I would want to avoid us getting involved militarily by building up the diplomatic relationships and also trying to not isolate any country in Europe . . .” He added: “I don’t wish to go to war. What I want to do is achieve a world where we don’t need to go to war, where there is no need for it.”

We wearily recall Trotsky: "You may not be interested in war, but war is interested in you."

Kamm continues:

"President Putin’s regime has already unilaterally altered the boundaries of Europe by force on preposterous pretexts. Mr Corbyn has in effect announced to this aggressive and expansionist power that if he is in charge there will be no costs and no resistance if Russia adopts the same methods against allies to which we are bound by treaty obligations.

"A few weeks ago, Nato announced plans to increase its strength in Poland and the Baltic states. Under a Corbyn government, those democratic allies won’t be able to rely on us."

One should generally listen to Oliver Kamm's very trenchant, neoliberal/neocon views and then adopt precisely the opposite.

Going to war with a serious opponent (Russia) is an existential business. This is not something any state does lightly. You may recall that America, our supposed great ally, took its time coming to our assistance in both the first and second world wars. Strangely, they took account of their own national interests.

The problem with NATO is that its mutual self-defence treaty locks in a supposed commonality of national interests which cannot in fact exist. When NATO attacks some ultra-weak foe in a discretionary war, this is obscured - the war effort by some NATO members may be purely notional. This renders moot Oliver Kamm's point:

"Imagine that Mr Corbyn’s wish had been acted on. In 1998, the UN Security Council voted three times to identify the crisis in Kosovo as a threat to international peace and security and demand a response by the government of Slobodan Milosevic in Serbia. Milosevic’s forces intensified their persecution of Kosovan Albanians, driving hundreds of thousands from their homes.

"Belatedly, Nato launched a bombing campaign against Serbia in 1999. It thereby prevented a humanitarian catastrophe in Europe. Nato’s intervention rescued a threatened population and put Kosovo’s fate in the hands of a UN administration."

We say NATO, but in reality this was a coalition of the neocon-willing: no-one in the US or the UK feared retaliation from the Serbs. NATO was a convenient figleaf.

If Russia gets into a border war in Eastern Europe with a NATO member, does anyone really think NATO will automatically go to war on its behalf? General Sir Richard Shirreff pointed out in his recent book, "War with Russia", that for real wars NATO is an archaic, hollowed-out shell - an ineffectual paper tiger.

In truth we go to war when that's the only way to further advance our national interests. Treaties which would attempt to drag us into war against such interests are merely foolish pretences. Would it really be such a catastrophe to let NATO go? Then we (and the rest of Europe) could get real about what we do, and do not, existentially care about.
Saturday, August 20, 2016

Gloria Hunniford and the case for AI biometrics

Here is how The Telegraph covered it:

"Rip Off Britain presenter Gloria Hunniford was the victim of a £120,000 fraud by an imposter posing as the star.

"The 76-year-old Loose Women panelist's bank account was emptied just days after the woman arrived at a Santander branch with her "daughter" and "grandson".

"Personal banker Aysha Davis, 28, said the woman told her she had "a few bob" in there and had come to add the teenager as a signatory because she had been ill."

Here's a picture of the glamorous Gloria Hunniford and the rather-less-so scammer.

Should we condemn the unfortunate bank staffer Aysha Davis, who was charged (and rapidly acquitted) as an accomplice?

The percentage of people who engage with banks using fake photo-ID must be minuscule. Say 1 in 10,000. How many of the 9,999 bona fide customers happen to look rather unlike their photos? Quite a lot, I'd say. Even if only 1 in 100 genuine customers looks unlike their photo, honest mismatches will outnumber fraudsters by about a hundred to one. So how many bank staff are going to say, "You look nothing like this glamorous photo, so I'm going to have to run a security check," given the overwhelming chances that the mismatch is actually a false positive?

Davis said in court, "... as they had all the correct ID documents and paperwork it wasn't [my] job to pry for fear of causing offence."

What would work is AI facial recognition, which now works better than the human eye - and doesn't have to be polite. However, outfitting every bank branch with a camera linked to an AI database (let alone building the customer facial database in the first place) would be a hard sell to customers as well as a major capital cost. This scam merely cost Santander a £120,000 refund.

However, if there were an independent case for facial-ID biometrics in the banking industry (and pretty much everyone has access to a smartphone now, so there could easily be an app) then it looks rather more doable. I suggest that's the way to go.

In related news:

"Police officers in the US have arrested a fugitive after seeing through his elaborate disguise as an elderly man.

"They surrounded a house in South Yarmouth, Massachusetts, and ordered Shaun "Shizz" Miller out.

"He walked outside in disguise and when they realised the "elderly man" was actually the 31-year-old they were looking for, they arrested him.

"He had been on the run since being charged with heroin trafficking offences in April."

The police don't have to be polite ...

Friday, August 19, 2016

The 10,000 year view

Amazon link

Richard Feynman once wrote: "From a long view of the history of mankind - seen from, say, ten thousand years from now - there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electrodynamics."

What should we say about the other centuries?

The seventeenth century, in 10,000 years' time, will be remembered principally for Isaac Newton's laws of dynamics, F = ma, and universal gravitation, F = Gm₁m₂/r² - plus calculus, co-discovered with Leibniz.

The eighteenth century was not rich in epoch-spanning discoveries, but future historians of science will recall it for Rev. Thomas Bayes, whose profound theorem will power the great AI learning engines down the ages.

The nineteenth century we've already mentioned. Here are Maxwell's equations in the vector form he would not easily have recognised.
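In modern SI vector notation (E and B the electric and magnetic fields, ρ and J the charge and current densities):

    ∇·E = ρ/ε₀
    ∇·B = 0
    ∇×E = -∂B/∂t
    ∇×B = μ₀J + μ₀ε₀ ∂E/∂t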
The twentieth century is a cornucopia of fundamental science, but I think the most truly foundational, revolutionary and influential discovery has to be the Schrödinger equation, which explains .. well, almost everything around us. But I doubt the 10,000 year future will have forgotten Einstein - or Bohr, Heisenberg, Dirac, ... .

Sean Carroll has a related list of his seven favourite equations here.

Thursday, August 18, 2016

Reality and the MWI

From "Many Worlds? An Introduction" by Simon Saunders.

“As Popper once said, physics has always been in crisis, but there was a special kind of crisis that set in with quantum mechanics. For despite all its obvious empirical success and fecundity, the theory was based on rules or prescriptions that seemed inherently contradictory. There never was any real agreement on these matters among the founding fathers of the theory.

“In what sense are the rules of quantum mechanics contradictory? They break down into two parts. One is the unitary formalism, notably the Schrödinger equation, governing the evolution of the quantum state. It is deterministic and encodes spacetime and dynamical symmetries.

“Whether for a particle system or a system of fields, the Schrödinger equation is linear: the sum of two solutions to the equation is also a solution (the superposition principle). This gives the solution space of the Schrödinger equation the structure of a vector space (Hilbert space).

“However, there are also rules for another kind of dynamical evolution for the state, which is - well, none of the above. These rules govern the collapse of the wavefunction. They are indeterministic and non-linear, respecting none of the spacetime or dynamical symmetries. And unlike the unitary evolution, there is no obvious route to investigating the collapse process empirically.

“Understanding state collapse, and its relationship to the unitary formalism, is the measurement problem of quantum mechanics. There are other conceptual questions in physics, but few if any of them are genuinely paradoxical. None, for their depth, breadth, and longevity, can hold a candle to the measurement problem.

“Why not say that the collapse is simply irreducible, ‘the quantum jump’, something primitive, inevitable in a theory which is fundamentally a theory of chance? Because it isn’t only the collapse process itself that is under-specified: the time of the collapse, within relatively wide limits, is undefined, and the criteria for the kind of collapse, linking the set of possible outcomes of the experiment to the wavefunction, are strange.

“They either refer to another theory entirely - classical mechanics - or worse, they refer to our ‘intentions’, to the ‘purpose’ of the experiment.

“They are the measurement postulates - (‘probability postulates’ would be better, as this is the only place where probabilities enter into quantum mechanics). One is the Born rule, assigning probabilities (as determined by the quantum state) to macroscopic outcomes; the other is the projection postulate, assigning a new microscopic state to the system measured, depending on the macroscopic outcome.

“True, the latter is only needed when the measurement apparatus is functioning as a state-preparation device, but there is no doubt that something happens to the microscopic system on triggering a macroscopic outcome.

“Whether or not the projection postulate is needed in a particular experiment, the Born rule is essential.
It provides the link between the possible macroscopic outcomes and the antecedent state of the microscopic system. As such it is usually specified by giving a choice of vector basis - a set of orthogonal unit vectors in the state space - whereupon the state is written as a superposition of these. The modulus square of the amplitude of each term in the superposition, thus defined, is the probability of the associated macroscopic outcome.

“But what dictates the choice of basis? What determines the time at which this outcome happens? How does the measurement apparatus interact with the microscopic system to produce these effects? From the point of view of the realist the answer seems obvious. The apparatus itself should be modelled in quantum mechanics, then its interaction with the microscopic system can be studied dynamically. But if this description is entirely quantum mechanical, if the dynamics is unitary, it is deterministic. Probabilities only enter the conventional theory explicitly with the measurement postulates. The straightforwardly physicalistic strategy seems bound to fail. How are realists to make sense of this?

“The various solutions that have been proposed down the years run into scores, but they fall into two broadly recognizable classes. One concludes that the wavefunction describes not the microscopic system itself, but our knowledge of it, or the information we have available of it (perhaps ‘ideal’ or ‘maximal’ knowledge or information). No wonder modelling the apparatus in the wavefunction is no solution: that only shifts the problem further back, ultimately to ‘the observer’ and to questions about the mind, or consciousness, or information - all ultimately philosophical questions.

“Anti-realists welcome this conclusion; according to them, we neglect our special status as the knowing subject at our peril. But from a realist point of view this just leaves open the question of what the goings-on at the microscopic level, thus revealed, actually are. By all means constrain the spatiotemporal description (by the uncertainty relations or information-theoretic analogues), but still some spatiotemporal description must be found, down to the length-scales of cells and complex molecules at least, even if not all the way to atomic processes.

“That leads to the demand for equations for variables that do not involve the wavefunction, or, if none is to be had in quantum mechanics, to something entirely new, glimpsed hitherto only with regard to its statistical behaviour. This was essentially Einstein’s settled view on the matter.

“The only other serious alternative (to realists) is quantum state realism, the view that the quantum state is physically real, changing in time according to the unitary equations and, somehow, also in accordance with the measurement postulates.

“How so? Here differences in views set in. Some advocate that the Schrödinger equation itself must be changed (so as to give, in the right circumstances, collapse as a fundamental process). They are for a collapse theory.

“Others argue that the Schrödinger equation can be left alone if only it is supplemented by additional equations, governing ‘hidden’ variables. These, despite their name, constitute the real ontology, the stuff of tables and chairs and so forth, but their behaviour is governed by the wavefunction. This is the pilot-wave theory.
“Collapse in a theory like this is only ‘effective’, as reflecting the sudden irrelevance (in the right circumstances) of some part of the wavefunction in its influence on these variables. And once irrelevant in this way, always irrelevant: such parts of the wavefunction can simply be discarded. This explains the appearance of collapse.

“But for others again, no such additional variables are needed. The collapse is indeed only ‘effective’, but that reflects, not a change in the influence of one part of the quantum state on some hidden or ‘real’ ontology, but rather the change in dynamical influence of one part of the wavefunction over another - the decoherence of one part from the other.

“The result is a branching structure to the wavefunction, and again, collapse only in a phenomenological, effective sense. But then, if our world is just one of these branches, all these branches must be worlds. Thus the many worlds theory - worlds not spatially, but dynamically separated.”

Saunders' introductory chapter from the book, "Many Worlds?", underlines the central puzzle of quantum mechanics. What would reality have to be like to make the theory of quantum mechanics so incredibly accurate? Realists driven to the 'Many Worlds Interpretation' can still make no sense of it (Sean Carroll is a consistent defender, though). As Saunders observes on page 20,

“How does talk of macroscopic objects so much as get off the ground? What is the deep-down ontology in the Everett interpretation? It can’t just be wavefunction [...]; it is simply unintelligible to hold that a function on a high-dimensional space represents something physically real, unless and until we are told what it is a function of - of what inhabits that space, what the elements of the function’s domain are.

“If they are particle configurations, then there had better be particle configurations, in which case not only the wavefunction is real.”

And so I have bought "The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family" by Peter Byrne.

Wednesday, August 17, 2016

I'm with David

Daniel Finkelstein has this interesting comment piece in The Times today.

"When some years ago David Owen, one of the SDP’s founders, sent me an early draft of his memoirs, I understood for the first time that he had seen the SDP as essentially doomed — certainly in deep trouble — before I even joined it at the beginning of 1982. What had doomed it, in his view, was the decision to form a tight alliance with the Liberal Party.

"Owen’s conception of the SDP, which was formed in 1981, is that it would be a tough-minded, hawkish party of the left. It would appeal to an aspirational working class, particularly in the north, who had tired of bureaucratic socialism and saw the point of Margaret Thatcher, but were not Tories.

"When the future Labour foreign secretary was a student working on a building site he had been struck by the reaction of his fellow workers to the Suez crisis. It had been instinctively nationalist, uninterested in political protocol, and robust. It was these people he wanted the SDP to appeal to.

"Roy Jenkins, former Labour chancellor but also biographer of the Liberal prime minister HH Asquith, wanted a centre party that reflected his own liberal instinct. This would be a southern party of the middle class, disdainful of Thatcher, fastidious rather than bulldog-like on international issues, avowedly centrist.
"Everything about this Jenkins view — the electoral relationship with the Liberals in particular, but also the claret-drinking image — drove Owen crazy. But for all that he later did to shape the party, Owen was right that by 1982 Jenkins had won the battle. The SDP would be a liberal party. It lost almost all its northern and working-class seats, was not able to compete in the south because the Liberal Party took all the best constituencies, and ended up being swallowed up by its partner. "Owen and Jenkins were rowing over whether liberalism and being a Labour moderate or even a centrist were the same thing. Jenkins felt that practically and philosophically they were. Owen felt that practically and philosophically they were not." No-one in the current leadership cadre of the Labour Party, neither left, centrist or right, espouses David Owen's political views - with the possible exception of John Mann. And so they will not reconnect with their millions-strong working class roots. If Theresa May can find a way to overcome the respectable working class's tribal anti-Toryism, Labour are electoral toast till the end of time. Blue Labour website and Wikipedia article. Cheddar reservoir This morning's walk in our local area of outstanding natural beauty. Cheddar reservoir, a two mile circumference walk with birdlife. And if you swim near the pumps, you'll get to visit all of Bristol Like my wrap-around mirror shades? As we were returning to our point of departure after an hour, I could neither see our car nor recognise that strange pumping tower ahead. I speculated: 'This circular walk has an odd, helical topology - as you progress around you slip into an alternate universe, one which happens to lack our original car park." Clare was quick to pooh-pooh this suggestion - and within the next few yards I could see that I was in fact mistaken. That, and the linearity of quantum mechanics. Why the Great Stagnation? What next? This is what my CV says in the couple of years leading up to the great crash of 2008. Programme Management - BT Wireless Cities: May 2006 - Sep 2007 A lucrative sixteen month contract, rolling out urban WiFi for BT across major cities in the UK. Network architecture consultant to Dubai World Central: Jan 2008 - July 2008 A seven month contract in Dubai designing the network from scratch for a new ultra-wired airport/city complex. We completed the design and then the crash arrived .. and we flew home. Network architecture consultant to Media City, Manchester: Dec 2008 - Jan 2009 Security Accreditation (IL2/IL3) at C&W and other clients: Jan 2010 - Sep 2010 Managed an RFQ for an international law firm, London: Jan 2012 - July 2012. These were worthwhile but small-scale pieces of work. After that, things did not get any better. I was pleased to retire from network design in March 2014. The UK economy normally rebounds from dips within three years (12 quarters) as this chart shows, but as you can see, the 2008 crash was something special. The rate of growth was clearly negative for about a year and a half (six quarters) and after that - anaemic. This chart - same source - looks at the rate of change of GDP (ie growth) over the period 1949-2012 (during the last 3 years UK annual GDP growth has fluctuated between 2% and 3%). I read that financial crises always exhibit a longer recovery period, as people have to pay down their debts, but it's now been eight years since the big crash and growth rates are still subdued. What's going on? 
Larry Summers suggested an answer in his essay, "The Age of Secular Stagnation".

"Had the American economy performed as the Congressional Budget Office forecast in August 2009 - after the stimulus had been passed and the recovery had started - U.S. GDP today would be about $1.3 trillion higher than it is."

So what went wrong?

"When significant growth is achieved, meanwhile - as in the United States between 2003 and 2007 - it comes from dangerous levels of borrowing that translate excess savings into unsustainable levels of investment (which in this case emerged as a housing bubble)."

But why are people so keen to save, rather than invest?

"Greater saving has been driven by:

• increases in inequality and in the share of income going to the wealthy,
• increases in uncertainty about the length of retirement and the availability of benefits,
• reductions in the ability to borrow (especially against housing), and
• a greater accumulation of assets by foreign central banks and sovereign wealth funds.

"Reduced investment has been driven by:

• slower growth in the labor force,
• the availability of cheaper capital goods, and
• tighter credit (with lending more highly regulated than before).

So how do we get out of this? It seems that austerity (clamping down on public expenditure to claw back massive Government debt) has few friends left. Summers' remarks are addressed to a US audience, but are equally applicable to the UK.

Finally we come to the politics. With our ever-expanding university sector, we're seriously in the business of elite overproduction. New graduates, particularly those articulate, idealistic young people with arts degrees, can't get high-status, well-rewarded jobs. They naturally channel their unhappiness into political activism. I'm interested in what Peter Turchin is going to make of all this in September with his new book, "Ages of Discord".

Many of the recoveries we have seen in the past were driven by massive investment in new, productivity-raising technologies: electrification, petrol engines, scientific management, computers and the Internet. In every case, it took a good few years for the new technologies to develop, be perfected and for people to learn how to use them to increase productivity. It was only then that the economic tipping point occurred.

The next revolution will be driven by new technologies such as AI, new sensors, robotics and VR, bound together by high-speed ubiquitous networks; also genetic engineering and genomics. These technologies will surely launch a huge boom, but plainly we're in the earliest days. So I expect a good decade or so of bouncing around in 'the new normal' before the next lift-off, deficit spending or no.

Tyler Cowen of Marginal Revolution (amongst many others) has written about this too.

Tuesday, August 16, 2016

Last Friday was our penultimate day in Hereford. The rest of the house had departed to Symonds Yat to do some canoeing, leaving the house to quiet and to me. I sat in the sunshine and listened to this. As a consequence, it's now become an Ohrwurm.

An upcoming post will talk about "The Great Stagnation" for three reasons:

1. We're living through it and it's blighting many lives
2. It seems difficult to understand why we're stuck in it
3. Leftist groups have characterised it as the final crisis of capitalism.

My point of departure is Larry Summers' influential essay on 'The Age of Secular Stagnation'.
Monday, August 15, 2016

Paul Mason and PostCapitalism

There are just a few people whose public personas I have rather taken against. Number one on my current list is Owen Smith. He, you may recall with difficulty, is the insincere and glib little weasel who resembles a mini-me version of François Hollande and who is attempting, for reasons of petty ambition, a doomed campaign to displace Jeremy Corbyn. We move on.

Whenever I saw Paul Mason on BBC's Newsnight or Channel 4 News, I observed my twitching hand reach unconsciously for the channel-flip, impelled by some combination of his northern-lad-with-a-chip-on-his-shoulder schtick, his self-righteous anger at every policy trying to fix the economy, and a fanboy gullibility as regards the modish antics of Occupy and every other middle-class angst-fest. It was Kevin the Teenager reprised at fifty-something.

I understand the type only too well: too smart, idealistic and empathic to fit in with his working-class contemporaries; too working-class to be accepted into the well-born elite. His perpetual estrangement from power and influence fuels an inchoate rage, channelled into left-wing rebellion.

Paul Mason was a Trotskyist in one of the more cerebral outfits, Workers Power, which has now dissolved/entered into the Corbynista mass to rebuild under the motif of Red Flag. But Paul Mason noticed that none of the Trotskyist predictions of proletarian revolution ever came good. Being of independent mind, he conceptualised an alternative road to communism; at least one reviewer bathetically called him 'the new Marx'.

Amazon link

His book is uneven: historical discussions of subsequent misinterpretations of Marx echo those of Michael Heinrich (blogged about here), who discussed 'worldview Marxism' as a coarsening of Marxist theory. Mason believes, I think correctly, that Lenin and Trotsky fundamentally misunderstood what really happened in Russia in 1917, documenting the reasons for that failure in convincing detail.

It's when he starts to advance his own ideas for post-capitalist transition that wishful thinking and blind hostility come to the fore. Owen Hatherley's review nails it.

"The organised factory proletariat in the US, Europe and Japan never carved out a path to post-capitalism – or socialism as it was then known – but Occupy, Maidan, Tahrir Square, and even the protests against the Workers’ Party government in Brazil, ‘are evidence that a new historical subject exists. It is not just the working class in a different guise; it is networked humanity.’

"The ‘new gravedigger’ produced by capitalism consists of ‘the networked individuals who have camped in the city squares, blockaded the fracking sites, performed punk rock on the roofs of Russian cathedrals, raised defiant cans of beer in the face of Islamism on the grass of Gezi Park’ etc. This is kitsch, but more significant is Mason’s failure to analyse the political content of the movements of the young.

"Not a lot of people in any of them considered ‘capitalism’ their main enemy, probably less so than the average striker in the 1930s or 1970s. They are a disparate bunch, from all manner of class backgrounds, advocating various positions across the political spectrum, but all united apparently by their use of Twitter and their distrust of ‘old elites’ and hierarchies.
"Since they carry no baggage, it isn’t worth investigating why, say, the protests in Brazil so easily passed over into racism, why some in Tahrir Square preferred a new general to an elected Islamist, why both sides in Ukraine’s unrest had a crucial far-right element, or why the descendants of Occupy in London and New York now find themselves campaigning for ageing, old-school leftist social democrats. "Mason sweeps all this away on a tide of goofy utopianism." Taking Wikipedia as your model for post-capitalist relations of production is to completely miss the intrinsically parasitic, hobbyist and career-furthering (let alone corporate) nature of so much open-source activity. It's never going to shake its shoulders and sweep aside all those mundane commoditised relations of production which coordinate activities to keep us fed, sheltered, defended, powered-up and online. Mason would have been more acute had he observed that, while Marx gave a very good conceptual account of capitalism in terms of systematised and recurrent patterns of human economic and political activity (process rather than structural models, if you will), he had considerably less to say about why capitalism was either inherently bad news for humanity or precisely why it would necessarily create the conditions for its own supersession. Due to the inadequate development of the productive forces it inherited, capitalism was truly awful for its human participants (disproportionately for the working class, of course) in Marx's time and as recently as the second world war - but since then it has, by historical standards, not been so bad. Ask the Chinese or the Vietnamese. And don't blame capitalism for Africa or the Middle-East. Capitalism still seems pretty efficient at developing the forces of production as Mason, a fan of automation, is happy to concede. So what's going to light the fires of mass revolutionary zeal? Apparently nothing - so we're left with incremental socialism-creep within the interstices of capitalism, Good luck with that. Good try, Paul, but we need look elsewhere for possible paths to humanity's future. Free Weights vs Resistance Machines So this is the question I have recently been asking myself: for four years I have done the circuit of aerobic and resistance machines at the gym .. and resolutely walked past the weight room. Am I missing something important? 'Dr. Mercola' writes, "The primary difference between free weights and machines, however, is the fact that when using free weights, you can move in three dimensions: forward, backward, horizontally, and vertically. This is important, because this is how your body normally moves in daily life. "When you use free weights, you therefore end up engaging more muscles, as you have to work to stabilize the weight while lifting it. The drawback is that you’re at an increased risk of injury unless you maintain proper form. "Machines, on the other hand, are fixed to an axis that will only allow you to move in one or two planes. If used exclusively, this could lead to a lack of functional fitness, which can translate into injuries outside the gym. "Simply stepping off the sidewalk could result in a knee or ankle injury if stabilizing muscles have been ignored in favor of only working your larger muscle groups. On the upside, a machine will allow you to lift heavier weights, and allow you to target specific muscle groups." Other commentators noted that resistance machines tend to under-develop the 'core', which includes the abdominal and back muscles. 
Since I have had the odd twinge (some might call it a weakness) in my back, I am seriously thinking about doing some free-weight training. But it's so complicated! I don't know anything about weights, apparatuses or forms. Still, when in doubt, buy the book.

Obviously free weights can be done at the gym, but another thought occurred to me. As we walked back from our Bishop's Palace picnic today, I subtly murmured to Clare, "If you like, you can use my weights, when they arrive." (I have not in fact ordered any weights; the ground must first be prepared).

This is what I heard: the house is not to be made into a gym; the last thing needed is a testosterone-heavy male around (I thought there already was one); and some remark about sweat I didn't quite catch. No real problems then. I emphasised that weight training is mostly kind and gentle, like yoga.

Michael O'Neal, eat your heart out; I will pump iron!!

Bishop's Palace, Wells and the Dragon's Lair

A 'Spanish Plume' of warm air has sent us scampering to the Bishop's Palace today for a picnic lunch. They have just opened the 'Dragon's Lair' for the summer holidays.

The Dragon - 'Come hither, tasty children!'

Clare explores the maze, where it's hard to get lost

The author with picnic

The garden fronting the cathedral

Hereford Cathedral - British Camp - Malvern Hills

We were away a few days last week near Hereford for my sister-in-law's 50th wedding anniversary. Here are some pix.

Clare in the Chained Library at the cathedral

From British Camp looking north over the Malvern Hills

This was, sadly, about as high as we got

British Camp is a spectacular Iron Age hill fort overlooking a reservoir. It's quite high, and our orbit around the Malvern Hills Hotel, with its beers, coffees and cream teas, was insufficiently eccentric to get us to the top.
Schrödinger equation: macro level

Sep 7, 2007 #1

Is it possible, in theory, to describe a macroscopic object with the Schrödinger equation (its location, for example)?

Sep 7, 2007 #2 (Science Advisor, Gold Member)

Yes - there is no "scale" in the SE. The main problem is that you of course also need a relevant Hamiltonian for what you are modeling, preferably one that can be used to solve the problem, and for most macroscopic objects the Hamiltonian is very complicated. In reality, most people tend to prefer the Heisenberg (or, more generally, interaction) picture when they model 'simple' macroscopic objects such as superconducting devices, for various technical reasons (mainly because it is easier to handle dissipation), but you can always re-write this as an SE.

Also, note that solid-state qubits are quite large - several square microns (which doesn't sound like much, but you can, for example, easily see them in a decent optical microscope) - and they are quite well described by a 'simple' SE that can actually be solved.

Sep 7, 2007 #3

Thanks! In another discussion I'm involved in I stated rather confidently that it is indeed possible, but then it suddenly struck me that my memory might be at fault.
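A minimal illustration of the closing point in #2 - a qubit treated as a two-level system whose Schrödinger equation can be solved exactly (the 2×2 Hamiltonian below is made up for illustration; hbar = 1):

    import numpy as np

    H = np.array([[0.0, 0.5],
                  [0.5, 1.0]])                  # hypothetical two-level Hamiltonian
    psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the first basis state
    E, V = np.linalg.eigh(H)                    # diagonalise once, then
    for t in (0.0, 1.0, 2.0, 3.0):              # psi(t) = V exp(-iEt) V^dag psi(0)
        psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
        print(t, np.round(np.abs(psi_t) ** 2, 3))  # occupation probabilities oscillate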
Wednesday, March 14, 2007

Immanuel Kant "Groundwork of the Metaphysic of Morals"

*Not an exact quote from Schopenhauer.

gregvw said...
Supposing that humans have free will, that humans are fundamentally organic machines, and that there exist life forms which are too simplistic to have a free will, what is the minimum amount of complexity of the living mechanism necessary to support free will? Is there a discrete cutoff where one molecule is the difference between a sentient being and an automaton? Or is there a veritable continuum between the two, where increasing complexity supports increasing freeness of will? If so, what is partial free will? Or is it only the depth of the illusion of free will which varies with biocomplexity? Or is free will built into the level of proteins and we are simply incapable of observing it? This seems pretty far-fetched.

Maybe free will is just how we perceive the law of averages. The superposition of many random or individually unresolvable phenomena, when taken together, appears as a distinct entity. Simple organisms may not have a sufficient number of processes to form a cleanly observable image of sentience. What if the ghost in the machine is ultimately statistical? Well, that's enough babble from me.

Rufus said...
"What if the ghost in the machine is ultimately statistical?" I think that's where a lot of thought on the subject is going. The real argument for free will is that it seems to coincide with our experiences. But, unfortunately, that's still a pretty thin reed to hang your hat on.

gregvw said...
It makes "Extraordinary Popular Delusions and the Madness of Crowds" seem less extraordinary.

Rufus said...
I guess you'd be the one to ask this though: Is the clockwork model of the universe accurate? Wasn't the point of the 20th century Weirdo Physics (my term) that the physical laws are more like guidelines? Or is that wrong too?

gregvw said...
Bonus points for a PotC reference. I am not sure if the answer to your question about whether the universe fundamentally behaves according to deterministic laws is known. It is this sort of head-scratcher that made me go into mathematics instead of physics.

You have reminded me, though, that I discovered an interesting result just recently. As I may have mentioned, I am working on developing optimal control methods for quantum mechanical problems. In particular, the current problem involves separating a single blob of Bose-Einstein condensate into two distinct blobs with a time-varying magnetic field as the control mechanism. To do this we set the initial and target (final) probability distributions for the particles and then solve for the time evolution of the system with various functions modulating the magnetic field, to find the function which best gives the desired "two lump" behavior at the end of the control interval.

I discovered that when you decide that the global phase of the condensate is irrelevant, there is a plethora of suitable control functions which bring the initial state to the desired final state. (Plot)

What is especially interesting to me about this is that, to reach the target state, there is a much wider range of acceptable solution curves near the beginning of the simulation (the curve must start at 0 and end at 1), whereas the distribution of curves is much narrower at the end time. Translation: the present is more particularly dependent on the recent past than it is on the distant past.
Of course, we can observe this phenomenon in everyday life, but I know of no specific quantum-level theory which says that it should be the case.

gregvw said...
That link did not quite work, it should finish with

Rufus said...
I once sat down and charted out all of the decisions I had made during the day that had led me to the present moment. Basically, I drew out all of the other possible choices as points and lines. Anyway, what I ended up with was a single present point at the end of a branching root-like structure that became very wide and diverse as you moved back in time. So it definitely is observable in everyday life.

gregvw said...
Sure, so what the hell does it mean that it occurs on the level of particles? There is something hidden in the Schrödinger equation that produces this behavior.

Rufus said...
It's a tough one, isn't it? Any hypothesis that I could come up with would sound completely bat shit.
NEWTON, Ask A Scientist!

Name: Kim
Status: student
Grade: n/a
Location: NH
Country: USA
Date: Winter 2012-2013

I am in 10th grade and have developed a real interest in quantum physics, and my question has to do with the Copenhagen interpretation - which states that the act of measuring a quantum state causes the quantum wave function to collapse, and that it is not just that the scientist does not know which state it is in, but rather that the physical reality is not determined until the act of measurement takes place. How do you prove that something does not have a state until you observe or measure it in the first place? What follows is how I think one could explain this. Can you correct me where I am wrong?

OK, my explanation... You have some fancy lab equipment that takes one particle with spin = 0 and from this it creates two particles, X and Y. One of these particles HAS to have spin -1/2 and the other +1/2, but you do not know which one is which (until you look at one of them). If you measure X to be +1/2 then you know Y has to be -1/2. If you do not first look at or measure X and instead send it through a "particle flipping" device (which reverses whatever original spin it started with), then at this point you would think both particles should now have the same spin, because you flipped one (and have not looked or measured yet)... but when you now measure flipped particle X, Y will have instantaneously changed to be the opposite of X (when you would have thought it would be the same after flipping the other), so in essence its state was undetermined before the flipping/measuring.

So is this the correct way of explaining the Copenhagen interpretation? I am thinking even though YOU do not know what the particle flipper did, the particle flipper knows, so would this break down the function? Or does it have to be the photons entering your eyeball from the resulting measurement? I just cannot understand how scientists KNOW that something does not have a state before being observed. Thanks for any clarification.

Hi Kim,

Thanks for the question. First let me say that a particle can exist in two states at the same time. This is called a superposition. Let A be a wave function for the first state and let B be a wave function for the second state. A wave function satisfies the Schrödinger equation. A superposition exists because the sum A + B is also a solution to the Schrödinger equation, as the S.E. is a linear differential equation.

When you carry out a measurement on a superposition state (A + B) you will collapse the state into either A or B. The act of measurement forces the system to be in state A or state B, as measurements will ONLY find so-called eigenstates - this is one of the postulates of quantum mechanics.

An example of the construction of two opposite spins (mentioned in your text below) is pair production, in which a 1022 keV photon generates an electron and a positron.
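To spell out the linearity claim: if A and B each satisfy the time-dependent Schrödinger equation iħ ∂ψ/∂t = Hψ (writing H for the Hamiltonian operator), then so does any superposition aA + bB, since both ∂/∂t and H are linear:

    iħ ∂(aA + bB)/∂t = a·iħ ∂A/∂t + b·iħ ∂B/∂t = aHA + bHB = H(aA + bB)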
Monster waves blamed for shipping disasters

When the cruise ship Louis Majesty left Barcelona in eastern Spain for Genoa in northern Italy, it was for the leisurely final leg of a hopscotching tour around the Mediterranean. But the Mediterranean had other ideas. Storm clouds were gathering as the boat ventured eastwards out of the port at around 1pm on March 3, 2010. The sea swell steadily increased during the first hours of the voyage, enough to test those with less-experienced sea legs, but still nothing out of the ordinary. At 4.20pm, the ship ran without warning into a wall of water 8 metres or more in height. As far as events can be reconstructed, the boat’s pitch as it descended the wave’s lee tilted it into a second, and possibly a third, monster wave immediately behind. Water smashed through the windows of a lounge on deck 5, almost 17 metres above the ship’s water line. Two passengers were killed instantly and 14 more injured. Then, as suddenly as the waves had appeared, they were gone. The boat turned and limped back to Barcelona.

A few decades ago, rogue waves of the sort that hit the Louis Majesty were the stuff of salty sea dogs’ legends. No more. Real-world observations, backed up by improved theory and lab experiments, leave no doubt any more that monster waves happen – and not infrequently. The question has become: can we predict when and where they will occur?

Science has been slow to catch up with rogue waves. There is not even any universally accepted definition. One with wide currency is that a rogue is at least double the significant wave height, itself defined as the average height of the tallest third of waves in any given region. What this amounts to is a little dependent on context: on a calm sea with significant waves 10 centimetres tall, a wave of 20 centimetres might be deemed a rogue.

If that seems a little lackadaisical, for a long time the models oceanographers used to predict wave heights suggested anomalously tall waves barely existed. These models rested on the principle of linear superposition: that when two trains of waves meet, the heights of the peaks and troughs at each point simply sum. It was only in the late 1960s that Thomas Brooke Benjamin and J.E. Feir of the University of Cambridge spotted an instability in the underlying mathematics. When longer-wavelength waves catch up with shorter-wavelength ones, all the energy of a wave train can become abruptly concentrated in a few monster waves – or just one. Longer waves travel faster in the deep ocean, so this is a perfectly plausible real-world scenario. The pair went on to test the theory in a then state-of-the-art, 400-metre-long towing tank, complete with wave-maker, at a UK National Physical Laboratory facility on the outskirts of London. Near the wave-maker, which perturbed the water at varying speeds, the waves were uniform and civil. But about 60 metres on they became distorted, forming into short-lived, larger waves that we would now call rogues (though to avoid unwarranted splashing, the initial waves were just a few centimetres tall).

It took a while for this new intelligence to trickle through. “Waves become unstable and can concentrate energy on their own,” says Takuji Waseda, an oceanographer at the University of Tokyo in Japan.
“But for a long time, people thought this was a theoretical thing that does not exist in the real oceans.” Theory and observation finally crashed together in 1995 in the North Sea, about 150 kilometres off the coast of Norway. New Year’s Day that year was tumultuous around the Draupner sea platform, with a significant wave height of 12 metres. At around 3.20pm, however, accelerometers and strain sensors mounted on the platform registered a single wave towering 26 metres over its surrounding troughs. According to the prevailing wisdom, this was a once-in-10,000-year occurrence.

The Draupner wave ushered in a new era of rogue-wave science, says physicist Ira Didenkulova at Tallinn University of Technology in Estonia. In 2000, the European Union initiated the three-year MaxWave project. During a three-week stretch early in 2003, it used boat-based radar and satellite data to scan the world’s oceans for giant waves, turning up 10 that were 25 metres or more tall. We now know that rogue waves can arise in every ocean. The North Atlantic, the Drake Passage between Antarctica and the southern tip of South America, and the waters off the southern coast of South Africa are particularly prone. Rogues possibly also occur in some large freshwater bodies such as the Great Lakes of North America. That casts historical accounts in a new light and rogue waves are now thought to have had a part in the unexplained losses of some 200 cargo vessels in the two decades preceding 2004.

So rogue waves exist, but what makes one in the real world? Miguel Onorato at the University of Torino, Italy, has spent more than a decade trying to answer that question. His tool is the non-linear Schrödinger equation, which has long been used to second-guess unpredictable situations in both classical and quantum physics. Onorato uses it to build computer simulations and guide wave-tank experiments in an attempt to coax rogues from ripples.

Gradually, Onorato and others are building up a catalogue of real-world rogue-generating situations. One is when a storm swell runs into a powerful current going the other way. This is often the case along the North Atlantic’s Gulf Stream, or where sea swells run counter to the Agulhas current off South Africa. Another is a “crossing sea”, in which two wave systems – often one generated by local winds and a sea swell from further afield – converge from different directions and create instabilities. Crossing seas have long been a suspect. A 2005 analysis used data from the maritime information service Lloyd’s List Intelligence to show that, depending on the precise definition, up to half of ship accidents chalked up to bad weather occur in crossing seas. In 2011, the finger was pointed at a crossing sea in the Draupner incident, and Onorato thinks it might also have been the Louis Majesty’s downfall. When he and his team fed wind and wave data into his model to “hindcast” the state of the sea in the area at the time, it indicated that two wave trains were converging on the ship, one from a north-easterly direction and one more from the south-east, separated by an angle of between 40 and 60 degrees.

Simpler situations might generate rogues, too. Last year, Waseda revisited an incident in December 1980 when a cargo carrier loaded with coal lost its entire bow to a monster wave with an estimated height of 20 metres in the “Dragon’s Triangle”, a region of the Pacific south of Japan that is notorious for accidents.
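(A technical aside, since the article names but never writes down Onorato's tool: in one standard convention, the one-dimensional deep-water nonlinear Schrödinger equation for the slowly varying wave envelope $A(x,t)$, with carrier frequency $\omega_0$ and wavenumber $k_0$, reads

$$i\left(\frac{\partial A}{\partial t} + c_g\frac{\partial A}{\partial x}\right) - \frac{\omega_0}{8k_0^2}\frac{\partial^2 A}{\partial x^2} - \frac{\omega_0 k_0^2}{2}\,|A|^2A = 0,$$

where $c_g = \omega_0/2k_0$ is the group velocity. The Benjamin-Feir instability of this equation is what lets a uniform wave train concentrate its energy into a few large waves. Sign and coefficient conventions vary between authors.)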
A Japanese government investigation had blamed a crossing sea, but when Waseda used a more sophisticated wave model to hindcast the conditions, he found it likely that a strong gale had poured energy into a single wave system far larger than conventional models allowed. He thinks such single-system rogues could account for other accidents, too – and that 
the models need further updating. “We used to think ocean waves could be described simply, but it turns out they’re changing at 
the same pace and same time scale as the wind, which changes rapidly,” he says. In 2012, Onorato and others showed that the models even allow for the possibility of “super rogues” towering as much as 11 times the height of the surrounding seas, a possibility since borne out in water-tank experiments. With climate change potentially whipping up more intense storms, such theoretical possibilities are becoming a serious practical concern. From 2009 to 2013, the EU funded a project called Extreme Seas, which brought shipbuilders together with academic researchers including Onorato, with the aim of producing boats with hulls designed to withstand rogue waves.

That is a high-cost, long-term solution, however. The best defence remains simply knowing when a rogue wave is likely to strike. “We can at least warn that sea states are rapidly changing, possibly in a dangerous direction,” says Waseda. Various indices have been developed that aim to convert raw satellite and sea-state data into this sort of warning. One of the most widely used is the Benjamin-Feir index, named after the two pioneers of rogue-wave research. Formulated in 2003 by Peter Janssen of the European Centre for Medium-Range Weather Forecasts in Reading, UK, it is calculated for sea squares 20 kilometres by 20 kilometres, and is now incorporated into the centre’s twice-daily sea forecasts. “Ship routing officers use it as an indicator to see whether they should go through a particular area,” says Janssen.

The ultimate aim would be to allow ships to do that themselves. Most large ocean-going ships now carry wide-sweeping sensors that determine the heights of waves by analysing radar echoes. Computer software can turn those radar measurements into a three-dimensional map of the sea state, showing the size and motions of the surrounding swell. It would be a relatively small step to include software that can flag up indicators of a sea about to go rogue, such as quickly changing winds or crossing seas. Such a system might let crew and passengers avoid at-risk areas of a ship. The main bar to that happening is computing power: existing models can’t quite crunch through all the fast-moving fluctuations of the ocean rapidly enough to generate fine-grained warnings in real time. For Waseda, the answer is to develop a central early warning system, such as those that operate for tsunamis and tropical storms, to inform ships about to leave port. Thanks to our advances in understanding a phenomenon whose existence was doubted only decades ago, there is no reason now why we can’t do that for rogue waves, says Waseda. “At this point it’s not a shortage of theory, but a shortage of communication.” - New Scientist

Seven giants

In 2007, Paul Liu at the US National Oceanic and Atmospheric Administration compiled a catalogue of more than 50 historical incidents probably associated with rogue waves. Here are some of the most significant:

1498 Columbus recounts how, on his third expedition to the Americas, a giant wave lifts up his boats during the night as they pass through a strait near Trinidad. Supposedly using Columbus’s words, to this day this area of sea is called the Bocas del Dragón – the Mouths of the Dragon.

1853 The Annie Jane, a ship carrying 500 emigrants from England to Canada, is hit. Only about 100 make it to shore alive, to Vatersay, an island in Scotland’s Outer Hebrides.

1884 A rogue wave off West Africa sinks the Mignonette, a yacht sailing from England to Australia. The crew of four escape in a dinghy. After 19 days adrift, the captain kills the teenage cabin boy to provide food for the other three survivors.

1909 The steamship SS Waratah disappears without trace with over 200 people on board off the coast of South Africa – a swathe of sea now known for its high incidence of rogue waves.

1943 Two monster waves in quick succession pummel the Queen Elizabeth cruise liner as it crosses the North Atlantic, breaking windows 28 metres above the waterline.

1978 The German merchant navy supertanker MS München disappears in the stormy North Atlantic en route from Bremerhaven to Savannah, Georgia, leaving only a scattering of life rafts and emergency buoys.

2001 Just days apart, two cruise ships – the Bremen and the Caledonian Star – have their bridge windows smashed by waves estimated to be 30 metres tall in the South Atlantic. - New Scientist
A hard thermal loop benchmark for the extraction of the nonperturbative Q\bar{Q} potential

Yannis Burnier and Alexander Rothkopf
Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, University of Bern, 3012 Bern, Switzerland
July 16, 2019

The extraction of the finite temperature heavy quark potential from lattice QCD relies on a spectral analysis of the Wilson loop. General arguments tell us that the lowest lying spectral peak encodes, through its position and shape, the real and imaginary part of this complex potential. Here we benchmark this extraction strategy using leading order hard-thermal loop (HTL) calculations. That is, we analytically calculate the Wilson loop and determine the corresponding spectrum. By fitting its lowest lying peak we obtain the real and imaginary part and confirm that knowledge of the lowest peak alone is sufficient for obtaining the potential. Access to the full spectrum allows an investigation of spectral features that do not contribute to the potential but can pose a challenge to numerical attempts of an analytic continuation from imaginary time data. Differences in these contributions between the Wilson loop and gauge fixed Wilson line correlators are discussed. To better understand the difficulties in a numerical extraction we deploy the Maximum Entropy Method with extended search space to HTL correlators in Euclidean time and observe how well the known spectral function and values for the real and imaginary part are reproduced. Possible venues for improvement of the extraction strategy are discussed.

I Introduction

Twenty-seven years ago Matsui and Satz Matsui:1986dk () proposed the melting of the $J/\psi$, i.e. the ground state of the charmonium vector channel, as a signal for the deconfinement transition in heavy-ion collisions. The recent success of relativistic heavy-ion experiments Adare:2006ns (); Tang:2011kr (); Chatrchyan:2012np (); Abelev:2012rv () in observing the relative suppression of charmonium and bottomonium serves as further motivation to develop a first principle description of the phenomena. In the framework of effective field theories, heavy quarks can be described by non-relativistic quantum chromodynamics (NRQCD), obtained from QCD by integrating out the hard energy scale, given by the rest mass of the heavy quarks. To describe the bound state of two quarks, one can further integrate out the typical momentum exchange between the bound quarks (see Brambilla:2004jw () and references therein), which leads to potential non-relativistic QCD (pNRQCD). In this effective field theory the bound state is described by a two point function satisfying a Schrödinger equation.

At zero temperature, the potential between a heavy quark and anti-quark is defined from the late time behavior of a Wilson loop and can be directly calculated in Euclidean-time lattice simulations or in perturbation theory. At small distances, where perturbation theory converges, both results agree Bazavov:2012ka (). At high temperature, above the QCD phase transition, one might first expect that the problem becomes simpler, as the potential is not confining anymore. Actually, this is not the case, since even a proper definition of the potential becomes non-trivial. In fact, the presence of a heat bath is most conveniently incorporated in a Euclidean time framework with finite temporal extent. There the Wilson loop depends on imaginary time and needs to be analytically continued to real time.
Only from the large real-time behavior, i.e. $t \to \infty$, can the finite temperature potential be extracted, and it happens to be complex Laine:2006ns (); Brambilla:2008cx (). Its imaginary part can be interpreted as Landau damping Beraudo:2007ky () and describes the decaying correlation of the system with its initial state due to scatterings in the plasma. Along the lines presented in Laine:2006ns (), one can compute the potential in finite temperature perturbation theory. This is a demanding task, as resummations need to be carried out in order to cure infrared divergences. To this day the full result is known only to leading order, whereas a short distance expansion has been calculated to higher order Brambilla:2010vq (); Brambilla:2011mk (). Even if higher orders were available, observing the deconfining transition will remain beyond the reach of perturbation theory.

In Ref. Rothkopf:2011db (), a method was proposed to compute the heavy quark potential non-perturbatively from lattice QCD simulations. Starting from the measurement of the Euclidean Wilson loop on the lattice, its spectral function is reconstructed via the maximum entropy method (MEM). The definition of the potential is based on the peak structure of the Wilson loop spectrum. Previous numerical evaluations however led to unexpected results: both the real and imaginary part appear to grow linearly at distances where other quantities, such as the free energies, already show significant screening effects. This behavior persisted even at temperatures much larger than that of the QCD phase transition, where on general grounds one would expect that the confining potential disappears because of Debye screening Digal:2005ht (). This problem was solved recently Burnier:2012az () by carefully disentangling the different timescales in the problem. Taking into account the remnants of early-time non-potential physics, the lowest lying spectral peak was found to deviate from a naive Lorentzian shape through skewing. Extracted values for the real and imaginary part based on this functional form result in a potential that is compatible with Debye screening.

In this paper our aim is twofold: at first we wish to ascertain whether fitting of the lowest lying spectral peak indeed suffices to determine the static heavy quark potential, given the spectral function of the Wilson loop or even of gauge fixed Wilson line correlators. Subsequently it is our goal to better understand the challenges facing a numerical determination of the spectral function by Bayesian analytic continuation. Since in the perturbative approach both the Euclidean correlator and the spectrum are known, the outcome of the numerical reconstruction can be readily compared. In section II we review the basics of the method of Ref. Rothkopf:2011db () and its improvement introduced in Burnier:2012az (), which form the basis of the extraction of the potential from lattice simulations. From calculations of the real-time Wilson loop as well as gauge fixed Wilson line correlators in section III, we determine and investigate the corresponding spectral functions in section IV. While in section V we apply the peak fitting procedure of Burnier:2012az () to the HTL spectra, section VI scrutinizes how well these spectra can be obtained with the maximum entropy method from the HTL Euclidean correlators. Our conclusion in section VII discusses the limitations of the method and points toward further possible improvements.
II Heavy quark potential from Euclidean correlators

The description of the interactions between a pair of heavy quarks and antiquarks at finite temperature in terms of a quantum mechanical potential requires the relevant physics to be well separated from the energy scale of pair creation. In particular, the corresponding scale hierarchy needs to be fulfilled (see for instance Brambilla:2013dpa () for the discussion of the different limiting cases and their physics), which is satisfied exactly in the static limit ($m_Q \to \infty$). In that case, the propagation amplitude of an infinitely heavy quark pair can be described by a rectangular temporal Wilson loop $W_\square(t,r)$, where $t$ and $r$ are its temporal and spatial extent. This real-time quantity is defined as the closed contour integral over the matrix valued gauge field along the path of the heavy quarks.

If the scale hierarchy holds, it is permissible to substitute the field theoretical interactions by an instantaneous potential, so that $W_\square(t,r)$ obeys a Schrödinger-type equation. At late times, one expects the corresponding function to become time independent, so that we may define the heavy quark static potential as

$$V(r) = \lim_{t\to\infty} \frac{i\,\partial_t W_\square(t,r)}{W_\square(t,r)}.$$

Due to the complex weighting factor in Feynman’s path integral, we cannot calculate the real-time Wilson loop using lattice QCD Monte Carlo simulations. Instead we have to rely on an analytic continuation of Euclidean time quantities that are accessible by these numerical methods. In order to connect the heavy-quark potential and the Euclidean Wilson loop, one introduces a spectral representation of the real-time quantity, in which the time dependence resides entirely in the integral kernel. Note that the function appearing there is not just a Fourier transform but can be shown to be a positive definite spectral function Rothkopf:2009pk () (it is important to distinguish this $r$-dependent Wilson loop spectral function from the quarkonium spectral function Burnier:2007qm (); Burnier:2008ia (); Ding:2012iy (); Ding:2012sp () representing the physical quarkonium spectrum). After analytic continuation one observes that only the integral kernel has changed, whereas the spectral function remains the same.

Using the Maximum Entropy Method (MEM), a form of Bayesian inference, it is in principle possible, albeit challenging, to invert Eq. (6) and thus to extract the spectral function from the Euclidean data. Note that the model independent method of Refs. Cuniberti:2001hm (); Burnier:2011jq (); Burnier:2012ts () is not directly applicable, as the Wilson loop is not periodic. However a similar method could probably be developed from the general results of Refs. Viano1 (); Viano2 (). Once we are in possession of the spectral function we can insert Eq. (5) into Eq. (4), which yields the potential Rothkopf:2009pk (). Direct application of this formula in the case of a numerically reconstructed spectral function is very difficult. It is however possible to determine those structures in the spectral function which dominate the integral in the infinite time limit. If we suppose that the time independent potential description holds for all times, i.e. in equation (3), an intuitive connection between spectral features and the static potential can be established. In this case equation (3) can be solved and the spectral function turns out to be a simple Breit-Wigner peak characterized by its peak position and width. In general, however, the function is time dependent at early times, and one expects that a wealth of structures, different from the simple Lorentzian example, exists in the spectrum of the Wilson loop at finite temperature.
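To make the connection between the lowest peak and the potential explicit, here is the schematic one-line derivation behind the Breit-Wigner statement above (conventions assumed, not quoted from the paper): if the potential description holds at all times with a constant complex value $V(r)$ with $\mathrm{Im}\,V(r) < 0$, then

$$i\partial_t W(t,r) = V(r)\,W(t,r) \quad\Rightarrow\quad W(t,r) = e^{-iV(r)t},$$

and the Fourier transform over $t>0$ gives the Lorentzian

$$\rho(\omega,r) \propto \frac{|\mathrm{Im}\,V(r)|}{\big(\omega-\mathrm{Re}\,V(r)\big)^2 + \big(\mathrm{Im}\,V(r)\big)^2},$$

peaked at $\omega = \mathrm{Re}\,V(r)$ with half-width $|\mathrm{Im}\,V(r)|$.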
Note that if the potential description is ultimately applicable, the function will become time independent at late times and therefore a corresponding well defined lowest peak must exist. This part of the spectrum encodes all the relevant information on the potential, and it alone needs to be reconstructed from the Euclidean correlator. In Ref. Rothkopf:2011db () it was assumed that the lowest peak is solely described by the late time behavior of the potential and is not affected by the time dependence of the potential at short times. It was shown in Ref. Burnier:2012az () that this is actually not the case. The short time dynamics (non-potential terms, bound state formation) does not just create additional structures at high frequency but also significantly modifies the shape of the low frequency peak. The most general form of this lowest peak, derived in Ref. Burnier:2012az (), is a skewed Lorentzian, Eq. (9). Note that this result can also be obtained from pNRQCD, where the skewing phase arises from the phase of the singlet normalization factors Brambilla:2004jw ().

In order to calculate the potential from Euclidean correlators we thus need to carry out the following steps:
1. Calculate the Wilson loop at several separation distances $r$ for all possible values of $\tau$ along the imaginary time axis.
2. Use Bayesian inference to extract the most probable spectrum for each value of $r$.
3. Use Eq. (7) to determine the potential, either
(a) by direct Fourier transform of the full spectrum, which is usually impractical due to the uncertainties introduced by the MEM, or
(b) by fitting the lowest lying peak with the functional form (9) and analytically carrying out the Fourier transform in Eq. (7).

In the following section we prepare a testing ground for this extraction strategy based on analytic calculations of the real-time and Euclidean Wilson loop in the HTL resummed perturbative approach. Since the analytic continuation can be performed explicitly in HTL, item three of the above list can be tested independently from questions arising from possible inadequacies of the maximum entropy method. The availability of both the spectrum and Euclidean data points, on the other hand, furthermore allows us to check the degree of success of the MEM itself in the form of a realistic mock data analysis.

III Correlators from HTL resummed perturbation theory

III.1 Wilson loop

In perturbation theory, the Wilson loop is calculated as an expansion in the coupling. The first non-trivial term contains only a one gluon exchange and is not enough to describe the correct physics for large Euclidean time $\tau$. To improve this situation, we resort to the usual ’exponential’ resummation Beraudo:2007ky (): a better approximation is obtained by exponentiating the first non-trivial term, as this resums all ’ladder diagrams’ and contains the correct leading order large-$\tau$ behavior.

III.1.1 Leading order term

We now turn to the calculation of the leading order term, for which we set the quark separation along the third spatial axis. In hard thermal loop (HTL) resummed perturbation theory, all contributing diagrams have one HTL gluon running between the lines of the Wilson loop Laine:2006ns (). The gluon HTL propagator is written in Euclidean space and covariant gauge, with the HTL self-energies given in Appendix A together with the corresponding transverse and longitudinal projectors. Following Ref. Laine:2006ns (), we rewrite the HTL self-energies as spectral functions, so that we can perform the sum over Matsubara frequencies analytically, where we abbreviated the dependence of the second term through an auxiliary function. We can write the spatial momentum vector in spherical coordinates.
In an isotropic plasma, the HTL spectral functions and self-energies depend on $p = |\mathbf{p}|$ only. Integrating over the azimuthal angle is trivial, and the remaining angular integral involves the sine integral function $\mathrm{Si}(x)$. Performing the angular integrals gives equation (III.1.1). The first line of equation (III.1.1) is linear in $\tau$, whereas the next lines are symmetric around $\tau = \beta/2$. We will consider these terms separately in the following:

III.1.2 Part linear in $\tau$

The part linear in $\tau$ is formally divergent. Using dimensional regularization, the result can be read off from Ref. Laine:2006ns (); the first line of equation (III.1.1) hence yields the real part of the potential in the infinite time limit. Note that the result is finite (for $r \neq 0$) and the divergence at $r = 0$ reflects the behavior of the Coulomb potential. On the lattice, this term behaves differently (the difference with dimensional regularization can be traced back to an infinite constant that is removed in the dimensional regularization procedure). Roughly speaking, the integral is truncated by the lattice cutoff and thus finite. In this case it is easy to see that it vanishes at $r = 0$, which is expected, as a Wilson loop without area is equal to unity. For finite $r$, it decreases quickly and formally diverges in the limit of an infinite cutoff. This behavior cannot be canceled by the other terms in equation (III.1.1), as they have a different dependence. It should also not be removed, as it encodes the Coulomb part of the potential that we want to obtain. To make a connection to the lattice, we therefore introduce a UV cut-off $\Lambda$, mimicking the finite lattice spacing. In this case, performing the integral over the momentum from zero to $\Lambda$ in equation (22) gives an expression involving the sine and cosine integral functions $\mathrm{Si}$ and $\mathrm{Ci}$. From the UV regularized version of the correlator we get the following potential, which is plotted in Fig. 7 together with the continuum ($\Lambda \to \infty$) potential.

III.1.3 Symmetric part

We calculate here the symmetric part of the correlator containing lines 2-4 of equation (III.1.1). The functions receive a contribution from the cuts of the HTL self-energies for spacelike momenta. For the opposite case they vanish, except for a $\delta$-function contribution coming from the pole of the propagator. In the following we calculate the contribution from the cuts and poles of the transverse and longitudinal self-energy separately. As before, we introduce a cutoff on the momentum to mimic the effects of the lattice regularization.

Cut contributions. Using the symmetry around $\tau = \beta/2$, the cuts contribute to the Euclidean Wilson loop through integrals that should be performed numerically; the corresponding functions are given in Appendix A. Note that in Eq. (26), the limit $\Lambda \to \infty$ is well defined.

Pole contribution from the longitudinal spectral function. We can write the part of (III.1.1) coming from the pole contribution of the electric spectral function in terms of the solution of the corresponding dispersion relation; the remaining integral is performed numerically. The limit $\Lambda \to \infty$ also exists in this case (see Appendix B).

Pole contribution from the transverse spectral function. We proceed in a similar way for the transverse spectral function. Here, the limit $\Lambda \to \infty$ does not exist: the integral in equation (III.1.3) is linearly divergent (see Appendix B). Note that such divergences were already observed in Burnier:2009bk (); Berwein:2012mw (), where the Wilson loop of maximal time extent is shown to diverge at next-to-leading order. The leading order divergence found in Eq. (III.1.3) has yet a different nature and consistently vanishes in the appropriate limit.
In dimensional regularization, it can be shown (see Appendix C) to match the cusp divergence Korchemsky:1987wg (); Brandt:1982gz (); see also Berwein:2012mw (). Here, we are not interested in trying to renormalize the Wilson loop. It is not needed for our purposes, as we aim at a comparison with lattice results, which are also not renormalized. It is however interesting to note that these cusp divergences do not contribute to the potential and only make the Wilson loop heavily suppressed, hence harder to measure with high accuracy. Removing these divergences in the lattice measurements without affecting the potential would be of great help to improve the accuracy of the lattice data. One strategy deployed to this end could be the smearing of gluonic links Bazavov:2013zha ().

III.1.4 Imaginary part of the potential

From the symmetric part, we obtain the imaginary part of the potential. As in the end the infinite time limit will be taken, it is sufficient to consider the low frequency part of the integrals. Performing the time derivative, using equation (A) and approximating for small frequencies, we get an expression which coincides with the one obtained in Laine:2006ns (); Beraudo:2007ky (); Brambilla:2008cx ().

III.1.5 Numerical evaluation

To make close connection to actual lattice data with spatial lattice spacing $a$, we choose to fix the cut-off in our HTL calculations to the value which naively corresponds to the largest momentum accessible under this finite resolution. Based on a numerical evaluation of the remaining integrals in Eqs. (21, 25-III.1.3), we can generate an arbitrarily large number of datapoints spanning the imaginary time axis, which carry numerical errors of the order of the machine precision only. Comparing this ideal HTL Euclidean regularized data to actual measurements from a Monte Carlo simulation in Fig. 1, we find a strong qualitative resemblance. Both graphs exhibit three characteristic features: a suppression region at small $\tau$, an upward trend at $\tau \approx \beta$, both of which are closely linked to the divergences observed in III.1.3, and the datapoints at intermediate $\tau$, which are the ones encoding the potential. The latter exhibit nearly exponential behavior for small separation $r$, where the imaginary part is also small, but begin to show noticeable curvature for larger separation distances.

After calculating the real-time values (see Fig. 2) using a similar numerical evaluation of the integrals in (21, 25-III.1.3), it is possible to obtain the time dependent potential of Eq. (3). As shown in Fig. 2, we can explicitly observe its approach to a constant value and thus the emergence of a simple exponential behavior of the Wilson loop. Note that in Fig. 2 we show times where the oscillatory behavior is clearly visible, while a constant value is actually reached only for larger times. We refrain from attaching any physical meaning to the length of the swing-in period, as it is dominated by the same cusp divergences that lead to the suppression of the Euclidean Wilson loop data points.

Figure 1: (top) The Euclidean HTL Wilson loop with momentum regularization (in GeV), evaluated at a fixed temperature (in MeV) for a sequence of spatial separations. (bottom) Quenched lattice QCD Wilson loop from an anisotropic lattice.

Figure 2: (top) The HTL real-time Wilson loop with momentum regularization (in GeV), evaluated at fixed separation and temperature (in MeV). (bottom) Time evolution of the quantity obtained from the Wilson loop through Eq. (3).

III.2 Gauge fixed Wilson line correlator

Cyclic Wilson line correlators (i.e.
color singlet Polyakov loops) fixed to Coulomb gauge have been extensively studied on the lattice, both for the determination of the zero temperature potential as well as in investigations into the free-energy difference between a medium with and without inserted heavy quarks (see for instance Maezawa:2011aa (); Kaczmarek:2005ui ()). Due to the absence of spatial Wilson lines connecting the temporal links, these quantities offer a significantly better signal to noise ratio than the Wilson loop, especially if the multilevel algorithm Luscher:1996ug () is applied. Besides the technical question of whether the removal of spatial connectors (or e.g. the application of smearing on spatial links) can lead to an improved lattice observable for the extraction of the potential, it is conceptually of interest to understand whether gauge independent information, such as the potential, can be extracted from a gauge dependent quantity such as the Wilson line correlators (the crucial difference to potential models is that we do not investigate a single point in Euclidean time; it is the full Euclidean time dependence of the gauge fixed correlator that is used to reveal the values of the potential).

We proceed with the determination of the Euclidean time Wilson line correlator analogously to III.1. Calculating in leading order hard thermal loop (HTL) resummed perturbation theory, we obtain an expression which contains fewer terms than the Wilson loop result (III.1.1).

III.2.1 Coulomb gauge

In Coulomb gauge, the HTL Euclidean gluon propagator involves the same self-energies as in covariant gauge (see Appendix A). Inserting the propagator into the expression (33) for the Wilson line correlator, we rewrite the HTL self-energies as spectral functions, use the formulas collected in Appendix A to perform the sum over Matsubara frequencies, and carry out the angular integrations. We find that the Coulomb gauge Wilson line correlator features a similar structure as the Wilson loop, the symmetric expression however being of much simpler form, depending only on the longitudinal HTL spectral function. At this point we can already anticipate that it is these terms, present in both the Wilson loop and the Wilson line correlator, which contribute to the values of the potential. In particular, the cusp singularity connected to the transverse spectral function identified in III.1.3 is absent from the above expression.

III.2.2 Potential from the Wilson line correlator

As in the case of the Wilson loop, a closed expression for the potential can be obtained. In the infinite time limit one is led to the same result we encountered for the Wilson loop, with the imaginary part given by the corresponding integral expression. From a practical standpoint this result is encouraging, as it tells us that (to leading order in HTL) the information content regarding the potential encoded in the Coulomb gauge Wilson line correlator is the same as the one found in the Wilson loop. If such a relation persisted into the non-perturbative realm, the absence of cusp divergences, and with it the improved signal to noise ratio, would make this an ideal observable to reconstruct the potential.

III.2.3 Numerical evaluation

As for the Wilson loop, we wish to compare the Euclidean HTL correlator to actual values measured in quenched lattice QCD Monte Carlo simulations. While the symmetric term is finite, the part linear in $\tau$ still requires a regularization. We deploy the same momentum space cutoff as introduced in III.1.2 and use the same value in the following.
The absence of divergences in the symmetric part of the correlator leads to a significantly different behavior along the imaginary time axis, as can be seen in the top graph in Fig. 3, where we plot the HTL Wilson line correlator and the first five HTL Wilson loops as comparison. The large suppression at early times as well as the upward trend near $\tau = \beta$ are almost absent. Hence most of the datapoints actually carry information on the potential. Interestingly, in the case of the lattice QCD Wilson line correlator, the upward trend is still visible between the last and second to last time step. However, contrary to the leading order HTL result, the values of these two different correlators on the lattice do not agree at $\tau = \beta$.

Figure 3: (top) The Euclidean time Coulomb gauge HTL Wilson line correlator with momentum regularization (in GeV), evaluated at a fixed temperature (in MeV) for a sequence of spatial separations. (bottom) Quenched lattice QCD Wilson line correlator fixed to Coulomb gauge from an anisotropic lattice. Note that contrary to the HTL result, the two correlators do not agree at $\tau = \beta$ on the lattice.

III.2.4 Covariant gauge

The Wilson line correlator can be calculated in covariant gauge as well. The result depends on the gauge parameter and contains additional end point divergences Aoyama:1981ev (). These terms however do not contribute in the infinite time limit, so that the obtained potential is again the same as in the Wilson loop case (40).

IV Spectral functions from HTL resummed perturbation theory

IV.1 Spectrum of the Wilson loop

The spectral function can be directly calculated from the real-time correlator via a Fourier transform. We start by analytically investigating the low frequency behavior of this function, as it allows insight into the spectral structures that encode the physics of the heavy quark potential and will be used in its extraction in section V. To benchmark the MEM extraction of the spectrum from Euclidean correlators it is however necessary to compare to the full spectrum, which we will determine from Eq. (42) numerically.

IV.1.1 Analytical estimate for the low energy part of the spectral function

Starting from equation (III.1.1), we introduce the momentum cutoff in the argument of the second exponential function. For small frequencies, the main contribution to the spectral function (43) comes from small values in the above integral. Expanding equation (IV.1.1) gives an expression in which all terms with negative powers of $t$ are retained, as they dominate the integral for late times. Note that the imaginary part of the potential appears as an overall factor in this expression. Within this approximation, the remaining integrals are carried out analytically, with the overall normalization constant left unspecified. From this result, we see that the pole of the spectral function indeed resides at $\omega = \mathrm{Re}\,V(r)$ and the width of the peak is closely related to the imaginary part of the potential. The result however is not a Lorentzian, but is precisely of the form (9) derived on general grounds in Burnier:2012az (). Note that the phase related to the skewing of the spectral peak is, interestingly, also given by the imaginary part of the potential.

IV.1.2 Full spectral function

We proceed to calculate the full spectral function by numerically integrating equation (42). Applying the discrete Fourier transform to the real-time Wilson loop evaluated on a set of equally spaced points in time, we obtain its values for a wide range of frequencies, partly shown in Fig. 4.
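As a toy illustration of this Fourier step (not the authors' code; the potential values, grids and late-time form of the Wilson loop below are invented for demonstration):

import numpy as np

# Toy late-time Wilson loop W(t) = exp(-i V t) with an illustrative
# complex potential V; all numbers here are made up for demonstration.
ReV, ImV = -0.5, -0.05             # Im V < 0 so that |W(t)| decays
V = ReV + 1j * ImV

dt = 0.1
t = np.arange(0.0, 200.0, dt)      # real-time grid with spacing dt
W = np.exp(-1j * V * t)

# Spectral function rho(omega) ~ Re of the integral of exp(i*omega*t)*W(t),
# evaluated on a frequency grid by direct summation of the discretized integral.
omega = np.linspace(-1.5, 0.5, 1001)
rho = (np.exp(1j * np.outer(omega, t)) @ W).real * dt

print(omega[np.argmax(rho)])       # the peak sits near Re V = -0.5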
As expected from the minute values of the imaginary part at small separation distances, the peak one finds is extremely sharp. However, it also becomes clear that the amplitude of the peak is rapidly suppressed as $r$ increases. At the same time, non-potential contributions related to the divergent terms in the symmetric part of the correlator give rise to a huge background structure spanning a wide range of frequencies. Note that a step in the otherwise smooth spectral function is visible; this is a manifestation of the momentum cutoff we introduced to regularize the formally divergent terms. At the same time one can observe that the spectrum continues beyond these frequencies, which is a reminder that the cutoff was not imposed on the HTL gluon spectral functions. In section V we will use the fitting function (9) to attempt an extraction of the heavy quark potential from the low frequency structures depicted in Fig. 4.

Figure 4: The HTL Wilson loop spectral function for different spatial separations (in fm). Note that the peak is extremely sharp but that its amplitude becomes very small at large $r$ in comparison to the huge background induced mostly by the cusp divergences.

IV.2 Spectrum of the Wilson line correlator in Coulomb gauge

Analogously we can obtain the spectral function related to the real-time Wilson line correlator. At leading order in the HTL resummed expansion, the spectral function can be calculated analytically close to its peak at small frequency. Surprisingly, at leading order in the HTL approximation we find that the skewing is exactly the same as for the Wilson loop. Note that the same result can also be obtained in the covariant gauge.

IV.2.1 Full spectral function

The full spectral functions for the HTL Wilson line correlator are plotted in Fig. 5. One immediately realizes from a comparison with Fig. 4 that even though the peak position, width and skewing are equal to the Wilson loop case, the Coulomb gauge spectral function looks quite different. The first major difference is that the amplitude of the lowest lying peak depends much less on the separation distance $r$; the second is the virtual absence of the background terms populating a large frequency range in the Wilson loop case. Both facts are of course related, since their origin lies in the suppression of the Euclidean Wilson loop correlator induced in the presence of cusp divergences.

Figure 5: The spectral function of the HTL Wilson line correlator in Coulomb gauge for different spatial separations (in fm). While the peak position, width and skewing are exactly as in the Wilson loop case (Fig. 4), the absence of the cusp divergences leads to a significantly reduced background and a much higher amplitude at larger separation distances. Note that the plotting range is much smaller than in Fig. 4.

V The potential from perturbative spectral functions

Now that we are in possession of the full HTL spectra obtained from both the Wilson loop and the Wilson line correlator in Coulomb gauge, we can test whether the knowledge of the lowest lying spectral features alone suffices to reconstruct the values of the inter-quark potential in practice. To this end we fit the low frequency region of the spectrum using the functional form (9) and compare the extracted values with the analytically calculated potential. We show here the fitting of the Wilson loop spectrum only, since its application to the Wilson line spectrum gives exactly the same results (the potential and the skewing are the same).
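A minimal sketch of the naive Lorentzian baseline fit (the skewed form (9) itself is not reproduced in this text, so only the plain Lorentzian is shown; the arrays omega and rho are assumed to hold the low-frequency part of a spectral function, e.g. from the sketch above):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, A, ReV, ImV):
    # Naive Lorentzian peak: the position ReV estimates Re V and the
    # width |ImV| estimates Im V; the skewed form of the paper adds a
    # phase and polynomial background terms on top of this baseline.
    return A * np.abs(ImV) / ((w - ReV)**2 + ImV**2)

p0 = (1.0, omega[np.argmax(rho)], 0.1)   # crude initial guess
popt, pcov = curve_fit(lorentzian, omega, rho, p0=p0)
A_fit, ReV_fit, ImV_fit = popt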
In section VI, where the numerical reconstruction of the spectra from Euclidean time correlator data is concerned, the differences in e.g. the background contributions will however play a major role.

Figure 6: Fits to the UV regularized HTL spectral function with a naive Lorentzian (L), a skewed Lorentzian (LS) and a skewed Lorentzian with additional polynomial terms (LSC0, LSC1, LSC2). Note that only the blue points (labeled ”Fitted”) are used for the fit and hence only these points enter the determination of the potential.

Figure 7: (top) Real and (bottom) imaginary part of the UV regularized HTL heavy quark potential (red, solid line) as well as the potential without cutoff (gray, dashed line). The various symbols denote the extracted values from fits of the HTL Wilson loop spectra based on a Lorentzian (L), skewed Lorentzian (LS) and a skewed Lorentzian with background terms (LSC0, LSC1, LSC2). Note that the simple Lorentzian cons
Why vacations are essential for physics

Vacations are good for your health: they allow you to get away from the daily grind and let yourself unwind. They are vital in enabling you to recharge your batteries and get your psyche away from work, work, work. While they allow us to forget about the office for a bit, they can also help stimulate us to create new and innovative ideas. Such an event occurred for a young German physicist struggling to make the breakthrough he was so very close to in 1925. Werner Heisenberg needed a break.

Something’s got to give

He had experienced a mental block, and to make matters worse, he was suffering from a horrendous bout of hayfever. Heisenberg resided in Göttingen, and during one summer he was tortured by persistent allergic reactions, so something had to change. He went on vacation to Helgoland, a tiny island in the middle of the North Sea, to give his sinuses a rest more than anything else.

A eureka moment

Changing location really helped him, as the change of scenery allowed him to breathe and think more clearly. Finding inspiration for his research, he realized his formulation relied on quantities that could not be measured, so he recast the mathematics in terms of quantities that could be. Upon his return to Göttingen, his research partner managed to connect the dots, and the German research team took the tentative first steps into what is now known as modern quantum mechanics.

Hey! I’m tired too

The strategic taking of a vacation was repeated, with equal success, in the same year, 1925. Erwin Schrödinger was working on his own quantum problem, regarding states within atoms. Desperately trying to make a breakthrough in his equations, he kept finding himself confronted with mathematical hurdles. After months of working and not getting anywhere fast, Schrödinger took a skiing vacation with one of his lady friends. This bout of rest and recreation was just the ticket, and, after hitting the slopes during the day and working over a desk in the evenings, he had found the equation he was so desperately looking for. That equation is now known as the Schrödinger equation, and it allows us to describe the states of electrons in hydrogen in terms of de Broglie’s electron waves.

Mental fatigue is not your friend

Clearly, physicists have demanding jobs, and affording themselves a break every so often will enable them to refresh and reboot, something which can boost the creative process. Instead of slogging away for 12 hours a day in a lab, it is useful to recognize when you aren’t getting anywhere and to come back in a day or two with a fresher pair of eyes.

Please, boss, it’ll help you too

While not every physicist will figure out such groundbreaking theories, the examples of the two scientists above show that even the most brilliant mind needs time to stop working and chill out. Often bosses want more and more from their employees and think that going at it for hours upon hours will get the job done; sometimes you need to take a step, or two, backward to move forward. So if you’re stuck on a problem in your work, see if your boss will give you a little break; it might prove to be the best solution to both your problem and theirs.
Wednesday, October 16, 2019

Dark matter filaments. Computer simulation. [Image: John Dubinski (U of Toronto)]
General relativity does not tell us what is going on.

Monday, October 07, 2019

What does the future hold for particle physics?

Here is Valerie Jamieson writing for New Scientist in 2008:
Paul Langacker in 2010, writing for the APS:
The Telegraph in 2010:
A final one. Here is Steve Giddings writing in 2010 for

Wednesday, October 02, 2019

Has Reductionism Run its Course?

© Sabine Hossenfelder

But this simple story is too simple.

Sunday, September 29, 2019

Travel Update

The coming days I am in Brussels, for a workshop that I’m not sure where it is or what it is about. It also doesn’t seem to have a website. In any case, I’ll be away, just don’t ask me exactly where or why. On Oct 15, I am giving a public lecture at the University of Minnesota. On Oct 17, I am giving a colloquium in Cleveland. On Oct 25, I am giving a public lecture in Göttingen (in German). On Oct 29, I’m in Genoa giving a talk at the “Festival della Scienza” to accompany the publication of the Italian translation of my book “Lost in Math.” I don’t speak Italian, so this talk will be in English. On Nov 5th I’m speaking in Berlin about dark matter. On Nov 6th I am supposed to give a lecture at the Einstein Forum in Potsdam, though that doesn’t seem to be on their website. These two talks in Berlin and Potsdam will also be in German. On Nov 12th I’m giving a seminar in Oxford, in case Britain still exists at that point. Dec 9th I’m speaking in Wuppertal, details to come, and that will hopefully be the last trip this year. Next time I’m in the USA will probably be late March 2020. In case you would like me to stop by at your place, please get in touch. I am always happy to meet readers of my blog, so in case our paths cross, do not hesitate to say hi.

Friday, September 27, 2019

The Trouble with Many Worlds

Wednesday, September 18, 2019

Windows Black Screen Nightmare

Folks, I have a warning to utter that is somewhat outside my usual preaching. For the past couple of days one of my laptops has tried to install Windows updates but didn’t succeed. In the morning I would find an error message that said something went wrong. I ignored this because really I couldn’t care less what problems Microsoft causes itself. But this morning, Windows wouldn’t properly start. All I got was a black screen with a mouse cursor. This is the computer I use for my audio and video processing. Now, I’ve been a Windows user for 20+ years and I don’t get easily discouraged by spontaneously appearing malfunctions. After some back and forth, I managed to open a command prompt from the task manager to launch the Windows explorer by hand. But this just produced an error message about some obscure .dll file being corrupted. Ok, then, I thought, I’ll run an sfc /scandisk. But this didn’t work; the command just wouldn’t run. At this point I began to feel really bad about this. I then rebooted the computer a few times with different login options, but got the exact same problem with an administrator login and in the so-called safe mode. The system restore produced an error message, too. Finally, I tried the last thing that came to my mind, a factory reset. Just to have Windows inform me that the command couldn’t be executed. With that, I had run out of Windows-wisdom and called a helpline.
Even the guy on the helpline was impressed by this system’s fuckedupness (if that isn’t a word, it should be) and, after trying a few other things that didn’t work, recommended I wipe the disk clean and reinstall Windows. So that’s basically how I spent my day, today. Which, btw, happens to be my birthday. The system is running fine now, though I will have to reinstall all my software. Luckily enough my hard-disk partition seems to have saved all my video and audio files. It doesn’t seem to have been a hardware problem. It also doesn’t smell like a virus. The two IT guys I spoke with said that most likely something went badly wrong with one of those Windows updates. In fact, if you ask Google for Windows Black Screen you’ll find that similar things have happened before after Windows updates. Though, it seems, not quite as severe as this case. The reason I am telling you this isn’t just to vent (though there’s that), but to ask you that in case you encounter the same problem, please let us know. Especially if you find a solution that doesn’t require reinstalling Windows from scratch.

Update: Managed to finish what I meant to do before my computer became dysfunctional.

Monday, September 16, 2019

Why do some scientists believe that our universe is a hologram?

Today, I want to tell you why some scientists believe that our universe is really a 3-dimensional projection of a 2-dimensional space. They call it the “holographic principle” and the key idea is this. Usually, the number of different things you can imagine happening inside a part of space increases with the volume. Think of a bag of particles. The larger the bag, the more particles, and the more details you need to describe what the particles do. These details that you need to describe what happens are what physicists call the “degrees of freedom,” and the number of these degrees of freedom is proportional to the number of particles, which is proportional to the volume. At least that’s how it normally works. The holographic principle, in contrast, says that you can describe what happens inside the bag by encoding it on the surface of that bag, at the same resolution. This may not sound all that remarkable, but it is. Here is why. Take a cube that’s made of smaller cubes, each of which is either black or white. You can think of each small cube as a single bit of information. How much information is in the large cube? Well, that’s the number of the smaller cubes, so 3 cubed, i.e. 27, in this example. Or, if you divide every side of the large cube into N pieces instead of three, that’s N cubed. But if you instead count the surface elements of the cube, at the same resolution, you have only 6 times N squared. This means that for large N, there are many more volume bits than surface bits at the same resolution (a short numerical illustration appears further below). The holographic principle now says that even though there are so many fewer surface bits, the surface bits are sufficient to describe everything that happens in the volume. This does not mean that the surface bits correspond to certain regions of volume, it’s somewhat more complicated. It means instead that the surface bits describe certain correlations between the pieces of volume. So if you think again of the particles in the bag, these will not move entirely independently. And that’s what is called the holographic principle, that really you can encode the events inside any volume on the surface of the volume, at the same resolution. But, you may say, how come we never notice that particles in a bag are somehow constrained in their freedom?
Good question. The reason is that the stuff that we deal with in every-day life, say, that bag of particles, doesn’t remotely make use of the theoretically available degrees of freedom. Our present observations only test situations well below the limit that the holographic principle says should exist. The limit from the holographic principle really only matters if the degrees of freedom are strongly compressed, as is the case, for example, for stuff that collapses to a black hole. Indeed, the physics of black holes is one of the most important clues that physicists have for the holographic principle. That’s because we know that black holes have an entropy that is proportional to the area of the black hole horizon, not to its volume. That’s the important part: black hole entropy is proportional to the area, not to the volume. Now, in thermodynamics entropy counts the number of different microscopic configurations that have the same macroscopic appearance. So, the entropy basically counts how much information you could stuff into a macroscopic thing if you kept track of the microscopic details. Therefore, the area-scaling of the black hole entropy tells you that the information content of black holes is bounded by a quantity which is proportional to the horizon area. This relation is the origin of the holographic principle. The other important clue for the holographic principle comes from string theory. That’s because string theorists like to apply their mathematical methods in a space-time with a negative cosmological constant, which is called an Anti-de Sitter space. Most of them believe, though it has strictly speaking never been proved, that gravity in an Anti-de Sitter space can be described by a different theory that is entirely located on the boundary of that space. And while this idea came from string theory, one does not actually need the strings for this relation between the volume and the surface to work. More concretely, it uses a limit in which the effects of the strings no longer appear. So the holographic principle seems to be more general than string theory. I have to add though that we do not live in an Anti-de Sitter space because, for all we currently know, the cosmological constant in our universe is positive. Therefore it’s unclear how much the volume-surface relation in Anti-de Sitter space tells us about the real world. And as far as the black hole entropy is concerned, the mathematics we currently have does not actually tell us that it counts the information that one can stuff into a black hole. It may instead only count the information that one loses by disconnecting the inside and outside of the black hole. This is called the “entanglement entropy”. It scales with the surface for many systems other than black holes and there is nothing particularly holographic about it. Whether or not you buy the motivations for the holographic principle, you may want to know whether we can test it. The answer is definitely maybe. Earlier this year, Erik Verlinde and Kathryn Zurek proposed that we try to test the holographic principle using gravitational wave interferometers. The idea is that if the universe is holographic, then the fluctuations in the two orthogonal directions that the interferometer arms extend into would be more strongly correlated than one normally expects. However, not everyone agrees that the particular realization of holography which Verlinde and Zurek use is the correct one.
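To make the cube-counting from earlier in the post concrete, here is the arithmetic as a few lines of Python (pure counting, no physics input):

# Volume bits vs. surface bits for a cube divided into N pieces per side.
for N in (3, 6, 10, 100, 1000):
    volume_bits = N**3           # one bit per small cube
    surface_bits = 6 * N**2      # one bit per surface element, same resolution
    print(N, volume_bits, surface_bits)
# For N > 6 the volume count N^3 exceeds the surface count 6*N^2, and the
# ratio N/6 grows without bound; the holographic principle asserts that the
# surface bits nevertheless suffice to describe what happens in the volume.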
Personally I think that the motivations for the holographic principle are not particularly strong and in any case we'll not be able to test this hypothesis in the coming centuries. Therefore writing papers about it is a waste of time. But it's an interesting idea and at least you now know what physicists are talking about when they say the universe is a hologram.

Tuesday, September 10, 2019

Book Review: "Something Deeply Hidden" by Sean Carroll

Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime
Sean Carroll
Dutton, September 10, 2019

Of all the weird ideas that quantum mechanics has to offer, the existence of parallel universes is the weirdest. But with his new book, Sean Carroll wants to convince you that it isn't weird at all. Instead, he argues, if we only take quantum mechanics seriously enough, then "many worlds" are the logical consequence. Most remarkably, the many worlds interpretation implies that in every instance you split into many separate you's, all of which go on to live their own lives. It takes something to convince yourself that this is reality, but if you want to be convinced, Carroll's book is a good starting point.

"Something Deeply Hidden" is an enjoyable and easy-to-follow introduction to quantum mechanics that will answer your most pressing questions about many worlds, such as how worlds split, what happens with energy conservation, or whether you should worry about the moral standards of all your copies. The book is also notable for what it does not contain. Carroll avoids going through all the different interpretations of quantum mechanics in detail, and only provides short summaries. Instead, the second half of the book is dedicated to his own recent work, which is about constructing space from quantum entanglement. I do find this a promising line of research and he presents it well. I was somewhat perplexed that Carroll does not mention what I think are the two biggest objections to the many worlds interpretation, but I will write about this in a separate post.

Like Carroll's previous books, this one is engaging, well-written, and clearly argued. I can unhesitatingly recommend it to anyone who is interested in the foundations of physics.

[Disclaimer: Free review copy]

Sunday, September 08, 2019

Away Note

I'm attending a conference in Oxford the coming week, so there won't be much happening on this blog. Also, please be warned that comments may be stuck in the moderation queue longer than usual.

Friday, September 06, 2019

The five most promising ways to quantize gravity

Today, I want to tell you what ideas physicists have come up with to quantize gravity. But before I get to that, I want to tell you why it matters. That we do not have a theory of quantum gravity is currently one of the biggest unsolved problems in the foundations of physics. A lot of people, including many of my colleagues, seem to think that a theory of quantum gravity will remain an academic curiosity without practical relevance. I think they are wrong. That's because whatever solves this problem will tell us something about quantum theory, and that's the theory on which all modern electronic devices run, like the ones on which you are watching this video. Maybe it will take 100 years for quantum gravity to find a practical application, or maybe it will even take a thousand years. But I am sure that understanding nature better will not forever remain a merely academic speculation. Before I go on, I want to be clear that quantizing gravity by itself is not the problem.
We can, and have, quantized gravity the same way that we quantize the other interactions. The problem is that the theory which one gets this way breaks down at high energies, and therefore it cannot be how nature works, fundamentally. This naïve quantization is called "perturbatively quantized gravity" and it was worked out in the 1960s by Feynman and DeWitt and some others. Perturbatively quantized gravity is today widely believed to be an approximation to whatever is the correct theory. So really the problem is not just to quantize gravity per se; you want to quantize it and get a theory that does not break down at high energies. Because energies are proportional to frequencies, physicists like to refer to high energies as "the ultraviolet" or just "the UV". Therefore, the theory of quantum gravity that we look for is said to be "UV complete". Now, let me go through the five most popular approaches to quantum gravity.

1. String Theory

The most widely known and still the most popular attempt to get a UV-complete theory of quantum gravity is string theory. The idea of string theory is that instead of talking about particles and quantizing them, you take strings and quantize those. Amazingly enough, this automatically has the consequence that the strings exchange a force which has the same properties as the gravitational force. This was discovered in the 1970s and at the time it got physicists very excited. However, in the past decades several problems have appeared in string theory that were patched, which has made the theory increasingly contrived. You can hear all about this in my earlier video. It has never been proved that string theory is indeed UV-complete.

2. Loop Quantum Gravity

Loop Quantum Gravity is often named as the biggest competitor of string theory, but this comparison is somewhat misleading. String theory is not just a theory for quantum gravity; it is also supposed to unify the other interactions. Loop Quantum Gravity, on the other hand, is only about quantizing gravity. It works by discretizing space in terms of a network, and then using integrals around small loops to describe the space, hence the name. In this network, the nodes represent volumes and the links between nodes the areas of the surfaces where the volumes meet. Loop Quantum Gravity is about as old as string theory. It solves the problem of combining general relativity and quantum mechanics into one consistent theory, but it has remained unclear just exactly how one recovers general relativity in this approach.

3. Asymptotically Safe Gravity

Asymptotic Safety is an idea that goes back to a 1976 paper by Steven Weinberg. It says that a theory which seems to have problems at high energies when quantized naively may not have a problem after all; it's just that it's more complicated to find out what happens at high energies than it seems. Asymptotically Safe Gravity applies the idea of asymptotic safety to gravity in particular. This approach also solves the problem of quantum gravity. Its major problem is currently that it has not been proved that the theory which one gets this way at high energies still makes sense as a quantum theory.

4. Causal Dynamical Triangulation

The problem with quantizing gravity comes from infinities that appear when particles interact at very short distances. This is why most approaches to quantum gravity rely on removing the short distances by using objects of finite extension. Loop Quantum Gravity works this way, and so does String Theory.
Causal Dynamical Triangulation also relies on removing short distances. It does so by approximating a curved space with triangles, or their higher-dimensional counterparts. In contrast to the other approaches, though, where the finite extension is a postulated, new property of the underlying true nature of space, in Causal Dynamical Triangulation the finite size of the triangles is a mathematical aid, and one eventually takes the limit where this size goes to zero. The major reason why many people have remained unconvinced of Causal Dynamical Triangulation is that it treats space and time differently, which Einstein taught us not to do.

5. Emergent Gravity

Emergent gravity is not one specific theory, but a class of approaches. These approaches have in common that gravity derives from the collective behavior of a large number of constituents, much like the laws of thermodynamics do. And much like for thermodynamics, in emergent gravity one does not actually need to know all that much about the exact properties of these constituents to get the dynamical law. If you think that gravity is really emergent, then quantizing gravity does not make sense. Because, if you think of the analogy to thermodynamics, you also do not obtain a theory for the structure of atoms by quantizing the equations for gases. Therefore, in emergent gravity one does not quantize gravity. One instead removes the inconsistency between gravity and quantum mechanics by saying that quantizing gravity is not the right thing to do.

Which one of these theories is the right one? No one knows. The problem is that it's really, really hard to find experimental evidence for quantum gravity. But that it's hard doesn't mean impossible. I will tell you some other time how we might be able to experimentally test quantum gravity after all. So, stay tuned.

Wednesday, September 04, 2019

What's up with LIGO?

The Nobel-Prize winning figure. We don't know exactly what it shows. [Image Credits: LIGO]

Almost four years ago, on September 14, 2015, the LIGO collaboration detected gravitational waves for the first time. In 2017, this achievement was awarded the Nobel Prize. Also in that year, the two LIGO interferometers were joined by VIRGO. Since then, a total of three detectors have been on the lookout for space-time's subtle motions. By now, the LIGO/VIRGO collaboration has reported dozens of gravitational wave events: black hole mergers (like the first), neutron star mergers, and black hole-neutron star mergers.

But not everyone is convinced the signals are really what the collaboration claims they are. Already in 2017, a group of physicists around Andrew Jackson in Denmark reported difficulties when they tried to reproduce the signal reconstruction of the first event. In an interview dated November last year, Jackson maintained that the only signal they have been able to reproduce is the first. About the other supposed detections he said: "We can't see any of those events when we do a blind analysis of the data. Coming from Denmark, I am tempted to say it's a case of the emperor's new gravitational waves."

For most physicists, the GW170817 neutron-star merger – the strongest signal LIGO has seen so-far – erased any worries raised by the Danish group's claims. That's because this event came with an electromagnetic counterpart that was seen by multiple telescopes, which can demonstrate that LIGO indeed sees something of astrophysical origin and not terrestrial noise.
But, as critics have pointed out correctly, the LIGO alert for this event came 40 minutes after NASA's gamma-ray alert. For this reason, the event cannot be used as an independent confirmation of LIGO's detection capacity. Furthermore, the interpretation of this signal as a neutron-star merger has also been criticized. And this criticism has been criticized for yet other reasons.

It further fueled the critics' fire when Michael Brooks reported last year for New Scientist that, according to two members of the collaboration, the Nobel-prize winning figure of LIGO's seminal detection was "not found using analysis algorithms" but partly done "by eye" and "hand-tuned for pedagogical purposes." To this day, the journal that published the paper has refused to comment. The LIGO collaboration has remained silent on the matter, except for issuing a statement according to which they have "full confidence" in their published results (surprise), and that we are to await further details. Glaciers are now moving faster than this collaboration.

In April this year, LIGO started the third observation run (O3) after an upgrade that increased the detection sensitivity by about 40% over the previous run. Many physicists hoped the new observations would bring clarity with more neutron-star events that have electromagnetic counterparts, but that hasn't happened. Since April, the collaboration has issued 33 alerts for new events, but so-far no electromagnetic counterparts have been seen. You can check the complete list for yourself here. Nine of the 33 events have meanwhile been downgraded because they were identified as likely of terrestrial origin, and have been retracted.

The number of retractions is fairly high partly because the collaboration is still coming to grips with the upgraded detector. This is new scientific territory and the researchers themselves are still learning how to best analyze and interpret the data. A further difficulty is that the alerts must go out quickly in order for telescopes to be swung around and pointed at the right location in the sky. This does not leave much time for careful analysis.

With independent confirmation that LIGO sees events of astrophysical origin still lacking, critics are having a good time. In a recent article for the German online magazine Heise, Alexander Unzicker – author of a book called "The Higgs Fake" – contemplates whether the first event was a blind injection, ie, a fake signal. The three people on the blind injection team at the time say it wasn't them, but Unzicker argues that given our lack of knowledge about the collaboration's internal proceedings, there might well have been other people able to inject a signal. (You can find an English translation here.)

In the third observation run, the collaboration has so-far seen one high-significance binary neutron star candidate (S190425z). But the associated electromagnetic signal for this event has not been found. This may be for various reasons. For example, the analysis of the signal revealed that the event must have been far away, about 4 times farther than the 2017 neutron-star event. This means that any electromagnetic signal would have been fainter by a factor of about 16. In addition, the location in the sky was rather uncertain. So, the electromagnetic signal was plausibly hard to detect. More recently, on August 14th, the collaboration reported a neutron star-black hole merger. Again the electromagnetic counterpart is missing. In this case they were able to locate the origin to better precision.
But they still estimate the source is about 7 times farther away than the 2017 neutron-star event, meaning it would have been fainter by a factor of about 50. Still, it is somewhat perplexing the signal wasn't seen by any of the telescopes that looked for it. There may have been physical reasons at the source, such that the neutron star was swallowed in one bite, in which case there wouldn't be much emitted, or that the system was surrounded by dust, blocking the electromagnetic signal. A second neutron star-black hole merger on August 17 was retracted.

And then there are the "glitches". LIGO's "glitches" are detector events of unknown origin whose frequency spectrum does not look like the expected gravitational wave signals. I don't know exactly how many of those the detector suffers from, but the way they are numbered, by a date and two digits, indicates between 10 and 100 a day. LIGO uses a citizen science project, called "Gravity Spy", to identify glitches. There isn't one type of glitch, there are many different types of them, with names like "Koi fish," "whistle," or "blip." In the figures below you see a few examples.

Examples of LIGO's detector glitches. [Image Source]

This gives me some headaches, folks. If you do not know why your detector detects something that does not look like what you expect, how can you trust it in the cases where it does see what you expect? Here is what Andrew Jackson had to say on the matter:

Jackson: "The thing you can conclude if you use a template analysis is [...] that the results are consistent with a black hole merger. But in order to make the stronger statement that it really and truly is a black hole merger you have to rule out anything else that it could be. And the characteristic signal here is actually pretty generic. What do they find? They find something where the amplitude increases, where the frequency increases, and then everything dies down eventually. And that describes just about every catastrophic event you can imagine. You see, increasing amplitude, increasing frequency, and then it settles into some new state. So they really were obliged to rule out every terrestrial effect, including seismic effects, and the fact that there was an enormous lightning strike in Burkina Faso at exactly the same time [...]"

Interviewer: "Do you think that they failed to rule out all these other possibilities?"

Jackson: "Yes…"

If what Jackson said were correct, this would be highly problematic indeed. But I have not been able to think of any other event that looks remotely like a gravitational wave signal, even leaving aside the detector correlations. Unlike what Jackson states, a typical catastrophic event does not have a frequency increase followed by a ring-down and sudden near-silence. Think of an earthquake, for example. For the most part, earthquakes happen when stresses exceed a critical threshold. The signals don't have a frequency build-up, and after the quake there's a lot of rumbling, often followed by smaller quakes. Just look at the figure below, which shows the surface movement of a typical seismic event.

Example of a typical earthquake signal. [Image Source]

It looks nothing like a gravitational wave signal. For this reason, I don't share Jackson's doubts over the origin of the signals that LIGO detects. However, the question whether there are any events of terrestrial origin with similar frequency characteristics arguably requires consideration beyond Sabine scratching her head for half an hour.
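If you want to see the qualitative difference for yourself, here is a toy sketch of the two signal shapes; all waveform parameters are invented for illustration, and this is in no way a model of real detector data:

```python
# Toy comparison: a merger-like "chirp" (amplitude and frequency rise,
# then a fast ring-down) versus a quake-like signal (sudden onset,
# constant pitch, slow rumbling decay).
import numpy as np

t = np.linspace(0.0, 1.0, 4000)
t_m = 0.7  # "merger" time of the chirp

# Chirp phase from an upward-sweeping frequency f(t) = 30 + 270*(t/t_m)^2.
phase = 2 * np.pi * (30 * t + 90 * t**3 / t_m**2)
chirp = np.where(t < t_m,
                 (t / t_m)**2 * np.sin(phase),
                 np.exp(-(t - t_m) / 0.02) * np.sin(2 * np.pi * 300 * t))

# Quake-like: abrupt onset at t_0, roughly constant frequency, slow decay.
t_0 = 0.2
quake = np.where(t < t_0, 0.0,
                 np.exp(-(t - t_0) / 0.3) * np.sin(2 * np.pi * 50 * t))

print("chirp peaks at t =", t[np.argmax(np.abs(chirp))])
print("quake peaks at t =", t[np.argmax(np.abs(quake))])
```

Plot the two arrays and the difference is obvious: the chirp rises in both pitch and amplitude and then goes nearly silent, while the quake-like signal starts loud and just rumbles away at a constant pitch.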
So, even though I do not have the same concerns as were raised by the LIGO critics, I must say that I do find it peculiar indeed that there is so little discussion about this issue. A Nobel Prize was handed out, and yet we still do not have confirmation that LIGO's signals are not of terrestrial origin. In which other discipline is it considered good scientific practice to discard unwelcome yet not understood data, like LIGO does with the glitches? Why do we still not know just exactly what was shown in the figure of the first paper? Where are the electromagnetic counterparts?

LIGO's third observing run will continue until March 2020. It presently doesn't look like it will bring the awaited clarity. I certainly hope that the collaboration will make somewhat more of an effort to erase the doubts that still linger around their supposed detections.

Wednesday, August 28, 2019

Solutions to the black hole information paradox

In the early 1970s, Stephen Hawking discovered that black holes can emit radiation. This radiation allows black holes to lose mass and, eventually, to entirely evaporate. This process seems to destroy all the information that is contained in the black hole and therefore contradicts what we know about the laws of nature. This contradiction is what we call the black hole information paradox. After discovering this problem, Hawking spent the rest of his life trying to solve it. He passed away last year, but the problem is still alive and there is no resolution in sight.

Today, I want to tell you what solutions physicists have so-far proposed for the black hole information loss problem. If you want to know more about just what exactly the problem is, please read my previous blogpost. There are hundreds of proposed solutions to the information loss problem, far too many for me to list here. But I want to tell you about the five most plausible ones.

1. Remnants.

The calculation that Hawking did to obtain the properties of the black hole radiation makes use of general relativity. But we know that general relativity is only approximately correct. It eventually has to be replaced by a more fundamental theory, which is quantum gravity. The effects of quantum gravity are not relevant near the horizon of large black holes, which is why the approximation that Hawking made is good. But it breaks down eventually, when the black hole has shrunk to a very small size. Then the space-time curvature at the horizon becomes very strong and quantum gravity must be taken into account. Now, if quantum gravity becomes important, we really do not know what will happen because we don't have a theory for quantum gravity. In particular, we have no reason to think that the black hole will entirely evaporate to begin with. This opens the possibility that a small remainder is left behind which just sits there forever. Such a black hole remnant could keep all the information about what formed the black hole, and no contradiction results.

2. Information comes out very late.

Instead of just ceasing to evaporate when quantum gravity becomes relevant, the black hole could also start to leak information in that final phase. Some estimates indicate that this leakage would take a very long time, which is why this solution is also known as a "quasi-stable remnant". However, it is not entirely clear just how long it would take. After all, we don't have a theory of quantum gravity. This second option removes the contradiction for the same reason as the first.

3. Information comes out early.
The first two scenarios are very conservative in that they postulate new effects will appear only when we know that our theories break down. A more speculative idea is that quantum gravity plays a much larger role near the horizon and the radiation carries information all along; it's just that Hawking's calculation doesn't capture it. Many physicists prefer this solution over the first two for the following reason. Black holes do not only have a temperature, they also have an entropy, called the Bekenstein-Hawking entropy. This entropy is proportional to the area of the black hole. It is often interpreted as counting the number of possible states that the black hole geometry can have in a theory of quantum gravity. If that is so, then the entropy must shrink when the black hole shrinks, and this is not the case for the remnant and the quasi-stable remnant. So, if you want to interpret the black hole entropy in terms of microscopic states, then the information must begin to come out early, when the black hole is still large. This solution is supported by the idea that we live in a holographic universe, which is currently popular, especially among string theorists.

4. Information is just lost.

Black hole evaporation, it seems, is irreversible, and that irreversibility is inconsistent with the dynamical law of quantum theory. But quantum theory does have its own irreversible process, which is the measurement. So, some physicists argue that we should just accept black hole evaporation is irreversible and destroys information, not unlike quantum measurements do. This option is not particularly popular because it is hard to include an additional irreversible process into quantum theory without spoiling conservation laws.

5. Black holes don't exist.

Finally, some physicists have tried to argue that black holes are never created in the first place, in which case no information can get lost in them. To make this work, one has to find a way to prevent a distribution of matter from collapsing to a size that is below its Schwarzschild radius. But since the formation of a black hole horizon can happen at arbitrarily small matter densities, this requires that one invents some new physics which violates the equivalence principle, and that is the key principle underlying Einstein's theory of general relativity. This option is a logical possibility, but for most physicists it's asking for too much.

Personally, I think that several of the proposed solutions are consistent; that includes options 1-3 above, and other proposals such as those by Horowitz and Maldacena, 't Hooft, or Maudlin. This means that this is a problem which just cannot be solved by relying on mathematics alone. Unfortunately, we cannot experimentally test what is happening when black holes evaporate because the temperature of the radiation is much, much too small to be measurable for the astrophysical black holes we know of. And so, I suspect we will be arguing about this for a long, long time.

Friday, August 23, 2019

How do black holes destroy information and why is that a problem?

Today I want to pick up a question that many of you asked, which is how black holes destroy information and why that is a problem. I will not explain here what a black hole is or how we know that black holes exist; for this you can watch my earlier video. Let me instead get right to black hole information loss. To understand the problem, you first need to know the mathematics that we use for our theories in physics. These theories all have two ingredients.
First, there is something called the "state" of the system, which is a complete description of whatever you want to make a prediction for. In a classical theory (that's one which is not quantized), the state would be, for example, the positions and velocities of particles. To describe the state in a quantum theory, you would instead take the wave-functions.

The second ingredient to the current theories is a dynamical law, which is also often called an "evolution equation". This has nothing to do with Darwinian evolution. Evolution here just means this is an equation which tells you how the state changes from one moment of time to the next. So, if I give you a state at any one time, you can use the evolution equation to compute the state at any other time.

The important thing is that all evolution equations that we know of are time-reversible. This means it never happens that two states that differ at an initial time become identical states at a later time. If that were so, then at the later time you wouldn't know where you started from, and the evolution would not be reversible.

A confusion that I frequently encounter is that between time-reversibility and time-reversal invariance. These are not the same. Time-reversible just means you can run a process backwards. Time-reversal invariance, on the other hand, means it will look the same if you run it backwards. In the following, I am talking about time-reversibility, not time-reversal invariance.

Now, all fundamental evolution equations in physics are time-reversible. But this time-reversibility is in many cases entirely theoretical because of entropy increase. If the entropy of a system increases, this means that if you wanted to reverse the time-evolution you would have to arrange the initial state very, very precisely, more precisely than is humanly possible. Therefore, many processes which are time-reversible in principle are for all practical purposes irreversible. Think of mixing dough. You'll never be able to unmix it in practice. But if only you could arrange precisely enough the position of each single atom, you could very well unmix the dough. The same goes for burning a piece of paper. Irreversible in practice. But in principle, if you only knew precisely enough the details of the smoke and the ashes, you could reverse it.

The evolution equation of quantum mechanics is called the Schrödinger equation, and it is just as time-reversible as the evolution equation of classical physics. Quantum mechanics, however, has an additional equation which describes the measurement process, and this equation is not time-reversible. The reason it's not time-reversible is that you can have different states that, when measured, give you the same measurement outcome. So, if you only know the outcome of the measurement, you cannot tell what was the original state.

Let us come to black holes then. The defining property of a black hole is the horizon, which is a one-way surface. You can only get in, but never get out of a black hole. The horizon does not have substance; it's really just the name for a location in space. Other than that, it's vacuum. But quantum theory tells us that vacuum is not nothing. It is full of particle-antiparticle pairs that are constantly created and destroyed. And in general relativity, the notion of a particle itself depends on the observer, much like the passage of time does. For this reason, what looks like vacuum close to the horizon does not look like vacuum far away from the horizon.
Which is just another way of saying that black holes emit radiation. This effect was first derived by Stephen Hawking in the 1970s and the radiation is therefore called Hawking radiation. It's really important to keep in mind that you get this result by using just the normal quantum theory of matter in the curved space-time of a black hole. You do not need a theory of quantum gravity to derive that black holes radiate.

For our purposes, the relevant property of the radiation is that it is completely thermal. It is entirely determined by the total mass, charge, and spin of the black hole. Besides that, it's random. Now, what happens when the black hole radiates is that it loses mass and shrinks. It shrinks until it's entirely gone and the radiation is the only thing that is left. But if you only have the radiation, then all you know is the mass, charge, and spin of the black hole. You have no idea what formed the black hole originally or what fell in later. Therefore, black hole evaporation is irreversible because many different initial states will result in the same final state. And this is before you have even made a measurement on the radiation. Such an irreversible process does not fit together with any of the known evolution laws – and that's the problem. If you combine gravity with quantum theory, it seems, you get a result that's inconsistent with quantum theory.

As you have probably noticed, I didn't say anything about information. That's because really the reference to information in "black hole information loss" is entirely unnecessary and just causes confusion. The problem of black hole "information loss" really has nothing to do with just exactly what you mean by information. It's just a term that, loosely speaking, says you can't tell from the final state what was the exact initial state. There have been many, many attempts to solve this problem. Literally thousands of papers have been written about this. I will tell you about the most promising solutions some other time, so stay tuned.

Thursday, August 22, 2019

You will probably not understand this

Hieroglyphs. [Image: Wikipedia Commons]

Two years ago, I gave a talk at the University of Toronto, at the institute for the history and philosophy of science. At the time, I didn't think much about it. But in hindsight, it changed my life, at least my work-life. I spoke about the topic of my first book. It's a talk I have given dozens of times, and though I adapted my slides for the Toronto audience, there was nothing remarkable about it. The oddity was the format of the talk. I would speak for half an hour. After this, someone else would summarize the topic for 15 minutes. Then there would be 15 minutes of discussion. Fine, I said, sounds like fun.

A few weeks before my visit, I was contacted by a postdoc who said he'd be doing the summary. He asked for my slides, and further reading material, and if there was anything else he should know. I sent him references. But when his turn came to speak, he did not, as I expected, summarize the argument I had delivered. Instead he reported what he had dug up about my philosophy of science, my attitude towards metaphysics, realism, and what I might mean by "explanation" or "theory" and other philosophically loaded words. He got it largely right, though I cannot today recall the details. I only recall I didn't have much to say about what struck me as a peculiar exercise, dedicated not to understanding my research, but to understanding me.
It was awkward, too, because I have always disliked philosophers' dissection of scientists' lives. Their obsessive analyses of who Schrödinger, Einstein, or Bohr talked to when, about what, in which period of what marriage, never made a lot of sense to me. It reeked too much of hero-worship, looked too much like post-mortem psychoanalysis, about as helpful for understanding Einstein's work as cutting his brain into slices.

In the months that followed the Toronto talk, though, I began reading my own blogposts with that postdoc's interpretation in mind. And I realized that in many cases it was essential information to understand what I was trying to get across. In the past year, I have therefore made more of an effort to repeat background, or at least link to previous pieces, to provide that necessary context. Context which (of course!) I thought was obvious. Because certainly we all agree what a theory is. Right?

But having written a public weblog for more than 12 years makes me a comparatively simple subject of study. I have, over the years, provided explanations for just exactly what I mean when I say "scientific method" or "true" or "real". So at least you could find out if only you wanted to. Not that I expect anyone who comes here for a 1,000 word essay to study an 800,000 word archive. Still, at least that archive exists.

The same, however, isn't the case for most scientists. I was reminded of this at a recent workshop where I spoke with another woman about her attempts to make sense of one of her senior colleague's papers. I don't want to name names, but it's someone whose research you'll be familiar with if you follow the popular science media. His papers are chronically hard to understand. And I know it isn't just me who struggles, because I have heard a lot of people in the field make dismissive comments about his work. On the occasion the woman told me about, apparently he got frustrated with his own inability to explain himself, resulting in rather aggressive responses to her questions.

He's not the only one frustrated. I could tell you many stories of renowned physicists who told me, or wrote to me, about their struggles to get people to listen to them. Being white and male, it seems, doesn't help. Neither do titles, honors, or award-winning popular science books. And if you look at the ideas they are trying to get across, there's a pattern. These are people who have – in some cases over decades – built their own theoretical frameworks, developed personal philosophies of science, invented their own, idiosyncratic ways of expressing themselves. Along the way, they have become incomprehensible to anyone else. But they didn't notice. Typically, they have written multiple papers circling around a key insight which they never quite manage to bring into focus. They're constantly trying and constantly failing. And while they usually have done parts of their work with other people, the co-authors are clearly side-characters in a lone-fighter story.

So they have their potentially brilliant insights out there, for anyone to see. And yet, no one has the patience to look at their life's work. No one makes an effort to decipher their code. In brief, no one understands them. Of course they're frustrated. Just as frustrated as I am that no one understands me. Not even the people who agree with me. Especially not those, actually. It's so frustrating.

The issue, I think, is symptomatic of our times, not only in science, but in society at large. Look at any social media site.
You will see people going to great lengths explaining themselves, just to end up frustrated and, not infrequently, aggressive. They are aggressive because no one listens to what they are trying so hard to say. Indeed, all too often, no one even tries. Why bother if misunderstanding is such an easy win? If you cannot explain yourself, that's your fault. If you do not understand me, that's also your fault.

And so, what I took away from my Toronto talk is that communication is much more difficult than we usually acknowledge. It takes a lot of patience, both from the sender and the receiver, to accurately decode a message. You need all that context to make sense of someone else's ideas. I now see why philosophers spend so much time dissecting the lives of other people. And instead of talking so much, I have come to think, I should listen a little more. Who knows, I might finally understand something.

Saturday, August 17, 2019

How we know that Einstein's General Relativity cannot be quite right

Today I want to explain how we know that the way Einstein thought about gravity cannot be quite correct. Einstein's idea was that gravity is not a force, but really an effect caused by the curvature of space and time. Matter curves space-time in its vicinity, and this curvature in turn affects how matter moves. This means that, according to Einstein, space and time are responsive. They deform in the presence of matter, and not only matter but really all types of energy, including pressure and momentum flux and so on.

Einstein called his theory "General Relativity" because it's a generalization of Special Relativity. Both are based on "observer-independence", that is, the idea that the laws of nature should not depend on the motion of an observer. The difference between General Relativity and Special Relativity is that in Special Relativity space-time is flat, like a sheet of paper, while in General Relativity it can be curved, like the oft-invoked rubber sheet.

General Relativity is an extremely well-confirmed theory. It predicts that light rays bend around massive objects, like the sun, which we have observed. The same effect also gives rise to gravitational lensing, which we have also observed. General Relativity further predicts that the universe should expand, which it does. It predicts that time runs more slowly in gravitational potentials, which is correct. General Relativity predicts black holes, and it predicts just how the black hole shadow looks, which is what we have observed. It also predicts gravitational waves, which we have observed. And the list goes on. So, there is no doubt that General Relativity works extremely well.

But we already know that it cannot ultimately be the correct theory for space and time. It is an approximation that works in many circumstances, but fails in others. We know this because General Relativity does not fit together with another extremely well-confirmed theory, namely quantum mechanics. It's one of these problems that's easy to explain but extremely difficult to solve.

Here is what goes wrong if you want to combine gravity and quantum mechanics. We know experimentally that particles have some strange quantum properties. They obey the uncertainty principle and they can do things like being in two places at once. Concretely, think about an electron going through a double slit. Quantum mechanics tells us that the particle goes through both slits. Now, electrons have a mass, and masses generate a gravitational pull by bending space-time.
This brings up the question: to which place does the gravitational pull go if the electron travels through both slits at the same time? You would expect the gravitational pull to also go to two places at the same time. But this cannot be the case in general relativity, because general relativity is not a quantum theory. To solve this problem, we have to understand the quantum properties of gravity. We need what physicists call a theory of quantum gravity. And since Einstein taught us that gravity is really about the curvature of space and time, what we need is a theory for the quantum properties of space and time.

There are two other reasons why we know that General Relativity can't be quite right. Besides the double-slit problem, there is the issue with singularities in General Relativity. Singularities are places where both the curvature and the energy-density of matter become infinitely large; at least that's what General Relativity predicts. This happens for example inside of black holes and at the beginning of the universe. In any other theory that we have, singularities are a sign that the theory breaks down and has to be replaced by a more fundamental theory. And we think the same has to be the case in General Relativity, where the more fundamental theory to replace it is quantum gravity.

The third reason we think gravity must be quantized is the trouble with information loss in black holes. If we combine quantum theory with general relativity but without quantizing gravity, then we find that black holes slowly shrink by emitting radiation. This was first derived by Stephen Hawking in the 1970s and so this black hole radiation is also called Hawking radiation. Now, it seems that black holes can entirely vanish by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information. So when a black hole is entirely gone and all you have left is the radiation, you do not know what formed the black hole. Such a process is fundamentally irreversible and therefore incompatible with quantum theory. It just does not fit together. A lot of physicists think that to solve this problem we need a theory of quantum gravity.

So this is how we know that General Relativity must be replaced by a theory of quantum gravity. This problem has been known since the 1930s. Since then, there have been many attempts to solve the problem. I will tell you about this some other time, so don't forget to subscribe.

Tuesday, August 13, 2019

The Problem with Quantum Measurements

Have you heard that particle physicists want a larger collider because there is supposedly something funny about the Higgs boson? They call it the "Hierarchy Problem": that there are roughly 17 orders of magnitude between the Planck mass, which determines the strength of gravity, and the mass of the Higgs boson. What is problematic about this, you ask? Nothing. Why do particle physicists think it's problematic? Because they have been told as students it's problematic. So now they want $20 billion to solve a problem that doesn't exist.

Let us then look at an actual problem, namely that we don't know how a measurement happens in quantum mechanics. The discussion of this problem today happens largely among philosophers; physicists pay pretty much no attention to it. Why not, you ask? Because they have been told as students that the problem doesn't exist. But there is a light at the end of the tunnel and the light is… you. Yes, you.
Because I know that you are just the right person to both understand and solve the measurement problem. So let's get you started.

Quantum mechanics is today mostly taught in what is known as the Copenhagen Interpretation and it works as follows. Particles are described by a mathematical object called the "wave-function," usually denoted Ψ ("Psi"). The wave-function is sometimes sharply peaked and looks much like a particle; sometimes it's spread out and looks more like a wave. Ψ is basically the embodiment of particle-wave duality.

The wave-function moves according to the Schrödinger equation. This equation is compatible with Einstein's Special Relativity and it can be run both forward and backward in time. If I give you complete information about a system at any one time – ie, if I tell you the "state" of the system – you can use the Schrödinger equation to calculate the state at all earlier and all later times. This makes the Schrödinger equation what we call a "deterministic" equation.

But the Schrödinger equation alone does not predict what we observe. If you use only the Schrödinger equation to calculate what happens when a particle interacts with a detector, you find that the two undergo a process called "decoherence." Decoherence wipes out quantum-typical behavior, like dead-and-alive cats and such. What you have left then is a probability distribution for a measurement outcome (what is known as a "mixed state"). You have, say, a 50% chance that the particle hits the left side of the screen. And this, importantly, is not a prediction for a collection of particles or repeated measurements. We are talking about one measurement on one particle.

The moment you measure the particle, however, you know with 100% probability what you have got; in our example you now know which side of the screen the particle is on. This sudden jump of the probability is often referred to as the "collapse" of the wave-function, and the Schrödinger equation does not predict it. The Copenhagen Interpretation, therefore, requires an additional assumption called the "Measurement Postulate." The Measurement Postulate tells you that the probability of whatever you have measured must be updated to 100%.

Now, the collapse together with the Schrödinger equation describes what we observe. But the detector is of course also made of particles and therefore itself obeys the Schrödinger equation. So if quantum mechanics is fundamental, we should be able to calculate what happens during measurement using the Schrödinger equation alone. We should not need a second postulate.

The measurement problem, then, is that the collapse of the wave-function is incompatible with the Schrödinger equation. It isn't merely that we do not know how to derive it from the Schrödinger equation; it's that it actually contradicts the Schrödinger equation. The easiest way to see this is to note that the Schrödinger equation is linear while the measurement process is non-linear. This strongly suggests that the measurement is an effective description of some underlying non-linear process, something we haven't yet figured out.

There is another problem. As an instantaneous process, wave-function collapse doesn't fit together with the speed of light limit in Special Relativity. This is the "spooky action" that irked Einstein so much about quantum mechanics. This incompatibility with Special Relativity, however, has (by assumption) no observable consequences, so you can try and convince yourself it's philosophically permissible (and good luck with that).
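To make the linear-versus-non-linear point concrete, here is a minimal toy sketch for a single qubit, with the Schrödinger evolution and the collapse step written out explicitly; the rotation angle and the random seed are arbitrary, and this only illustrates the textbook rules, not any proposed solution:

```python
# Linear, reversible unitary evolution vs the non-linear collapse update.
import numpy as np

rng = np.random.default_rng(42)

# State |psi> = a|0> + b|1>, prepared in an equal superposition.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# Schrödinger evolution step: a unitary (here a simple rotation).
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi = U @ psi
psi_undone = U.conj().T @ psi   # reversible: U^dagger undoes the step

# Measurement Postulate: outcome probabilities |a|^2, |b|^2, then collapse.
probs = np.abs(psi)**2
outcome = rng.choice([0, 1], p=probs)
psi_collapsed = np.zeros(2, dtype=complex)
psi_collapsed[outcome] = 1.0    # non-linear, irreversible jump to 100%

print("probabilities before measurement:", np.round(probs, 3))
print("outcome:", outcome, "state after collapse:", psi_collapsed)
```

The unitary step can always be undone by applying U†; the collapse step cannot, and no unitary reproduces it, which is the measurement problem in miniature.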
But the problem comes back to haunt you when you ask what happens with the mass (and energy) of a particle when its wave-function collapses. You'll notice then that the instantaneous jump screws up General Relativity. (And for this, quantum gravitational effects shouldn't play a role, so mumbling "string theory" doesn't help.) This issue is still unobservable in practice, all right, but now it's observable in principle.

One way to deal with the measurement problem is to argue that the wave-function does not describe a real object, but only encodes knowledge, and that probabilities should not be interpreted as frequencies of occurrence, but instead as statements of our confidence. This is what's known as a "Psi-epistemic" interpretation of quantum mechanics, as opposed to the "Psi-ontic" ones in which the wave-function is a real thing.

The trouble with Psi-epistemic interpretations is that the moment you refer to something like "knowledge" you have to tell me what you mean by "knowledge", who or what has this "knowledge," and how they obtain "knowledge." Personally, I would also really like to know what this knowledge is supposedly about, but if you insist I'll keep my mouth shut. Even so, for all we presently know, "knowledge" is not fundamental, but emergent. Referring to knowledge in the postulates of your theory, therefore, is incompatible with reductionism. This means if you like Psi-epistemic interpretations, you will have to tell me just why and when reductionism breaks down or, alternatively, tell me how to derive Psi from a more fundamental law.

None of the existing interpretations and modifications of quantum mechanics really solve the problem, which I can go through in detail some other time. For now let me just say that either way you turn the pieces, they won't fit together. So, forget about particle colliders; grab a pen and get started.

Note: If the comment count exceeds 200, you have to click on "Load More" at the bottom of the page to see recent comments. This is also why the link in the recent comment widget does not work. Please do not complain to me about this shitfuckery. Blogger is hosted by Google. Please direct complaints to their forum.

Saturday, August 10, 2019

Book Review: "The Secret Life of Science" by Jeremy Baumberg

Jeremy Baumberg
Princeton University Press (16 Mar. 2018)

On counting citations, he likewise remarks aptly.
Time in physics
(World Heritage Encyclopedia)

Contents:
• Markers of time
• The unit of measurement of time: the second
  • The state of the art in timekeeping
• Conceptions of time
  • Regularities in nature
    • Mechanical clocks
  • Galileo: the flow of time
  • Newton's physics: linear time
  • Thermodynamics and the paradox of irreversibility
  • Electromagnetism and the speed of light
  • Einstein's physics: spacetime
  • Time in quantum mechanics
• Dynamical systems
• Signalling
• Technology for timekeeping standards
• Time in cosmology
• Reprise
• See also
• References
• Further reading

Markers of time

Before there were clocks, time was measured by those physical processes[2] which were understandable to each epoch of civilization:[3]
• the first appearance (see: heliacal rising) of Sirius to mark the flooding of the Nile each year[3]
• the periodic succession of night and day, in seemingly eternal succession[4]
• the position on the horizon of the first appearance of the sun at dawn[5]
• the position of the sun in the sky[6]
• the marking of the moment of noontime during the day[7]
• the length of the shadow cast by a gnomon[8]

Eventually,[9][10] it became possible to characterize the passage of time with instrumentation, using operational definitions. Simultaneously, our conception of time has evolved, as shown below.[11]

The unit of measurement of time: the second

In the International System of Units (SI), the unit of time is the second (symbol: s). It is an SI base unit, and it is currently defined as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom."[12] This definition is based on the operation of a caesium atomic clock.

The state of the art in timekeeping

The UTC timestamp in use worldwide is an atomic time standard. The relative accuracy of such a time standard is currently on the order of 10⁻¹⁵,[13] corresponding to 1 second in approximately 30 million years. The smallest time step considered observable is called the Planck time, which is approximately 5.391×10⁻⁴⁴ seconds, many orders of magnitude below the resolution of current time standards.
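As a quick arithmetic check of the figures quoted above, the following short, illustrative Python sketch relates the 10⁻¹⁵ relative accuracy to the "1 second in approximately 30 million years" statement and compares the caesium period to the Planck time:

```python
# Arithmetic behind the timekeeping figures above.
cs_periods_per_second = 9_192_631_770      # definition of the SI second
seconds_per_year = 365.25 * 24 * 3600

rel_accuracy = 1e-15                       # relative accuracy of the standard
years_to_drift_one_second = (1 / rel_accuracy) / seconds_per_year
print(f"1e-15 accuracy ~ 1 s in {years_to_drift_one_second:.1e} years")

planck_time = 5.391e-44                    # seconds
cs_period = 1 / cs_periods_per_second
print(f"Planck times per caesium period: {cs_period / planck_time:.1e}")
```

The first number comes out to about 3×10⁷ years, matching the figure in the text; the second shows that the Planck time lies some 33 orders of magnitude below even a single period of the caesium transition.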
Conceptions of time

[Figure: The Andromeda galaxy (M31) is two million light-years away; we are thus viewing M31's light from two million years ago,[14] a time before humans existed on Earth.]

Galileo, Newton, and most people up until the 20th century thought that time was the same for everyone everywhere. This is the basis for timelines, where time is a parameter. Our modern conception of time is based on Einstein's theory of relativity, in which rates of time run differently depending on relative motion, and space and time are merged into spacetime, where we live on a world line rather than a timeline. In this view, time is part of a coordinate. Physicists believe the entire Universe, and therefore time itself,[15] began about 13.8 billion years ago in the big bang (see Time in cosmology below). Whether it will ever come to an end is an open question (see philosophy of physics).

Regularities in nature

Mechanical clocks

Galileo: the flow of time

Newton's physics: linear time

Lagrange (1736–1813) would aid in the formulation of a simpler version[23] of Newton's equations. He started with an energy term, L, named the Lagrangian in his honor, and formulated Lagrange's equations:

\frac{d}{dt} \frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta} = 0

The dotted quantity \dot{\theta} denotes a function which corresponds to a Newtonian fluxion, whereas the undotted quantity \theta denotes a function which corresponds to a Newtonian fluent. Linear time is the parameter for the relationship between the \dot{\theta} and the \theta of the physical system under consideration.

Some decades later, it was found that the second order equations of Lagrange or Newton can be more easily solved or visualized by a suitable transformation to sets of first order differential equations. Lagrange's equations can be transformed, under a Legendre transformation, to Hamilton's equations; the Hamiltonian formulation for the equations of motion of some conjugate variables p, q (for example, momentum p and position q) is:

\dot p = -\frac{\partial H}{\partial q} = \{p,H\} = -\{H,p\}
\dot q = \frac{\partial H}{\partial p} = \{q,H\} = -\{H,q\}

in the Poisson bracket notation, which clearly shows the dependence of the time variation of the conjugate variables p, q on an energy expression. This relationship, it was later found, also has corresponding forms in quantum mechanics as well as in the classical mechanics shown above. These relationships bespeak a conception of time which is reversible.

Thermodynamics and the paradox of irreversibility

In 1824 Sadi Carnot (1796–1832) scientifically analyzed steam engines with his Carnot cycle, an abstract engine. Rudolf Clausius (1822–1888) noted a measure of disorder, or entropy, which affects the continually decreasing amount of free energy available to a Carnot engine, as expressed in the second law of thermodynamics: entropy is maximum in an isolated thermodynamic system, and increases. In contrast, Erwin Schrödinger (1887–1961) pointed out that life depends on a "negative entropy flow".[25] Ilya Prigogine (1917–2003) stated that other thermodynamic systems which, like life, are also far from equilibrium can also exhibit stable spatio-temporal structures. Soon afterward, the Belousov-Zhabotinsky reactions[26] were reported, which demonstrate oscillating colors in a chemical solution.[27] These nonequilibrium thermodynamic branches reach a bifurcation point, which is unstable, and another thermodynamic branch becomes stable in its stead.[28]

Electromagnetism and the speed of light

In 1864, James Clerk Maxwell (1831–1879) presented a combined theory of electricity and magnetism. He combined all the laws then known relating to those two phenomena into four equations. These vector calculus equations, which use the del operator (\nabla), are known as Maxwell's equations for electromagnetism. In free space (that is, space not containing electric charges), the equations take the form:

\nabla \cdot \mathbf{E} = 0
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} = \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}

where c = 1/\sqrt{\varepsilon_0 \mu_0} is the speed of light in free space, 299 792 458 m/s; E is the electric field; B is the magnetic field.

The Michelson-Morley experiment failed to detect any difference in the relative speed of light due to the motion of the Earth relative to the luminiferous aether, suggesting that Maxwell's equations did, in fact, hold in all frames.
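The relation c = 1/\sqrt{\varepsilon_0 \mu_0} quoted above can be checked numerically; the following short sketch uses the standard values of the vacuum permittivity and permeability:

```python
# Numerical check of c = 1/sqrt(eps0 * mu0).
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0  = 1.25663706212e-6   # vacuum permeability, H/m

c = 1.0 / math.sqrt(eps0 * mu0)
print(f"c = {c:,.0f} m/s")   # ~299,792,458 m/s
```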
Hendrik Lorentz (1853–1928) discovered the Lorentz transformations, which leave Maxwell's equations unchanged, allowing Michelson and Morley's negative result to be explained. Henri Poincaré (1854–1912) noted the importance of Lorentz' transformation and popularized it. In particular, the railroad car description can be found in Science and Hypothesis,[30] which was published before Einstein's articles of 1905.

Einstein's physics: spacetime

Main articles: special relativity (1905), general relativity (1915).

In his 1905 paper, Einstein defined the synchronization of two clocks A and B by the condition that the time light needs to travel from A to B equals the time it needs for the return trip:

t_\text{B} - t_\text{A} = t'_\text{A} - t_\text{B}
—Albert Einstein, "On the Electrodynamics of Moving Bodies"[31]

Velocity is defined as

\mathbf{v} = \frac{d\mathbf{r}}{dt},

where r is position and t is time. The Lorentz transformation between two frames in relative motion with speed v along the x-axis reads:

\begin{cases} t' = \gamma(t - vx/c^2), \text{ where } \gamma = 1/\sqrt{1-v^2/c^2} \\ x' = \gamma(x - vt) \\ y' = y \\ z' = z \end{cases}

More specifically, the Lorentz transformation is a hyperbolic rotation

\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} \cosh \phi & - \sinh \phi \\ - \sinh \phi & \cosh \phi \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}, \text{ where } \phi = \operatorname{artanh}\,\frac{v}{c},

which is a change of coordinates in the four-dimensional Minkowski space, a dimension of which is ct. (In Euclidean space an ordinary rotation

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}

is the corresponding change of coordinates.) The speed of light c can be seen as just a conversion factor needed because we measure the dimensions of spacetime in different units; since the metre is currently defined in terms of the second, it has the exact value of 299 792 458 m/s. We would need a similar factor in Euclidean space if, for example, we measured width in nautical miles and depth in feet. In physics, sometimes units of measurement in which c = 1 are used to simplify equations.

For a clock moving at speed v, the time interval \Delta t' between two ticks as seen from the rest frame is dilated relative to the proper interval \Delta t:

\Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}},

where c is the speed of light. For gravity, the corresponding expression in the Schwarzschild geometry is

T = \frac{dt}{\sqrt{\left( 1 - \frac{2GM}{rc^2} \right) dt^2 - \frac{1}{c^2}\left( 1 - \frac{2GM}{rc^2} \right)^{-1} dr^2 - \frac{r^2}{c^2} d\theta^2 - \frac{r^2}{c^2} \sin^2 \theta \, d\phi^2}},

where:
• T is the gravitational time dilation of an object at a distance r,
• dt is the change in coordinate time, or the interval of coordinate time,
• G is the gravitational constant,
• M is the mass generating the field.

Or one could use the following simpler approximation:

\frac{dt}{d\tau} = \frac{1}{\sqrt{1 - \frac{2GM}{rc^2}}}.

The stronger the gravitational field (and hence the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and delays in signal travel time near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.

Time in quantum mechanics

There is a time parameter in the equations of quantum mechanics. The Schrödinger equation[33] is

i\hbar \frac{\partial}{\partial t} | \psi(t) \rangle = H | \psi(t) \rangle.

One solution can be

| \psi_e(t) \rangle = e^{-iHt / \hbar} | \psi_e(0) \rangle,

where e^{-iHt / \hbar} is called the time evolution operator and H is the Hamiltonian. The Heisenberg equation of motion for an observable A is

\frac{d}{dt}A = (i\hbar)^{-1}[A,H] + \left(\frac{\partial A}{\partial t}\right)_\mathrm{classical}.
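The time evolution operator written above can also be illustrated numerically; the following sketch uses an arbitrary two-level Hamiltonian (with ħ set to 1) and shows that the evolution is unitary and reversible:

```python
# |psi(t)> = exp(-iHt/hbar) |psi(0)> for a toy two-level system.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])             # a Hermitian toy Hamiltonian

psi0 = np.array([1.0, 0.0], dtype=complex)
t = 2.0
U = expm(-1j * H * t / hbar)            # time evolution operator
psi_t = U @ psi0

print("norm preserved:", np.vdot(psi_t, psi_t).real)                  # -> 1.0
print("evolved back:", np.round(expm(1j * H * t / hbar) @ psi_t, 6))  # -> psi0
```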
The theory also contains an uncertainty relation involving time,

\Delta E \, \Delta T \ge \frac{\hbar}{2},

where
• \Delta E is the uncertainty in energy,
• \Delta T is the uncertainty in time,
• \hbar is the reduced Planck constant.

The more precisely one measures the duration of a sequence of events, the less precisely one can measure the energy associated with that sequence, and vice versa. This relation is different from the standard uncertainty principle, because time is not an operator in quantum mechanics.

Dynamical systems

See dynamical systems and chaos theory, and dissipative structures.

Figure: evolution of a world line of an accelerated massive particle. The worldline is restricted to the timelike top and bottom sections of the spacetime diagram and cannot cross the top (future) or the bottom (past) light cone; the left and right sections, outside the light cones, are spacelike.

Technology for timekeeping standards

The primary time standard in the U.S. is currently NIST-F1, a laser-cooled Cs fountain,[35] the latest in a series of time and frequency standards, from the ammonia-based atomic clock (1949) to the caesium-based NBS-1 (1952) to NIST-7 (1993). The respective clock uncertainty declined from 10,000 nanoseconds per day to 0.5 nanoseconds per day in five decades.[36] In 2001 the clock uncertainty for NIST-F1 was 0.1 nanoseconds per day. Development of increasingly accurate frequency standards is underway.

Time in cosmology

If the universe is expanding, then it must have been much smaller, and therefore hotter and denser, in the past. Fred Hoyle (1915–2001) coined the term 'Big Bang' to disparage this theory. Fermi and others noted that the primordial synthesis of elements would have stopped after only the light elements were created, and thus did not account for the abundance of heavier elements. Gamow's prediction was a 5–10 kelvin black-body radiation temperature for the universe after it cooled during the expansion. This was corroborated by Penzias and Wilson in 1965. Subsequent experiments, including the WMAP measurements of the fluctuations of the cosmic microwave background radiation,[38] arrived at a 2.7 kelvin temperature, corresponding to an age of the universe of 13.8 billion years after the Big Bang.

This dramatic result has raised issues: what happened between the singularity of the Big Bang and the Planck time, which, after all, is the smallest observable time? When might time have separated out from the spacetime foam?[39] There are only hints based on broken symmetries (see spontaneous symmetry breaking, timeline of the Big Bang, and the articles in Category:Physical cosmology).

Ilya Prigogine's reprise is "Time precedes existence". He contrasts the views of Newton, Einstein and quantum physics, which offer a symmetric view of time (as discussed above), with his own views, which point out that statistical and thermodynamic physics can explain irreversible phenomena,[40] as well as the arrow of time and the Big Bang.

Notes

5. "Heliacal/Dawn Risings". Retrieved 2012-08-17.
7. Eratosthenes used this criterion in his measurement of the circumference of Earth.
11. Today, automated astronomical observations from satellites and spacecraft require relativistic corrections of the reported positions.
12. "Unit of time (second)". SI brochure.
22. Newton 1687, p. 738.
23. "Dynamics is a four-dimensional geometry." --Lagrange (1796), Théorie des fonctions analytiques, as quoted by Ilya Prigogine (1996), The End of Certainty, ISBN 0-684-83705-6, p. 58.
24. Stephen Hawking (1996), The Illustrated Brief History of Time: updated and expanded edition, ISBN 0-553-10374-1, pp. 182-195.
25. Erwin Schrödinger (1945), What is Life?
26. G. Nicolis and I. Prigogine (1989), Exploring Complexity.
28. Ilya Prigogine (1996), The End of Certainty, pp. 63-71.
29. Clemmow, P. C. (1973), An Introduction to Electromagnetic Theory, CUP Archive, pp. 56-57.
30. Henri Poincaré (1902), Science and Hypothesis.
33. E. Schrödinger, Phys. Rev. 28, 1049 (1926).
34. A Brief History of Atomic Clocks at NIST.
38. Cosmic microwave background radiation.
Kohn-Sham equations

The Kohn-Sham equations are a set of eigenvalue equations within density functional theory (DFT). DFT attempts to reduce a many-body problem for the N-particle wavefunction \Psi(\mathbf{r}_1,s_1;\ldots;\mathbf{r}_N,s_N) (which depends on 4N variables) to one in terms of the charge density \rho(\mathbf{r}) (which depends on 3 variables), using the Hohenberg-Kohn theorems. Thus, one writes the total energy E of the system as a functional of the charge density:

E[\rho] = T[\rho] + \int V_{ext}(\mathbf{r}) \rho(\mathbf{r}) d\mathbf{r} + V_{H}[\rho] + E_{xc}[\rho],

where T is the kinetic energy of the system, V_{ext} is an external potential acting on the system,

V_{H}={e^2\over2}\int{\rho(\mathbf{r})\rho(\mathbf{r}')\over|\mathbf{r}-\mathbf{r}'|}d\mathbf{r}\, d\mathbf{r}'

is the Hartree energy, and E_{xc} is the exchange-correlation energy. The straightforward application of this formula faces two obstacles: first, the exchange-correlation energy is not known exactly (see DFT approximations for the workaround), and second, the kinetic term must be formulated in terms of the charge density.

As was first proposed by Kohn and Sham, the charge density can be written as the sum of the squares of a set of orthonormal wave functions \psi_i(\mathbf{r},s):

\rho(\mathbf{r})=\sum_i^N\sum_s |\psi_i(\mathbf{r},s)|^2,

which are solutions to the Schrödinger equation for N noninteracting electrons moving in an effective potential v_{eff}(\mathbf{r}):

-{\hbar^2\over2m}\nabla^2\psi_i(\mathbf{r},s) + v_{eff}(\mathbf{r})\psi_i(\mathbf{r},s) = \varepsilon_i \psi_i(\mathbf{r},s),

where the effective potential is defined to be

v_{eff}(\mathbf{r}) = V_{ext}(\mathbf{r}) + e^2\int {\rho(\mathbf{r}')\over|\mathbf{r}-\mathbf{r}'|}d\mathbf{r}' + {\delta E_{xc}[\rho]\over\delta\rho}.

These three equations form the Kohn-Sham orbital equations in their canonical form. The system is solved iteratively, until self-consistency is reached. Note that the eigenvalues \varepsilon_i have no individual physical meaning; only their total sum is related to the energy of the entire system E, through the equation

E = \sum_{i}^N \varepsilon_i - V_{H}[\rho] + E_{xc}[\rho] - \int {\delta E_{xc}[\rho]\over\delta \rho(\mathbf{r})} \rho(\mathbf{r}) d\mathbf{r}.

1. W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
2. R. G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules (Oxford University Press, New York, 1989).

This article uses material from the Wikipedia article "Kohn-Sham equations", licensed under the GNU Free Documentation License.
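The self-consistency cycle is easy to caricature in code. The following is a toy one-dimensional sketch, not a production DFT code: the harmonic external potential, the soft-Coulomb kernel and all grid sizes are assumptions of mine, and the exchange-correlation term is simply dropped, so this is really a Hartree-level illustration of the Kohn-Sham loop (guess a density, build v_eff, solve the single-particle eigenproblem, rebuild the density, repeat):

import numpy as np

n_grid, L, n_elec = 200, 10.0, 2
x = np.linspace(-L/2, L/2, n_grid)
dx = x[1] - x[0]

# Kinetic operator -(1/2) d^2/dx^2 by finite differences (atomic units).
T = (-0.5 / dx**2) * (np.diag(np.full(n_grid - 1, 1.0), -1)
                      - 2 * np.eye(n_grid)
                      + np.diag(np.full(n_grid - 1, 1.0), 1))
v_ext = 0.5 * x**2                      # harmonic external potential
rho = np.full(n_grid, n_elec / L)       # initial guess for the density

for it in range(50):
    # Hartree potential with a soft-Coulomb kernel (avoids the 1D singularity).
    v_H = np.array([np.sum(rho * dx / np.sqrt((xi - x)**2 + 1.0)) for xi in x])
    v_eff = v_ext + v_H                 # exchange-correlation neglected here
    eps, psi = np.linalg.eigh(T + np.diag(v_eff))
    psi = psi / np.sqrt(dx)             # normalize orbitals on the grid
    new_rho = 2 * np.abs(psi[:, 0])**2  # two electrons in the lowest orbital
    if np.max(np.abs(new_rho - rho)) < 1e-8:
        break                           # self-consistency reached
    rho = 0.5 * rho + 0.5 * new_rho     # simple density mixing for stability

print("lowest eigenvalue after", it, "iterations:", eps[0])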
Materials in Electronics/Schrödinger's Equation

Schrödinger's Equation is a differential equation that describes the evolution of Ψ(x) over time. By solving the differential equation for a particular situation, the wave function can be found. It is a statement of the conservation of energy of the particle.

Schrödinger's Equation in 1-Dimension

In the simplest case, a particle in one dimension, it is derived as follows:

• T(x) is the kinetic energy of the particle
• V(x) is the potential energy of the particle
• E is the energy of the particle, which is constant

Conservation of energy gives

E = T(x) + V(x).

Substituting for the kinetic energy of a wave, T = \hbar^2 k^2 / 2m, gives

E = \frac{\hbar^2 k^2}{2m} + V(x).

Now we need to get this equation in terms of Ψ(x). Assume that Ψ(x) is given by

\Psi(x) = A e^{ikx}.

Double differentiating our trial solution gives

\frac{d^2 \Psi(x)}{dx^2} = -k^2 \Psi(x).

Rearranging for k²,

k^2 = -\frac{1}{\Psi(x)} \frac{d^2 \Psi(x)}{dx^2}.

Substituting this in the equation above gives

E = -\frac{\hbar^2}{2m} \frac{1}{\Psi(x)} \frac{d^2 \Psi(x)}{dx^2} + V(x).

Multiplying through by Ψ(x) gives us Schrödinger's Equation in 1D:

E \Psi(x) = -\frac{\hbar^2}{2m} \frac{d^2 \Psi(x)}{dx^2} + V(x) \Psi(x). [Schrödinger's Equation in 1D]

Solving the Schrödinger Equation gives us the wavefunction of the particle, which can be used to find the electron distribution in a system. This is a time-independent solution - it will not change as time goes on. It is straightforward to add time-dependence to this equation, but for the moment we will consider only time-independent wave functions, so it is not necessary. The time-dependent wavefunction is denoted by Ψ(x, t).

While this equation was derived for a specific function, a complex exponential, it is more general than it appears, as Fourier analysis can express any continuous function over a range L as a sum of functions of this kind:

f(x) = \sum_n A_n e^{i 2\pi n x / L}.

The Schrödinger Equation as an Eigenequation

The Schrödinger Equation can be expressed as an eigenequation of the form

H \psi(x) = E \psi(x), [Schrödinger Equation as an Eigenequation]

where
• ψ is the eigenfunction (or eigenstate, both mean the same thing),
• E is the eigenvalue corresponding to the energy,
• H is the Hamiltonian operator, given by

H = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x). [1D Hamiltonian Operator]

This means that by applying the operator, H, to the function ψ(x), we will obtain a solution that is simply a scalar multiple of ψ(x). This multiple is E - the energy of the particle. This also means that every wavefunction (i.e. every solution to the Schrödinger Equation) has a particular associated energy.

Higher Dimensions

The equation that we just derived is the Schrödinger equation for a particle in one dimension. Adding more dimensions is not difficult. The three dimensional equation is

-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}) + V(\mathbf{r}) \psi(\mathbf{r}) = E \psi(\mathbf{r}),

where \nabla^2 is the Laplace operator, which, in Cartesian coordinates, is given by

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}.

It is also possible to add more dimensions, but this does not generally yield useful results, given that we inhabit a 3D universe.

In order to integrate Schrödinger's equation with relativity, Paul Dirac showed that electrons have an additional property, called spin. This does not actually mean the electron is spinning on an axis, but in some ways it is a useful analogy. The spin on an electron can take two values. We can incorporate spin into the wavefunction, Ψ, by multiplying by an additional component - the spin wavefunction, σ(s), where s is ±1/2. These two values are often just called "spin-up" and "spin-down", respectively.
The full, time-dependent wavefunction is now given by

\Psi(x, s, t) = \psi(x) \, \sigma(s) \, e^{-iEt/\hbar}.

Conditions on the Wavefunction

In order to represent a particle's state, the wavefunction must satisfy several conditions:

• It must be square-integrable, and moreover, the integral of the wavefunction's probability density function over all of space must be equal to unity, as the electron must exist somewhere. For 1D systems this is:

\int_{-\infty}^{\infty} |\Psi(x)|^2 \, dx = 1.

• ψ must be continuous, because its derivative, which is proportional to momentum, must be finite.
• dψ/dx must be continuous, because its derivative, which is proportional to kinetic energy, must be finite.
• ψ must satisfy boundary conditions. In particular, as x tends to infinity, ψ(r) tends to zero. (This is required to satisfy the normalisation condition above.)

Examples of Use of Schrödinger's Equation

Schrödinger's Equation can be used to find wavefunctions for many physical systems. See Confined Particles for more information.

• Schrödinger's Equation (SE) is a statement of the Law of Conservation of Energy.
• It is given by

-\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} + V \psi = E \psi.

• By solving the equation, one can obtain the wavefunction, ψ.
• From the wavefunction we find the electron's probability distribution.
• The probability of the electron existing over all space must be 1.
• SE gives a set of discrete wavefunctions, each with an associated energy (see the numerical sketch below).
• An electron cannot exist at an energy other than these.
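The claim that the Schrödinger equation yields a set of discrete wavefunctions, each with an associated energy, is easy to verify numerically. The following sketch (natural units and the grid size are illustrative assumptions) discretizes the 1D Hamiltonian for a particle between hard walls and compares the lowest eigenvalues with the analytic result E_n = n^2 \pi^2 \hbar^2 / 2mL^2:

import numpy as np

# Discretize -(hbar^2/2m) psi'' = E psi on [0, L] with psi(0) = psi(L) = 0.
hbar, m, L, N = 1.0, 1.0, 1.0, 500   # natural units, N interior grid points
x = np.linspace(0, L, N + 2)[1:-1]
dx = x[1] - x[0]

H = (-hbar**2 / (2 * m * dx**2)) * (np.diag(np.ones(N - 1), -1)
                                    - 2 * np.eye(N)
                                    + np.diag(np.ones(N - 1), 1))
E, psi = np.linalg.eigh(H)           # discrete spectrum and eigenfunctions

n = np.arange(1, 4)
print(E[:3])                                     # numerical eigenvalues
print((n * np.pi * hbar)**2 / (2 * m * L**2))    # analytic levels, for comparison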
Electron Configuration

The electron configuration of an atomic species (neutral or ionic) allows us to understand the shape and energy of its electrons. Many general rules are taken into consideration when assigning the "location" of the electron to its prospective energy state, however these assignments are arbitrary and it is always uncertain as to which electron is being described. Knowing the electron configuration of a species gives us a better understanding of its bonding ability, magnetism and other chemical properties.

The electron configuration is the standard notation used to describe the electronic structure of an atom. Under the orbital approximation, we let each electron occupy an orbital, which is described by a single wavefunction. In doing so, we obtain three quantum numbers (n, l, ml), which are the same as the ones obtained from solving the Schrödinger equation for Bohr's hydrogen atom. Hence, many of the rules that we use to describe the electron's address in the hydrogen atom can also be used in systems involving multiple electrons. When assigning electrons to orbitals, we must follow a set of three rules: the Aufbau Principle, the Pauli-Exclusion Principle, and Hund's Rule.

The wavefunction is the solution to the Schrödinger equation. By solving the Schrödinger equation for the hydrogen atom, we obtain three quantum numbers, namely the principal quantum number (n), the orbital angular momentum quantum number (l), and the magnetic quantum number (ml). There is a fourth quantum number, called the spin magnetic quantum number (ms), which is not obtained from solving the Schrödinger equation. Together, these four quantum numbers can be used to describe the location of an electron in Bohr's hydrogen atom. These numbers can be thought of as an electron's "address" in the atom.

To help describe the appropriate notation for electron configuration, it is best to do so through example. For this example, we will use the iodine atom. There are two ways in which electron configuration can be written:

I: 1s22s22p63s23p64s23d104p65s24d105p5
I: [Kr]5s24d105p5

In both of these types of notations, the order of the energy levels must be written by increasing energy, showing the number of electrons in each subshell as an exponent. In the short notation, you place brackets around the preceding noble gas element followed by the valence shell electron configuration. The periodic table shows that krypton (Kr) is the previous noble gas listed before iodine. The noble gas configuration encompasses the energy states lower than the valence shell electrons. Therefore, in this case [Kr] = 1s22s22p63s23p64s23d104p6.

Quantum Numbers

Principal Quantum Number (n)

The principal quantum number n indicates the shell or energy level in which the electron is found. This quantum number can only take positive, non-zero, integer values, that is, n = 1, 2, 3, 4, ..., up to the value of the outermost shell containing an electron. For example, an iodine atom has its outermost electrons in the 5p orbital, so the principal quantum number for iodine is 5.

Orbital Angular Momentum Quantum Number (l)

The orbital angular momentum quantum number, l, indicates the subshell of the electron. You can also tell the shape of the atomic orbital with this quantum number. An s subshell corresponds to l = 0, a p subshell to l = 1, a d subshell to l = 2, an f subshell to l = 3, and so forth.
This quantum number can only take non-negative integer values. In general, for every value of n, there are n values of l; the value of l ranges from 0 to n-1. For example, if n = 3, then l = 0, 1, 2. So, in regards to the example used above, the l values of iodine for n = 5 are l = 0, 1, 2, 3, 4.

Magnetic Quantum Number (ml)

The magnetic quantum number, ml, represents the orbitals of a given subshell. For a given l, ml can range from -l to +l. A p subshell (l = 1), for instance, can have three orbitals corresponding to ml = -1, 0, +1. In other words, it defines the px, py and pz orbitals of the p subshell. (However, the individual ml numbers don't necessarily correspond to a particular one of these orbitals; the fact that there are three ml values simply reflects the three orbitals of a p subshell.)

In general, for a given l, there are 2l+1 possible values for ml, and in a principal shell n there are n2 orbitals in that energy level. Continuing with our example, the occupied valence orbitals of iodine are the 5s orbital (ml = 0 for l = 0), the 5px, 5py and 5pz orbitals (ml = -1, 0, +1 for l = 1), and the 4dx2-y2, 4dz2, 4dxy, 4dxz and 4dyz orbitals (ml = -2, -1, 0, +1, +2 for l = 2).

Spin Magnetic Quantum Number (ms)

The spin magnetic quantum number can only have a value of either +1/2 or -1/2. The value of 1/2 is the spin quantum number, s, which describes the electron's spin. Due to the spinning of the electron, it generates a magnetic field. In general, an electron with ms = +1/2 is called an alpha electron, and one with ms = -1/2 is called a beta electron. No two paired electrons can have the same spin value.

Out of these four quantum numbers, however, Bohr postulated that only the principal quantum number, n, determines the energy of the electron. Therefore, the 3s orbital (l=0) has the same energy as the 3p (l=1) and 3d (l=2) orbitals, regardless of a difference in l values. This postulate, however, holds true only for Bohr's hydrogen atom or other hydrogen-like atoms. When dealing with multi-electron systems, we must consider the electron-electron interactions. Hence, the previously described postulate breaks down in that the energy of the electron is now determined by both the principal quantum number, n, and the orbital angular momentum quantum number, l. Although the Schrödinger equation for many-electron atoms is extremely difficult to solve mathematically, we can still describe their electronic structures via electron configurations.

General Rules of Electron Configuration

There are a set of general rules that are used to figure out the electron configuration of an atomic species: the Aufbau Principle, Hund's Rule and the Pauli-Exclusion Principle. Before continuing, it's important to understand that each orbital can be occupied by two electrons of opposite spin (which will be further discussed later). The following table shows the possible number of electrons that can occupy each subshell.

subshell | number of orbitals | total number of possible electrons
s | 1 | 2
p | 3 (px, py, pz) | 6
d | 5 (dx2-y2, dz2, dxy, dxz, dyz) | 10
f | 7 (fz3, fxz2, fxyz, fx(x2-3y2), fyz2, fz(x2-y2), fy(3x2-y2)) | 14

Using our example, iodine, again, we see on the periodic table that its atomic number is 53 (meaning it contains 53 electrons in its neutral state). Its complete electron configuration is 1s22s22p63s23p64s23d104p65s24d105p5. If you count up all of these electrons, you will see that it adds up to 53 electrons.
Notice that each subshell can only contain the maximum number of electrons indicated in the table above.

Aufbau Principle

The word 'Aufbau' is German for 'building up'. The Aufbau principle, also called the building-up principle, states that electrons occupy orbitals in order of increasing energy. The order of occupation is as follows:

1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p < 6s < 4f < 5d < 6p < 7s < 5f < 6d < 7p

Another way to view this order of increasing energy is by using Madelung's Rule (a short program implementing this filling order is sketched after the exercises below):

Figure 1. Madelung's Rule is a simple generalization which dictates the order in which electrons fill the subshells; there are, however, exceptions such as copper and chromium.

This order of occupation roughly represents the increasing energy level of the orbitals. Hence, electrons occupy the orbitals in such a way that the energy is kept at a minimum. That is, the 7s, 5f, 6d, 7p subshells will not be filled with electrons unless the lower energy orbitals, 1s to 6p, are already fully occupied. Also, it is important to note that although the energy of the 3d orbital has been mathematically shown to be lower than that of the 4s orbital, electrons occupy the 4s orbital first before the 3d orbital. This observation can be ascribed to the fact that 3d electrons are more likely to be found closer to the nucleus; hence, they repel each other more strongly. Nonetheless, remembering the order of orbital energies, and hence assigning electrons to orbitals, can become rather easy when related to the periodic table.

To understand this principle, let's consider the bromine atom. Bromine (Z=35), which has 35 electrons, can be found in Period 4, Group VII of the periodic table. Since bromine has 7 valence electrons, the 4s orbital will be completely filled with 2 electrons, and the remaining five electrons will occupy the 4p orbital. Hence the full or expanded electronic configuration for bromine, in accord with the Aufbau principle, is 1s22s22p63s23p64s23d104p5. If we add the exponents, we get a total of 35 electrons, confirming that our notation is correct.

Hund's Rule

Hund's Rule states that when electrons occupy degenerate orbitals (i.e. same n and l quantum numbers), they must first occupy the empty orbitals before double occupying them. Furthermore, the most stable configuration results when the spins are parallel (i.e. all alpha electrons or all beta electrons). Nitrogen, for example, has 3 electrons occupying the 2p orbital. According to Hund's Rule, they must first occupy each of the three degenerate p orbitals, namely the 2px orbital, 2py orbital, and the 2pz orbital, and with parallel spins (Figure 2). A configuration in which the third electron occupies the half-filled 2px orbital instead of the empty 2pz orbital is a violation of Hund's Rule (Figure 2).

Figure 2. A visual representation of the Aufbau Principle and Hund's Rule. Note that the filling of electrons in each orbital (px, py and pz) is arbitrary as long as the electrons are singly filled before having two electrons occupy the same orbital. (a) This diagram represents the correct filling of electrons for the nitrogen atom. (b) This diagram represents the incorrect filling of the electrons for the nitrogen atom.

Pauli-Exclusion Principle

Wolfgang Pauli postulated that each electron can be described with a unique set of four quantum numbers. Therefore, if two electrons occupy the same orbital, such as the 3s orbital, their spins must be paired.
Although they have the same principal quantum number (n=3), the same orbital angular momentum quantum number (l=0), and the same magnetic quantum number (ml=0), they have different spin magnetic quantum numbers (ms = +1/2 and ms = -1/2).

Electronic Configurations of Cations and Anions

The way we designate electronic configurations for cations and anions is essentially similar to that for neutral atoms in their ground state. That is, we follow the three important rules: the Aufbau Principle, the Pauli-Exclusion Principle, and Hund's Rule. The electronic configuration of cations is assigned by removing electrons first from the outermost p orbital, followed by the s orbital and finally the d orbitals (if any more electrons need to be removed). For instance, the ground state electronic configuration of calcium (Z=20) is 1s22s22p63s23p64s2. The calcium ion (Ca2+), however, has two electrons less. Hence, the electron configuration for Ca2+ is 1s22s22p63s23p6. Since we need to take away two electrons, we first remove electrons from the outermost shell (n=4). In this case, all the 4p subshells are empty; hence, we start by removing from the s orbital, which is the 4s orbital. The electron configuration for Ca2+ is the same as that for argon, which has 18 electrons. Hence, we can say that both are isoelectronic.

The electronic configuration of anions is assigned by adding electrons according to Aufbau's building-up principle. We add electrons to fill the outermost orbital that is occupied, and then add more electrons to the next higher orbital. The neutral atom chlorine (Z=17), for instance, has 17 electrons. Therefore, its ground state electronic configuration can be written as 1s22s22p63s23p5. The chloride ion (Cl-), on the other hand, has an additional electron for a total of 18 electrons. Following Aufbau's principle, the electron occupies the partially filled 3p subshell first, making the 3p orbital completely filled. The electronic configuration for Cl- can, therefore, be designated as 1s22s22p63s23p6. Again, the electron configuration for the chloride ion is the same as that for Ca2+ and argon. Hence, they are all isoelectronic to each other.

Problems

1. Which of the principles explained above tells us that electrons that are paired cannot have the same spin value?
2. Find the values of n, l, ml, and ms for the following: a. Mg b. Ga c. Co
3. What is a possible combination for the quantum numbers of the 5d orbital? Give an example of an element which has the 5d orbital as its outermost orbital.
4. Which of the following cannot exist (there may be more than one answer):
a. n = 4; l = 4; ml = -2; ms = +1/2
b. n = 3; l = 2; ml = 1; ms = 1
c. n = 4; l = 3; ml = 0; ms = +1/2
d. n = 1; l = 0; ml = 0; ms = +1/2
e. n = 0; l = 0; ml = 0; ms = +1/2
5. Write electron configurations for the following: a. P b. S2- c. Zn3+

Answers

1. The Pauli-exclusion principle.
2. a. n = 3; l = 0, 1, 2; ml = -2, -1, 0, 1, 2; ms can be either +1/2 or -1/2
   b. n = 4; l = 0, 1, 2, 3; ml = -3, -2, -1, 0, 1, 2, 3; ms can be either +1/2 or -1/2
3. n = 5; l = 2; ml can be any of -2, -1, 0, +1, +2; ms = +1/2 or -1/2. Osmium (Os) is an example.
4. a. The value of l cannot be 4, because l ranges from 0 to n-1.
   b. ms can only be +1/2 or -1/2.
   c. Okay
   d. Okay
   e. The value of n cannot be zero.
5. a. 1s22s22p63s23p3
   b. 1s22s22p63s23p6
   c. 1s22s22p63s23p63d9 (the two 4s electrons and one 3d electron are removed first)
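The Aufbau/Madelung filling order described above is mechanical enough to express as a short program. The sketch below (the function name is mine) fills subshells in order of increasing n + l, breaking ties by smaller n; note that it deliberately ignores the known exceptions such as chromium and copper:

# Sketch of the Aufbau / Madelung filling order: subshells fill in order of
# increasing n + l, and for equal n + l in order of increasing n.
def electron_configuration(z):
    letters = "spdfghik"                   # subshell letters by l (j is skipped)
    subshells = [(n, l) for n in range(1, 9) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # Madelung order
    config, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        cap = 2 * (2 * l + 1)              # two electrons per orbital
        filled = min(cap, remaining)
        config.append(f"{n}{letters[l]}{filled}")
        remaining -= filled
    return " ".join(config)

print(electron_configuration(53))  # iodine: ... 4p6 5s2 4d10 5p5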
Rakesh P. Tiwari
Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland

Short CV
2012 - present: Postdoc in the group of Prof. C. Bruder at the University of Basel, Switzerland
2010 - 2011: Postdoc in the group of Prof. M. Blaauboer, Delft University of Technology, The Netherlands
2004 - 2010: PhD in Physics under the supervision of Prof. David G. Stroud, The Ohio State University, United States
2000 - 2004: Bachelor of Technology in Engineering Physics, Indian Institute of Technology, Mumbai, India. Bachelor Thesis Advisor: Prof. Alok Shukla

Fall Semester 2016: Nanophysics (VV 11016)
Lecture 1 (30/11/2016, 8:15-10:00, Auditorium 2, Pharmazentrum)
Lecture 2 (7/12/2016, 8:15-10:00, Auditorium 2, Pharmazentrum)
Exercise 1 (9/12/2015, 12:00-13:00, 3.12 Physik)
Lecture 3 (14/12/2016, 8:15-10:00, Auditorium 2, Pharmazentrum)
Exercise 2 (16/12/2015, 12:00-13:00, 3.12 Physik)

1. Robust quantum optimizer with full connectivity
Simon E. Nigg, Niels Loerch, and Rakesh P. Tiwari.

Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size as compared with classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. Remarkably, by leveraging the phenomenon of flux quantization, all-to-all connectivity is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the NP-hard and fully connected number partitioning problem in the presence of dissipation.

2. Dynamic response functions and helical gaps in interacting Rashba nanowires with and without magnetic fields
Christopher Pedder, Tobias Meng, Rakesh P. Tiwari, and Thomas L. Schmidt.

A partially gapped spectrum due to the application of a magnetic field is one of the main probes of Rashba spin-orbit coupling in nanowires. Such a "helical gap" manifests itself in the linear conductance, as well as in dynamic response functions such as the spectral function, the structure factor, or the tunnelling density of states. In this paper, we investigate theoretically the signature of the helical gap in these observables with a particular focus on the interplay between Rashba spin-orbit coupling and electron-electron interactions. We show that in a quasi-one-dimensional wire, interactions can open a helical gap even without magnetic field. We calculate the dynamic response functions using bosonization, a renormalization group analysis, and the exact form factors of the emerging sine-Gordon model. For special interaction strengths, we verify our results by refermionization. We show how the two types of helical gaps, caused by magnetic fields or interactions, can be distinguished in experiments.

3. Intrinsic Anomalous Hall Effect in Type-II Weyl Semimetals
A. A. Zyuzin and Rakesh P. Tiwari.
JETP Lett.
103, 717 (2016)

Recently, a new type of Weyl semimetal called type-II Weyl semimetal has been proposed. Unlike the usual (type-I) Weyl semimetal, which has a point-like Fermi surface, this new type of Weyl semimetal has a tilted conical spectrum around the Weyl point. Here we calculate the anomalous Hall conductivity of a Weyl semimetal with a tilted conical spectrum for a pair of Weyl points, using the Kubo formula. We find that the Hall conductivity is not universal and can change sign as a function of the parameters quantifying the tilts. Our results suggest that even for the case where the separation between the Weyl points vanishes, tilting of the conical spectrum could give rise to a finite anomalous Hall effect, if the tilts of the two cones are not identical.

4. Non local quantum state engineering with the Cooper pair splitter beyond the Coulomb blockade regime
Phys. Rev. B 93, 075421 (2016)

5. Snake states and their symmetries in graphene
Yang Liu, Rakesh P. Tiwari, Matej Brada, C. Bruder, F.V. Kusmartsev, and E.J. Mele.
Phys. Rev. B 92, 235438 (2015)

Snake states are open trajectories for charged particles propagating in two dimensions under the influence of a spatially varying perpendicular magnetic field. In the quantum limit they are protected edge modes that separate topologically inequivalent ground states and can also occur when the particle density rather than the field is made nonuniform. We examine the correspondence of snake trajectories in single-layer graphene in the quantum limit for two families of domain walls: (a) a uniform doped carrier density in an antisymmetric field profile and (b) antisymmetric carrier distribution in a uniform field. These families support different internal symmetries but the same pattern of boundary and interface currents. We demonstrate that these physically different situations are gauge equivalent when rewritten in a Nambu doubled formulation of the two limiting problems. Using gauge transformations in particle-hole space to connect these problems, we map the protected interfacial modes to the Bogoliubov quasiparticles of an interfacial one-dimensional p-wave paired state. A variational model is introduced to interpret the interfacial solutions of both domain wall problems.

6. 8$\pi$-periodic Josephson effect in time-reversal invariant interacting Rashba nanowires
Chris J. Pedder, Tobias Meng, Rakesh P. Tiwari, and Thomas L. Schmidt.

7. Josephson response of a conventional and a noncentrosymmetric superconductor coupled via a double quantum dot
Bjorn Sothmann and Rakesh P Tiwari.
Phys. Rev. B 92, 014504 (2015)

We consider transport through a Josephson junction consisting of a conventional s-wave superconductor coupled via a double quantum dot to a noncentrosymmetric superconductor with both singlet and triplet pairing. We calculate the Andreev bound state energies and the associated Josephson current. We demonstrate that the current-phase relation is a sensitive probe of the singlet-triplet ratio in the noncentrosymmetric superconductor. In particular, in the presence of an inhomogeneous magnetic field the system exhibits a $\phi$-junction behavior.

8. Detecting nonlocal Cooper pair entanglement by optical Bell inequality violation
Simon E. Nigg, Rakesh P. Tiwari, Stefan Walter, and Thomas L. Schmidt.
Phys. Rev.
B 91, 094516 (2015)

Based on the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity, the coherent splitting of Cooper pairs from a superconductor to two spatially separated quantum dots has been predicted to generate nonlocal pairs of entangled electrons. In order to test this hypothesis, we propose a scheme to transfer the spin state of a split Cooper pair onto the polarization state of a pair of optical photons. We show that the produced photon pairs can be used to violate a Bell inequality, unambiguously demonstrating the entanglement of the split Cooper pairs.

9. Non-Abelian parafermions in time-reversal invariant interacting helical systems
Christoph P. Orth, Rakesh P Tiwari, Tobias Meng, and Thomas L. Schmidt.
Phys. Rev. B 91, 081406 (2015)

The interplay between bulk spin-orbit coupling and electron-electron interactions produces umklapp scattering in the helical edge states of a two-dimensional topological insulator. If the chemical potential is at the Dirac point, umklapp scattering can open a gap in the edge state spectrum even if the system is time-reversal invariant. We determine the zero-energy bound states at the interfaces between a section of a helical liquid which is gapped out by the superconducting proximity effect and a section gapped out by umklapp scattering. We show that these interfaces pin charges which are multiples of e/2, giving rise to a Josephson current with 8\pi periodicity. Moreover, the bound states, which are protected by time-reversal symmetry, are fourfold degenerate and can be described as Z4 parafermions. We determine their braiding statistics and show how braiding can be implemented in topological insulator systems.

10. Josephson effect in normal and ferromagnetic topological insulator planar, step and edge junctions
Jennifer Nussbaum, Thomas L. Schmidt, Christoph Bruder, and Rakesh P. Tiwari.
Phys. Rev. B 90, 045413 (2014)

11. Quantum charge pumping through fractional Fermions in charge density modulated quantum wires and Rashba nanowires
Arijit Saha, Diego Rainis, Rakesh P Tiwari, and Daniel Loss.
Phys. Rev. B 90, 035422 (2014)

We study the phenomenon of adiabatic quantum charge pumping in systems supporting fractionally charged fermionic bound states, in two different setups. The first quantum pump setup consists of a charge-density-modulated quantum wire, and the second one is based on a semiconducting nanowire with Rashba spin-orbit interaction, in the presence of a spatially oscillating magnetic field. In both these quantum pumps transport is investigated in a N-X-N geometry, with the system of interest (X) connected to two normal-metal leads (N), and the two pumping parameters are the strengths of the effective wire-lead barriers. Pumped charge is calculated within the scattering matrix formalism. We show that quantum pumping in both setups provides a unique signature of the presence of the fractional-fermion bound states, in terms of asymptotically quantized pumped charge. Furthermore, we investigate shot noise arising due to quantum pumping, verifying that quantized pumped charge corresponds to minimal shot noise.

12. Neutral edge modes in a superconductor -- topological-insulator hybrid structure in a perpendicular magnetic field
Rakesh P Tiwari, U Zulicke, C Bruder, and Vladimir M. Stojanovic.
EPL 108, 17009 (2014)

We study the low-energy edge states of a superconductor -- 3D topological-insulator hybrid structure (NS junction) in the presence of a perpendicular magnetic field.
The hybridization of electron-like and hole-like Landau levels due to Andreev reflection gives rise to chiral edge states within each Landau level. We show that by changing the chemical potential of the superconductor, this junction can be placed in a regime where the sign of the effective charge of the edge state within the zeroth Landau level changes more than once, resulting in neutral edge modes with a finite value of the guiding-center coordinate. We find that the appearance of these neutral edge modes is related to the level repulsion between the zeroth and the first Landau levels in the spectra. We also find that these neutral edge modes come in pairs, one in the zeroth Landau level and its corresponding pair in the first.

13. Signatures of tunable Majorana-fermion edge states
Rakesh P Tiwari, U. Zulicke, and C. Bruder.
New J. Phys. 16, 025004 (2014)

Chiral Majorana-fermion modes are shown to emerge as edge excitations in a superconductor–topological-insulator hybrid structure that is subject to a magnetic field. The velocity of this mode is tunable by changing the magnetic-field magnitude and/or the superconductor's chemical potential. We discuss how quantum-transport measurements can yield experimental signatures of these modes. A normal lead coupled to the Majorana-fermion edge state through electron tunneling induces resonant Andreev reflections from the lead to the grounded superconductor, resulting in a distinctive pattern of differential-conductance peaks.

14. Quantum transport signatures of chiral edge states in Sr2RuO4
Rakesh P Tiwari, W. Belzig, Manfred Sigrist, and C. Bruder.
Phys. Rev. B 89, 184512 (2014)

We investigate transport properties of a double quantum dot based Cooper pair splitter, where the superconducting lead consists of Sr$_2$RuO$_4$. The proposed device can be used to explore the symmetry of the superconducting order parameter in Sr$_2$RuO$_4$ by testing the presence of gapless chiral edge states, which are predicted to exist if the bulk superconductor is described by a chiral $p$--wave state. The odd orbital symmetry of the bulk order parameter ensures that we can realize a regime where the electrons tunneling into the double dot system come from the chiral edge states and thereby leave their signature in the conductance. The proposed Cooper pair splitter has the potential to probe order parameters in unconventional superconductors.

15. Adiabatic quantum pumping of chiral Majorana fermions
M. Alos-Palop, Rakesh P. Tiwari, and M. Blaauboer.
Phys. Rev. B 89, 045307 (2014)

We investigate adiabatic quantum pumping of chiral Majorana states in a system composed of two Mach--Zehnder type interferometers coupled via a quantum point contact. The pumped current is generated by periodic modulation of the phases accumulated by traveling around each interferometer. Using scattering matrix formalism we show that the pumped current reveals a definite signature of the chiral nature of the Majorana states involved in transport in this geometry. Furthermore, by tuning the coupling between the two interferometers the pump can operate in a regime where finite pumped current and zero two-terminal conductance is expected.

16. Majorana fermions from Landau quantization in a superconductor--topological-insulator hybrid structure
Rakesh P. Tiwari, U. Zulicke, and C. Bruder.
Phys. Rev. Lett.
110, 186805 (2013)

We show that the interplay of cyclotron motion and Andreev reflection experienced by massless-Dirac-like charge carriers in topological-insulator surface states generates a Majorana-particle excitation. Based on an envelope-function description of the Dirac-Andreev edge states, we discuss the kinematic properties of the Majorana mode and find that they can be tuned by changing the superconductor's chemical potential and/or the magnitude of the perpendicular magnetic field. Our proposal opens up new possibilities for studying Majorana fermions in a controllable setup.

17. Suppression of Conductance in a Topological Insulator Nanostep Junction
M. Alos-Palop, Rakesh P Tiwari, and M. Blaauboer.
Phys. Rev. B 87, 035432 (2013)

We investigate quantum transport via surface states in a nanostep junction on the surface of a 3D topological insulator that involves two different side surfaces. We calculate the conductance across the junction within the scattering matrix formalism and find that as the bias voltage is increased, the conductance of the nanostep junction is suppressed by a universal factor of 1/3 compared to the conductance of a similar planar junction based on a single surface of a topological insulator. We also calculate and analyze the Fano factor of the nanostep junction and predict that the Fano factor saturates at 1/5, five times smaller than for a Poisson process.

18. Adiabatic quantum pumping through surface states in 3D topological insulators
New J. Phys. 14, 113003 (2012)

We investigate adiabatic quantum pumping of Dirac fermions on the surface of a strong 3D topological insulator. Two different geometries are studied in detail, a normal metal -- ferromagnetic -- normal metal (NFN) junction and a ferromagnetic -- normal metal -- ferromagnetic (FNF) junction. Using a scattering matrix approach, we first calculate the tunneling conductance and then the adiabatically pumped current using different pumping mechanisms for both types of junctions. We explain the oscillatory behavior of the conductance by studying the condition for resonant transmission in the junctions and find that each time a new resonant mode appears in the transport window, the pumped current diverges. We also predict an experimentally distinguishable difference between the pumped current and the rectified current.

19. Localization and circulating currents in curved graphene devices
G. M. M. Wakker, Rakesh P Tiwari, and M. Blaauboer.
Phys. Rev. B 84, 195427 (2011)

We calculate the energy spectrum and eigenstates of a graphene sheet that contains a circular deformation. Using time-independent perturbation theory with the ratio of the height and width of the deformation as the small parameter, we find that due to the curvature the wave functions for the various states acquire unique angular asymmetry. We demonstrate that the pseudomagnetic fields induced by the curvature result in circulating probability currents.

20. Quantum pumping in graphene with a perpendicular magnetic field
Rakesh P Tiwari and M. Blaauboer.
Appl. Phys. Lett. 97, 243112 (2010)

We consider quantum pumping of Dirac fermions in a monolayer of graphene in the presence of a perpendicular magnetic field in the central pumping region. The two external pump parameters are electrical voltages applied to the graphene sheet on either side of the pumping region. We analyze this pump within scattering matrix formalism and calculate both pumped charge and spin currents.
The predicted charge currents are of the order of 1000 nA, which is readily observable using current technology.

21. Magnetic superlattice with two-dimensional periodicity as a waveguide for spin waves
Rakesh P Tiwari and D. Stroud.
Phys. Rev. B 81, 220403(R) (2010)

We describe a simple method of including dissipation in the spin-wave band structure of a periodic ferromagnetic composite, by solving the Landau-Lifshitz equation for the magnetization with the Gilbert damping term. We use this approach to calculate the band structure of square and triangular arrays of Ni nanocylinders embedded in an Fe host. The results show that there are certain bands and special directions in the Brillouin zone where the spin-wave lifetime is increased by more than an order of magnitude above its average value. Thus, it may be possible to generate spin waves in such composites which decay especially slowly, and propagate especially large distances, for certain frequencies and directions in k space.

22. Tunable band gap in graphene with a noncentrosymmetric superlattice potential
Rakesh P. Tiwari and D. Stroud.
Phys. Rev. B 79, 205435 (2009)

We show that, if graphene is subjected to the potential from an external superlattice, a band gap develops at the Dirac point provided the superlattice potential has broken inversion symmetry. As a numerical example, we calculate the band structure of graphene in the presence of an external potential due to periodically patterned gates arranged in a triangular graphene superlattice (TGS) or a square graphene superlattice with broken inversion symmetry, and find that a band gap is created at the original and, in the case of a TGS, the "second generation" Dirac point. This gap, which extends throughout the superlattice Brillouin zone, can be controlled, in principle, by changing the external potential and the lattice constant of the superlattice. For a square superlattice with a lattice constant of 10 nm, we have obtained a gap as large as 65 meV, for gate voltages no larger than 1.5 V.

23. Model for the magnetoresistance and Hall coefficient of inhomogeneous graphene
Rakesh P Tiwari and D. Stroud.
Phys. Rev. B 79, 165408 (2009)

We show that when bulk graphene breaks into n-type and p-type puddles, the in-plane resistivity becomes strongly field dependent in the presence of a perpendicular magnetic field even if homogeneous graphene has a field-independent resistivity. We calculate the longitudinal resistivity ρxx and Hall resistivity ρxy as a function of field for this system using the effective-medium approximation. The conductivity tensors of the individual puddles are calculated using a Boltzmann approach suitable for the band structure of graphene near the Dirac points. The resulting resistivity agrees well with experiment provided that the relaxation time is weakly field dependent. The calculated Hall resistivity has the sign of the carriers in the puddles occupying the greater area of the composite and vanishes when there are equal areas of n- and p-type puddles.

24. Sound propagation in light-modulated carbon nanosponge suspensions
W. Zhou, Rakesh P Tiwari, R. Annamalai, R. Sooryakumar, V. Subramaniam, and D. Stroud.
Phys. Rev. B 79, 104204 (2009)

Single-walled carbon nanotube bundles dispersed in a highly polar fluid are found to agglomerate into a porous structure when exposed to low levels of laser radiation. The phototunable nanoscale porous structures provide an unusual way to control the acoustic properties of the suspension.
Despite the high sound speed of the nanotubes, the measured speed of longitudinal-acoustic waves in the suspension decreases sharply with increasing bundle concentration. Two possible explanations for this reduction in sound speed are considered. One is simply that the sound speed decreases because of fluid heating induced by laser-light absorption by the carbon nanotubes. The second is that this decrease results from the smaller sound velocity of fluid confined in a porous medium. Using a simplified description of convective heat transport, we estimate that the increase in temperature is too small to account for the observed decrease in sound velocity. To test the second possible explanation, we calculate the sound velocity in a porous medium, using a self-consistent effective-medium approximation. The results of this calculation agree qualitatively with experiment. In this case, the observed sound wave would be the analog of the slow compressional mode of porous solids at a structural length scale of order 100 nm.

25. Numerical study of energy loss by a nanomechanical oscillator coupled to a Cooper-pair box
Rakesh P Tiwari and D. Stroud.
Phys. Rev. B 77, 214520 (2008)

We calculate the dynamics of a nanomechanical oscillator (NMO) coupled capacitively to a Cooper-pair box (CPB) by solving a stochastic Schrödinger equation with two Lindblad operators [Commun. Math. Phys. 48, 119 (1976)]. Both the NMO and the CPB are assumed dissipative, and the coupling is treated within the rotating wave approximation. We show numerically that, if the CPB decay time is smaller than the NMO decay time, the coupled NMO will lose energy faster and the coupled CPB more slowly than the uncoupled NMO and CPB do. The results show that the efficiency of energy loss by an NMO can be substantially increased if the NMO is coupled to a CPB.

26. Suppression of tunneling in a superconducting persistent-current qubit
Rakesh P Tiwari and D. Stroud.
Phys. Rev. B 76, 220505(R) (2007)

We consider a superconducting persistent-current qubit consisting of a three-junction superconducting loop in an applied magnetic field. We show that by choosing the field, Josephson couplings, and offset charges suitably, we can perfectly suppress the tunneling between the two oppositely directed states of circulating current, leading to a vanishing of the splitting between the two qubit states. This suppression arises from interference between tunneling along different paths and is analogous to that predicted previously for magnetic particles with half-integer spin.

27. A basis-set based Fortran program to solve the Gross-Pitaevskii Equation for dilute Bose gases in harmonic and anharmonic traps
Rakesh P Tiwari and Alok Shukla.
Comput. Phys. Commun. 174, 966 (2006)

Inhomogeneous boson systems, such as the dilute gases of integral spin atoms in low-temperature magnetic traps, are believed to be well described by the Gross-Pitaevskii equation (GPE). GPE is a nonlinear Schrödinger equation which describes the order parameter of such systems at the mean field level. In the present work, we describe a Fortran 90 computer program developed by us, which solves the GPE using a basis set expansion technique. In this technique, the condensate wave function (order parameter) is expanded in terms of the solutions of the simple-harmonic oscillator (SHO) characterizing the atomic trap.
Additionally, the same approach is also used to solve the problems in which the trap is weakly anharmonic, and the anharmonic potential can be expressed as a polynomial in the position operators x, y, and z. The resulting eigenvalue problem is solved iteratively using either the self-consistent-field (SCF) approach, or the imaginary time steepest-descent (SD) approach. Iterations can be initiated using either the simple-harmonic-oscillator ground state solution, or the Thomas–Fermi (TF) solution. It is found that for condensates containing up to a few hundred atoms, both approaches lead to rapid convergence. However, in the strong interaction limit of condensates containing thousands of atoms, it is the SD approach coupled with the TF starting orbitals, which leads to quick convergence. Our results for harmonic traps are also compared with those published by other authors using different numerical approaches, and excellent agreement is obtained. GPE is also solved for a few anharmonic potentials, and the influence of anharmonicity on the condensate is discussed. Additionally, the notion of Shannon entropy for the condensate wave function is defined and studied as a function of the number of particles in the trap. It is demonstrated numerically that the entropy increases with the particle number in a monotonic way.
Electron in a Box
Michael Fowler, University of Virginia, 9/1/08

Plane Wave Solutions

The best way to gain understanding of Schrödinger's equation is to solve it for various potentials. The simplest is a one-dimensional "particle in a box" problem. The appropriate potential is V(x) = 0 for x between 0 and L, and V(x) = infinity otherwise—that is to say, there are infinitely high walls at x = 0 and x = L, and the particle is trapped between them. This turns out to be quite a good approximation for electrons in a long molecule, and the three-dimensional version is a reasonable picture for electrons in metals.

Between x = 0 and x = L we have V = 0, so the wave equation is just

i\hbar \frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2}.

A possible plane wave solution is

\psi(x,t) = A e^{i(px - Et)/\hbar}.

On inserting this into the zero-potential Schrödinger equation above we find E = p²/2m, as we expect. It is very important to notice that the complex conjugate, proportional to e^{-i(px - Et)/\hbar}, is not a solution to the Schrödinger equation! If we blindly put it into the equation we get E = -p²/2m, an unphysical result. However, a wave function proportional to e^{i(-px - Et)/\hbar} gives E = p²/2m, so this plane wave is a solution to the equation.

Therefore, the two allowed plane-wave solutions to the zero-potential Schrödinger equation are proportional to e^{i(px - Et)/\hbar} and e^{i(-px - Et)/\hbar} respectively. Note that these two solutions have the same time dependence e^{-iEt/\hbar}.

To decide on the appropriate solution for our problem of an electron in a box, of course we have to bring in the walls—what they mean is that ψ = 0 for x < 0 and for x > L, because remember |ψ|² tells us the probability of finding the particle anywhere, and, since it's in the box, it's trapped between the walls, so there's zero probability of finding it outside.

The condition ψ = 0 at x = 0 and x = L reminds us of the vibrating string with two fixed ends—the solution of the string wave equation is standing waves of sine form. In fact, taking the difference of the two permitted plane-wave forms above gives a solution of this type:

\psi(x,t) = A \sin(px/\hbar) \, e^{-iEt/\hbar}.

This wave function satisfies the Schrödinger equation between the walls, and it vanishes at the x = 0 wall; it will also vanish at x = L provided that the momentum variable satisfies

pL/\hbar = n\pi, \quad n = 1, 2, 3, \ldots

Thus the allowed values of p are hn/2L, where n = 1, 2, 3…, and from E = p²/2m the allowed energy levels of the particle are

E_n = \frac{n^2 h^2}{8mL^2}.

Note that these energy levels become more and more widely spaced out at high energies, in contrast to the hydrogen atom potential. (As we shall see, the harmonic oscillator potential gives equally spaced energy levels, so by studying how the spacing of energy levels varies with energy, we can learn something about the shape of the potential.)

What about the overall multiplicative constant A in the wave function? This can be real or complex. To find its value, note that at a fixed time, say t = 0, the probability of the electron being between x and x + dx is |ψ|² dx, or

|A|^2 \sin^2(n\pi x/L) \, dx.

The total probability of the particle being somewhere between 0 and L must be unity:

\int_0^L |A|^2 \sin^2(n\pi x/L) \, dx = |A|^2 \frac{L}{2} = 1, \quad \text{so} \quad A = \sqrt{2/L}.

When A is fixed in this way, by demanding that the total probability of finding the particle somewhere be unity, it is called the normalization constant.

Stationary States

Notice that at a later time the probability distribution for the wave function is the same, because time only appears in the phase factor e^{-iEt/\hbar} in this time-dependent function, and so does not affect |ψ|². A state with a time-independent probability distribution is called a stationary state.
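To get a feel for the scale of the levels E_n = n²h²/8mL², the short sketch below evaluates the first few levels for an electron in a box whose length of 1 nm is an assumed, illustrative value; the widening spacing with n is visible directly:

# Energy levels E_n = n^2 h^2 / (8 m L^2) for an electron in a box.
h = 6.62607015e-34    # Planck constant, J*s
m = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19  # J per electron-volt
L = 1e-9              # assumed box length: 1 nm

for n in (1, 2, 3, 4):
    E = n**2 * h**2 / (8 * m * L**2)
    print(n, E / eV, "eV")   # ~0.38, 1.5, 3.4, 6.0 eV: spacing grows with n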
States with Moving Probability Distributions

Recall that the Schrödinger equation is a linear equation, and the sum of any two solutions is also a solution to the equation. That means that we can add two solutions having different energies, and still have a legal wave function. We shall establish that in this case, the probability distribution varies in time.

The simplest way to see how this must be is to look at an example. Let's add the ground state to the first excited state, and normalize the sum:

\psi(x,t) = \frac{1}{\sqrt{L}} \left( \sin(\pi x/L) \, e^{-iE_1 t/\hbar} + \sin(2\pi x/L) \, e^{-iE_2 t/\hbar} \right).

(You can check the normalization constant at t = 0.) For general x, the two terms in the bracket rotate in the complex plane at different rates, so their sum has a time-varying magnitude. That is to say, |ψ(x,t)|² varies in time, so the particle must be moving around—this is not a stationary state.

Exercise: To see this, note that at t = 0 the wave function is

\psi(x,0) = \frac{1}{\sqrt{L}} \left( \sin(\pi x/L) + \sin(2\pi x/L) \right),

and sketch this function: the particle is more likely to be found in the left-hand half of the box. Now, suppose the time is t = \pi\hbar/(E_2 - E_1), so that e^{-i(E_2 - E_1)t/\hbar} = -1. At this time,

\psi(x,t) = \frac{e^{-iE_1 t/\hbar}}{\sqrt{L}} \left( \sin(\pi x/L) - \sin(2\pi x/L) \right),

and it's easy to see that the particle is more likely to be found in the right-hand half.

That is to say, this wave function, a linear sum of wave functions corresponding to different energies, has a probability distribution that sloshes back and forth in the box; and any attempt to describe a classical-type particle motion, bouncing back and forth, necessarily involves adding quantum wave functions of different energies. Note that the frequency of the sloshing motion depends on the difference of the two energies: how constructively the two components interfere depends on the difference of the phases in the energies at the time. A single energy wave function always has a static probability distribution.

Of course, the total probability of finding the particle somewhere in the box remains unity: the normalization constant is time-independent.

The Time-Independent Schrödinger Equation: Eigenstates and Eigenvalues

The only way to prevent |ψ(x,t)|² varying in time is to have all its parts changing phase in time at the same rate. This means they all correspond to the same energy. If we restrict our considerations to such stationary states, the wave function can be factorized as

\psi(x,t) = \psi(x) \, e^{-iEt/\hbar},

and putting this wave function into the Schrödinger equation we find

-\frac{\hbar^2}{2m} \frac{d^2 \psi(x)}{dx^2} + V(x) \psi(x) = E \psi(x).

This is the time-independent Schrödinger equation, and its solutions are the spatial wave functions for stationary states, states of definite energy. These are often called eigenstates of the equation. The values of energy corresponding to these eigenstates are called the eigenvalues.

An Important Point: What, Exactly, Happens at the Wall?

Consider again the wavefunction for the lowest energy state of a particle confined between walls at x = 0 and x = L. The reader should sketch the wavefunction from some point to the left of x = 0 over to the right of x = L. To the left of x = 0, the wavefunction is exactly zero, then at x = 0 it takes off to the right (inside the box) as a sine curve. In other words, at the origin the slope of the wavefunction ψ is zero to the left, nonzero to the right. There is a discontinuity in the slope at the origin: this means the second derivative of ψ is infinite at the origin. On examining the time-independent Schrödinger equation above, we see the equation can only be satisfied at the origin because the potential becomes infinite there—the wall is an infinite potential. (And, in fact, since ψ becomes zero on approaching the origin from inside the box, the limit must be treated carefully.)
An Important Point: What, Exactly, Happens at the Wall?

Consider again the wavefunction for the lowest energy state of a particle confined between walls at x = 0 and x = L. The reader should sketch the wavefunction from some point to the left of x = 0 over to the right of x = L. To the left of x = 0, the wavefunction is exactly zero, then at x = 0 it takes off to the right (inside the box) as a sine curve. In other words, at the origin the slope of the wavefunction ψ is zero to the left, nonzero to the right. There is a discontinuity in the slope at the origin: this means the second derivative of ψ is infinite at the origin. On examining the time-independent Schrödinger equation above, we see the equation can only be satisfied at the origin because the potential becomes infinite there—the wall is an infinite potential. (And, in fact, since ψ becomes zero on approaching the origin from inside the box, the limit must be treated carefully.)

It now becomes obvious that if the box does not have infinite walls, but merely high ones, a ψ describing a confined particle cannot suddenly go to zero at the walls: the second derivative must remain finite. For non-infinite walls, ψ and its derivative must be continuous on entering the wall. This has the important physical consequence that ψ will be nonzero at least for some distance into the wall, even if classically the confined particle does not have enough energy to "climb the wall". (Which it doesn't, if it's confined.) Thus, in quantum mechanics, there is a non-vanishing probability of finding the particle in a region which is "classically forbidden" in the sense that it doesn't have enough energy to get there.
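To put a number on this penetration, note that inside a wall of height V > E the time-independent equation gives ψ'' = κ²ψ with κ = √(2m(V−E))/ℏ, so ψ decays like e^(−κx). A small sketch (my own illustration; the 1 eV and 2 eV figures are arbitrary):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J per electron-volt

E = 1.0 * eV             # particle energy (illustrative)
V = 2.0 * eV             # finite wall height (illustrative)

# Inside the wall psi'' = kappa^2 psi, so psi ~ exp(-kappa x)
kappa = np.sqrt(2.0 * m_e * (V - E)) / hbar
print(f"decay length 1/kappa = {1e9/kappa:.3f} nm")   # about 0.2 nm
```

So for electron-volt energy scales the wave function leaks a few ångströms into the classically forbidden region.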
Making waves - singly

First observed in the waters of a Scottish canal, solitary waves, or solitons, have applications right across physics, Ray Girvan discovers

Scientific Computing World: May/June 2005

Background image: 'In the Hollow of a Wave off the Coast at Kanagawa', c. 1830, Katsushika Hokusai.

The discovery of solitons is one of the nicer stories of important science arising from an apparently insignificant observation. In August 1834, the naval engineer John Scott Russell was watching a horse-drawn barge on the Union Canal, Hermiston, Edinburgh, as part of his work on hull design. When the cable snapped and the barge suddenly stopped, Russell was impressed by what happened: 'A mass of water rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some 30 feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel'.

Russell went on to a distinguished career, but despite his experimental work in a homebrew wave tank and a subsequent paper, his contemporaries never shared his view of the importance of what he called the 'Wave of Translation'. Partial vindication came later in the 19th century, when Boussinesq (1872) and Korteweg and de Vries (1895) showed how such self-reinforcing solitary waves arose from partial differential equations describing shallow water motion. But in 1965, Martin Kruskal and Norman Zabusky discovered a surprising result: an entirely different system of coupled harmonic oscillators, the Fermi-Pasta-Ulam experiment, also yielded the Korteweg-de Vries (KdV) equation. Kruskal and Zabusky coined the term soliton for these travelling wave solutions, alluding to their particle-like properties.

Whereas standard waves have peaks and troughs, and disperse as they travel, solitons consist of a single non-dispersing peak: linear effects spreading the waveform are exactly balanced by non-linear ones that focus it. Furthermore, they show elastic interaction: two solitons of different sizes (and hence velocities) can pass through each other. This isn't, however, a normal wave collision where the heights add linearly; at the instant of superposition, solitons merge with a broader, lower peak.

While solitons were first recognised on the surface of water, the commonest ones in water actually happen underneath, as internal oceanic waves propagating on the pycnocline (the interface between density layers). Sailors have long known of bands of rough and smooth sea - 'tide rips' and 'slicks' - as well as the phenomenon of 'dead water', increased drag at the mouth of fjords. But post-1970s observation, particularly with satellite Synthetic Aperture Radar (SAR) and ship-borne Doppler Current Profiling, revealed these to be the surface manifestation of 'undular bores', subsurface packets of solitons. Typically travelling as 'waves of depression' - troughs only - of tens of metres amplitude and often kilometres in wavelength, they are initiated when tidal flow is perturbed by underwater features such as ridges.
They're more than a nautical curiosity, as they affect acoustic propagation in the sea (of military interest); mix sediments and nutrients (of ecological interest); and put potentially dangerous stress on the legs of oil rigs. Oceanic undular bores appear to arise by spontaneous breakdown of larger perturbations in systems governed by the KdV equation. This also occurs in the atmosphere, occasionally in the higher mesosphere, but most commonly in the lower atmosphere, the troposphere. A spectacular, much-publicised example is the Morning Glory, an undular bore that forms seasonally over the Gulf of Carpentaria, Australia. In this case, the bore travels in an inversion layer - cold air trapped between the ground and warmer air above - each soliton generating a roll-shaped bank of cloud hundreds of kilometres long.

Bores, naturally, are better known as a river surface phenomenon. One spectacular example appeared around January 10th this year, when many newspapers worldwide published a photograph supposedly showing the instant of impact of the 2004 'Boxing Day Tsunami'. Other papers were suspicious. Lack of attribution played a part, as did the grins and umbrellas of the onlookers shown in companion photos that joined it on the e-mail circuit. Collective debunking, now summarised at the urban-myths website, soon revealed that the photos dated from 2002 and showed a tidal bore on the Qiantang River, Hangzhou, China. The largest in the world, with a wavefront up to 9 metres high, the Hangzhou bore (called the Black Dragon) is the subject of an annual tide-watching festival.

It's very often stated that tidal bores on rivers are the definitive example of solitons. The situation isn't so straightforward (and not helped by a mess of terminology from different fields). A bore is a general term for a moving step-discontinuity in water level, otherwise called a 'hydraulic jump'. The classic bore - regionally a mascaret, pororoca and aegir - arises in funnel-shaped estuaries that amplify incoming tides, the rapid rise propagating upstream against the flow of the river feeding the estuary. The profile depends on the Froude number, a dimensionless ratio of inertial and gravitational effects. At its most energetic, a bore has a turbulent breaking wavefront like an advancing waterfall; this is effectively a shockwave rather than a soliton. Slower bores take on an oscillatory profile with a leading wave (a dispersive shockwave) followed by a train of solitons.

An even more complex question is whether tsunami waves involve solitons. Tsunami waves generated by sharp localised impulses, such as meteorite strikes into the sea, generally do. Models of the late Jurassic Mjolnir impact by the Simula Research Laboratory, University of Oslo, predicted trains of solitary waves. A similar effect occurred with the 1958 mega-tsunami at Lituya, Alaska, when rock dropped en masse into a bay following a landslide. An even smaller, but still dangerous, equivalent is the wash from high-speed super-ferries that produce soliton wakes. Earthquake tsunamis such as the 2004 tsunami are initiated by broader-scale impulses, and the spreading mechanism depends on the model. As long-wavelength water waves, tsunamis are generally modelled by the shallow water or long wave equations (a simplification of the Navier-Stokes equations for cases where the wavelength is much larger than the water depth). These incorporate the Boussinesq and KdV equations as further approximations, both of which can give soliton solutions.
Nevertheless, it's difficult to check models; tsunamis are near-impossible to observe in mid-ocean before they are modified by shore effects: the catastrophic increase in amplitude, and the steepening into bores. Sea level measurement shows, however, that tsunami waves, again unlike solitons in the strict sense, have both peaks and troughs. (A classic warning sign of an impending tsunami is the sea level dropping before the first wavefront arrives - a detail enshrined in the story of Hamaguchi Goryo, who in 1854 burned his rice harvest to warn villagers as the sea receded.) Some analyses have suggested, however, that the regime may depend on travel distance: that tsunami waves close to the epicentre arrive as continuous sinusoidal waves, but long-distance ones may resolve into solutions of the KdV equation and travel as a train of solitary waves. In general, though, all I can conclude is that 'tsunami' and 'soliton' aren't terms widely associated in the scientific literature.

Soliton behaviour has also been seen in other fluid-like systems such as plasmas and flowing sand (barchan dunes have been observed to pass through each other). The Great Red Spot of Jupiter may also be some form of soliton. Following Kruskal and Zabusky's discovery of its broader applicability, soliton theory has extended well beyond the original application to fluids. Solitons appear in many other areas of physics governed by weakly nonlinear PDEs: for instance, the FitzHugh-Nagumo equations describing nerve impulse propagation; and the sine-Gordon equation in solid state physics and non-linear optics.

It's hard to predict where solitons will pop up next. One of their more intriguing manifestations is 'light bullets', spherical solitary waves (as predicted by the non-linear Schrödinger equation) in non-linear optical media excited by laser. On collision, they show various behaviours - they can split, fuse, alter path and tunnel through each other - that might be harnessed to make optical computers. One optical effect that has reached fruition, however, is soliton-based communications. The idea was first suggested by Akira Hasegawa and Fred Tappert in 1973, when they showed theoretically that solitons could arise in optical fibres with a suitably tailored non-linear relation between light intensity and refractive index. Practical research took over a decade to catch up. Management of dispersion was one of the problems, and soliton technology, which sends laser data as 'pulse' vs. 'dark' states, has been in long-running competition for bandwidth and distance with the more traditional NRZ (non-return-to-zero) systems that send two intensity states with no zero state. Even so, a number of major telecoms providers, such as Marconi and Corvis Corporation, are now using soliton technology for ultra-long-haul (ULH) optical fibre networks that communicate over several thousand kilometres.

John Scott Russell, one feels, would have been delighted by the wealth of phenomena arising from his Wave of Translation. As Chris Eilbeck's Solitons Home Page at Heriot-Watt University says, 'It is fitting that a fibre-optic cable linking Edinburgh and Glasgow now runs beneath the very tow-path from which John Scott Russell made his initial observations, and along the aqueduct which now bears his name'.

Tackling the mathematics

Due to the analytical insolubility of the underlying partial differential equations, early work on solitons had to be done by numerical methods.
This is still the mainstay of work with general starting conditions and geometries. Femlab, the finite element solver from Comsol, includes the Korteweg-de Vries as one of its standard equation-based models (images top and right, from Femlab 3.1). Using a time-dependent solver, it demonstrates a succession of faster solitons passing through a slower one, all reforming after the collision. The post-processed domain plot shows the solution extruded along the time axis. Dr Magnus Olsson, product manager for Comsol's Electromagnetics Module, told me that Femlab 3.2 will stress time-dependent formulations in non-linear optics and electromagnetics, the type of media in which soliton effects are of increasing practical importance. For instance, in photonic crystals, which manipulate light internally through non-linear optical properties, the lossless transmission of a pulse round a right-angled bend in a waveguide can be modelled with sine-Gordon solitons. (This model, incidentally, is also applicable to the motion of dislocations in metals and the 'unzipping' of DNA.)

The Mathematica image plots the exact solution for the interaction of two solitons, showing the characteristic phase shift and nonlinear superposition of amplitudes. It's now known that an isolated single-soliton solution to the KdV equation has a sech² profile. This result arose from the analytical solution of the Korteweg-de Vries equation, found in 1967 using the inverse scattering transform of Gardner, Greene, Kruskal, and Miura. A special understanding came via the work of Hungarian-born mathematician Peter D Lax, who recently won the 2005 Abel Prize for his lifetime contributions to the solution of PDEs. The transform method worked through several obscure steps (Professor Helge Holden, summing up Lax's work on the Abel Prize site, calls them 'miracles'). Lax reformulated the solution in terms of two operators, a Lax pair, that not only explained how the transform worked but also made a large family of other soliton-generating PDEs integrable.
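To complement those packaged tools, here is a rough, self-contained Python sketch (my own illustration, not the Femlab or Mathematica models) of the same experiment: it integrates the KdV equation u_t + 6uu_x + u_xxx = 0 pseudospectrally, starting from two exact sech² solitons, and lets the taller, faster one overtake the slower one. The grid size, domain, and speeds are arbitrary demo choices, and no dealiasing is applied, so treat the result as qualitative.

```python
import numpy as np

Nx, Lx = 256, 50.0
x = np.linspace(0.0, Lx, Nx, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(Nx, d=Lx/Nx)

def soliton(x, x0, c):
    """Exact one-soliton solution u = (c/2) sech^2( sqrt(c)/2 * (x - x0) )
    of u_t + 6 u u_x + u_xxx = 0: amplitude c/2, speed c."""
    return 0.5*c / np.cosh(0.5*np.sqrt(c)*(x - x0))**2

u0 = soliton(x, 10.0, 1.5) + soliton(x, 25.0, 0.5)

# Integrating-factor form: with v = exp(-i k^3 t) * u_hat the dispersive
# term is handled exactly, leaving only the nonlinear term -3 i k (u^2)_hat.
def rhs(t, v):
    E = np.exp(1j*k**3*t)
    u = np.real(np.fft.ifft(E*v))
    return -3j*k*np.fft.fft(u*u)/E

v, t, dt = np.fft.fft(u0), 0.0, 0.005
for _ in range(8000):                       # classical RK4 up to t = 40
    s1 = rhs(t, v);          s2 = rhs(t + dt/2, v + dt*s1/2)
    s3 = rhs(t + dt/2, v + dt*s2/2); s4 = rhs(t + dt, v + dt*s3)
    v += dt*(s1 + 2*s2 + 2*s3 + s4)/6
    t += dt

u = np.real(np.fft.ifft(np.exp(1j*k**3*t)*v))
print(f"tallest peak after collision: {u.max():.3f}  (initial: {u0.max():.3f})")
print(f"'mass' integral, conserved:   {np.trapz(u, x):.3f} vs {np.trapz(u0, x):.3f}")
# Both solitons re-emerge with their original heights, shifted in phase:
# the elastic interaction described in the article.
```

The collision happens around t ≈ 15; by t = 40 the two peaks are well separated again, each with its original sech² shape.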
Quantum theory is unsettling. Nobel laureate Richard Feynman admitted that it “appears peculiar and mysterious to everyone – both to the novice and to the experienced physicist.” Niels Bohr, one of its founders, told a young colleague, “If it does not boggle your mind, you understand nothing.” Physicists have been quarreling over its interpretation since the legendary arguments between Bohr and Einstein in the 1920s. So have philosophers, who agree that it has profound implications but cannot agree on what they are. Even the man on the street has heard strange rumors about the Heisenberg Uncertainty Principle, of reality changing when we try to observe it, and of paradoxes where cats are neither alive nor dead till someone looks at them.

Quantum strangeness, as it is sometimes called, has been a boon to New Age quackery. Books such as The Tao of Physics (1975) and The Dancing Wu Li Masters (1979) popularized the idea that quantum theory has something to do with eastern mysticism. These books seem almost sober today when we hear of “quantum telepathy,” “quantum ESP,” and, more recently, “quantum healing,” a fad spawned by Deepak Chopra’s 1990 book of that name. There is a flood of such quantum flapdoodle (as the physicist Murray Gell-Mann called it). What, if anything, does it all mean? Amid all the flapdoodle, what are the serious philosophical ideas? And what of the many authors who claim that quantum theory has implications favorable to religious belief? Are they on to something, or have they been taken in by fuzzy thinking and New Age nonsense?

It all began with a puzzle called wave-particle duality. This puzzle first appeared in the study of light. Light was understood by the end of the nineteenth century to consist of waves in the electromagnetic field that fills all of space. The idea of fields goes back to Michael Faraday, who thought of magnetic and electrical forces as being caused by invisible “lines of force” stretching between objects. He envisioned space as being permeated by such force fields. In 1864, James Clerk Maxwell wrote down the complete set of equations that govern electromagnetic fields and showed that waves propagate in them, just as sound waves propagate in air. This understanding of light is correct, but it turned out there was more to the story.

Strange things began to turn up. In 1900, Max Planck found that a certain theoretical conundrum could be resolved only by assuming that the energy in light waves comes in discrete, indivisible chunks, which he called quanta. In other words, light acts in some ways like it is made up of little particles. Planck’s idea seemed absurd, for a wave is something spread out and continuous, while a particle is something pointlike and discrete. How can something be both one and the other? And yet, in 1905, Einstein found that Planck’s idea was needed to explain another puzzling behavior of light, called the photoelectric effect. These developments led Louis de Broglie to make an inspired guess: If waves (such as light) can act like particles, then perhaps particles (such as electrons) can act like waves. And, indeed, this proved to be the case. It took a generation of brilliant physicists (including Bohr, Heisenberg, Schrödinger, Born, Dirac, and Pauli) to develop a mathematically consistent and coherent theory that described and made some sense out of wave-particle duality. Their quantum theory has been spectacularly successful.
It has been applied to a vast range of phenomena, and hundreds of thousands of its predictions about all sorts of physical systems have been confirmed with astonishing accuracy.

Great theoretical advances in physics typically result in profound unifications of our understanding of nature. Newton’s theories gave a unified account of celestial and terrestrial phenomena; Maxwell’s equations unified electricity, magnetism, and optics; and the theory of relativity unified space and time. Among the many beautiful things quantum theory has given us is a unification of particles and forces. Faraday saw that forces arise from fields, and Maxwell saw that fields give rise to waves. Thus, when quantum theory showed that waves are particles (and particles waves), a deep unity of nature came into view: The forces by which matter interacts and the particles of which it is composed are both manifestations of a single kind of thing – “quantum fields.”

The puzzle of how the same thing can be both a wave and a particle remains, however. Feynman called it “the only real mystery” in science. And he noted that, while we “can tell how it works,” we “cannot make the mystery go away by ‘explaining’ how it works.” Quantum theory has a precise mathematical formalism, one on which everyone agrees and that tells how to calculate right answers to the questions physicists ask. But what really is going on remains obscure – which is why quantum theory has engendered unending debates over the nature of physical reality for the past eighty years.

The problem is this: At first glance, wave-particle duality is not only mysterious but inconsistent in a blatant way. The inconsistency can be understood with a thought experiment. Imagine a burst of light from which a light wave ripples out through an ever-widening sphere in space. As the wave travels, it gets more attenuated, since the energy in it is getting spread over a wider and wider area. (That is why the farther you are from a light bulb, the fainter it appears.) Now, suppose a light-collecting device is set up, a box with a shutter – essentially, a camera. The farther away it is placed from the light burst, the less light it will collect. Suppose the light-collecting box is set up at a distance where it will collect exactly a thousandth of the light emitted in the burst. The inconsistency arises if the original burst contained, say, fifty particles of light. For then it appears that the light-collector must have collected 0.05 particles (a thousandth of fifty), which is impossible, since particles of light are indivisible. A wave, being continuous, can be infinitely attenuated or subdivided, whereas a particle cannot.

Quantum theory resolves this by saying that the light-collector, rather than collecting 0.05 particles, has a 0.05 probability of collecting one particle. More precisely, the average number of particles it will collect, if the same experiment is repeated many times, is 0.05. Wave-particle duality, which gave rise to quantum theory in the first place, forces us to accept that quantum physics is inherently probabilistic. Roughly speaking, in pre-quantum, classical physics, one calculated what actually happens, while in quantum physics one calculates the relative probabilities of various things happening. This hardly resolves the mystery. The probabilistic nature of quantum theory leads to many strange conclusions. A famous example comes from varying the experiment a little.
Suppose an opaque wall with two windows is placed between the light-collector and the initial burst of light. Some of the light wave will crash into the wall, and some will pass through the windows, blending together and impinging on the light-collector. If the light-collector collects a particle of light, one might imagine that the particle had to have come through either one window or the other. The rules of the quantum probability calculus, however, compel the weird conclusion that in some unimaginable way the single particle came through both windows at once. Waves, being spread out, can go through two windows at once, and so the wave-particle duality ends up implying that individual particles can also. Things get even stranger, and it is clear why some people pine for the good old days when waves were waves and particles were particles. One of those people was Albert Einstein. He detested the idea that a fundamental theory should yield only probabilities. “God does not play dice!” he insisted. In Einstein’s view, the need for probabilities simply showed that the theory was incomplete. History supported his claim, for in classical physics the use of probabilities always stemmed from incomplete information. For example, if one says that there is a 60 percent chance of a baseball hitting a glass window, it is only because one doesn’t know the ball’s direction and speed well enough. If one knew them better (and also knew the wind velocity and all other relevant variables), one could definitely say whether the ball would hit the window. For Einstein, the probabilities in quantum theory meant only that there were as-yet-unknown variables: hidden variables, as they are called. If these were known, then in principle everything could be predicted exactly, as in classical physics. Many years have gone by, and there is still no hint from any experiment of hidden variables that would eliminate the need for probabilities. In fact, the famed Heisenberg Uncertainty Principle says that probabilities are ineradicable from physics. The thought experiment of the light burst and light-collector showed why: If one and the same entity is to behave as both a wave and a particle, then an understanding in terms of probabilities is absolutely required. (For, again, 0.05 of a particle makes no sense, whereas a 0.05 chance of a particle does.) The Uncertainty Principle, the bedrock of quantum theory, implies that even if one had all the information there is to be had about a physical system, its future behavior cannot be predicted exactly, only probabilistically. This last statement, if true, is of tremendous philosophical and theological importance. It would spell the doom of determinism, which for so long had appeared to spell the doom of free will. Classical physics was strictly deterministic, so that (as Laplace famously said) if the state of the physical world were completely specified at one instant, its whole future development would be exactly and uniquely determined. Whether a man lifts his arm or nods his head now would (in a world governed by classical physical laws) be an inevitable consequence of the state of the world a billion years ago. But the death of determinism is not the only deep conclusion that follows from the probabilistic nature of quantum theory. An even deeper conclusion that some have drawn is that materialism, as applied to the human mind, is wrong. 
Eugene Wigner, a Nobel laureate, argued in a famous essay that philosophical materialism is not “logically consistent with present quantum mechanics.” And Sir Rudolf Peierls, another leading physicist, maintained that “the premise that you can describe in terms of physics the whole function of a human being . . . including its knowledge, and its consciousness, is untenable.” These are startling claims. Why should a mere theory of matter imply anything about the mind?

The train of logic that leads to this conclusion is rather straightforward, if a bit subtle, and can be grasped without knowing any abstruse mathematics or physics. It starts with the fact that for any physical system, however simple or complex, there is a master equation, called the Schrödinger equation, that describes its behavior. And the crucial point on which everything hinges is that the Schrödinger equation yields only probabilities. (Only in special cases are these exactly 0, or 100 percent.) But this immediately leads to a difficulty: There cannot always remain just probabilities; eventually there must be definite outcomes, for probabilities must be the probabilities of definite outcomes. To say, for example, there is a 60 percent chance that Jane will pass the French exam is meaningless unless at some point there is going to be a French exam on which Jane will receive a definite grade. Any mere probability must eventually stop being a mere probability and become a certainty or it has no meaning even as a probability. In quantum theory, the point at which this happens, the moment of truth, so to speak, is traditionally called the collapse of the wave function. The big question is when this occurs.

Consider the thought experiment again, where there was a 5 percent chance of the box collecting one particle and a 95 percent chance of it collecting none. When does the definite outcome occur in this case? One can imagine putting a mechanism in the box that registers when a particle of light has been collected by making, say, a red indicator light go on. The answer would then seem plain: The definite outcome happens when the red light goes on (or fails to do so). But this does not really produce a definite outcome, for a simple reason: Any mechanism one puts into the light-collecting box is just itself a physical system and is therefore described by a Schrödinger equation. And that equation yields only probabilities. In particular, it would say there is a 5 percent chance that the box collected a particle and that the red indicator light is on, and a 95 percent chance that it did not collect a particle and that the indicator light is off. No definite outcome has occurred. Both possibilities remain in play.

This is a deep dilemma. A probability must eventually get resolved into a definite outcome if it is to have any meaning at all, and yet the equations of quantum theory when applied to any physical system yield only probabilities and not definite outcomes. Of course, it seems that when a person looks at the red light and comes to the knowledge that it is on or off, the probabilities do give way to a definite outcome, for the person knows the truth of the matter and can affirm it with certainty.
And this leads to the remarkable conclusion of this long train of logic: As long as only physical structures and mechanisms are involved, however complex, their behavior is described by equations that yield only probabilities – and once a mind is involved that can make a rational judgment of fact, and thus come to knowledge, there is certainty. Therefore, such a mind cannot be just a physical structure or mechanism completely describable by the equations of physics.

Has there been a sleight-of-hand? How did mind suddenly get into the picture? It goes back to probabilities. A probability is a measure of someone’s state of knowledge or lack of it. Since quantum theory is probabilistic, it makes essential reference to someone’s state of knowledge. That someone is traditionally called the observer. As Peierls explained, “The quantum mechanical description is in terms of knowledge, and knowledge requires somebody who knows.”

I have been explaining some of the implications (as Wigner, Peierls, and others saw them) of what is usually called the traditional, Copenhagen, or standard interpretation of quantum theory. The term “Copenhagen interpretation” is unfortunate, since it carries with it the baggage of Niels Bohr’s philosophical views, which were at best vague and at worst incoherent. One can accept the essential outlines of the traditional interpretation (first clearly delineated by the great mathematician John von Neumann) without endorsing every opinion of Bohr.

There are many people who do not take seriously the traditional interpretation of quantum theory – precisely because it gives too great an importance to the mind of the human observer. Many arguments have been advanced to show its absurdity, the most famous being the Schrödinger Cat Paradox. In this paradox one imagines that the mechanism in the light-collecting box kills a cat rather than merely making a red light go on. If, as the traditional view has it, there is not a definite outcome until the human observer knows the result, then it would seem that the cat remains in some kind of limbo, not alive or dead, but 95 percent alive and 5 percent dead, until the observer opens the box and looks at the cat – which is absurd. It would mean that our minds create reality or that reality is perhaps only in our minds.

Many philosophers attack the traditional interpretation of quantum theory as denying objective reality. Others attack it because they don’t like the idea that minds have something special about them not describable by physics. The traditional interpretation certainly leads to thorny philosophical questions, but many of the common arguments against it are based on a caricature. Most of its seeming absurdities evaporate if it is recognized that what is calculated in quantum theory’s wavefunction is not to be identified simply with what is happening, has happened, or will happen but rather with what someone is in a position to assert about what is happening, has happened, or will happen. Again, it is about someone’s (the observer’s) knowledge. Before the observer opens the box and looks at the cat, he is not in a position to assert definitely whether the cat is alive or dead; afterward, he is – but the traditional interpretation does not imply that the cat is in some weird limbo until the observer looks.
On the contrary, when the observer checks the cat’s condition, his observation can include all the tests of forensic pathology that would allow him to pin down the time of the cat’s death and say, for instance, that it occurred thirty minutes before he opened the box. This is entirely consistent with the traditional interpretation of quantum theory. Another observer who checked the cat at a different time would have a different “moment of truth” (so the wavefunction that expresses his state of knowledge would collapse when he looked), but he would deduce the same time of death for the cat. There is nothing subjective here about the cat’s death or when it occurred. The traditional interpretation implies that just knowing A, B, and C, and applying the laws of quantum theory, does not always answer (except probabilistically) whether D is true. Finding out definitely about D may require another observation. The supposedly absurd role of the observer is really just a concomitant of the failure of determinism. The trend of opinion among physicists and philosophers who think about such things is away from the old Copenhagen interpretation, which held the field for four decades. There are, however, only a few coherent alternatives. An increasingly popular one is the many-worlds interpretation, based on Hugh Everett’s 1957 paper, which takes the equations of physics as the whole story. If the Schrödinger equation never gives definite and unique outcomes, but leaves all the possibilities in play, then we ought to accept this, rather than invoking mysterious observers with their minds’ moments of truth. So, for example, if the equations assign the number 0.05 to the situation where a particle has been collected and the red light is on, and the number 0.95 to the situation where no particle has been collected and the red light is off, then we ought to say that both situations are parts of reality (though one part is in some sense larger than the other by the ratio 0.95 to 0.05). And if an observer looks at the red light, then, since he is just part of the physical system and subject to the same equations, there will be a part of reality (0.05 of it) in which he sees the red light on and another part of reality (0.95 of it) in which he sees the red light off. So physical reality splits up into many versions or branches, and each human observer splits up with it. In some branches a man will see that the light is on, in some he will see that the light is off, in others he will be dead, in yet others he will never have been born. According to the many-worlds interpretation, there are an infinite number of branches of reality in which objects (whether particles, cats, or people) have endlessly ramifying alternative histories, all equally real. Not surprisingly, the many-worlds interpretation is just as controversial as the old Copenhagen interpretation. In the view of some thinkers, the Copenhagen and many-worlds interpretation both make the same fundamental mistake. The whole idea of wave-particle duality was a wrong turn, they say. Probabilities are needed in quantum theory because in no other way can one make sense of the same entity being both a wave and a particle. But there is an alternative, going back to de Broglie, which says they are not the same entity. Waves are waves and particles are particles. The wave guides, or “pilots,” the particles and tells them where to go. The particles surf the wave, so to speak. 
Consequently, there is no contradiction in saying both that a tiny fraction of the wave enters the light collector and that a whole number of particles enters – or in saying that the wave went through two windows at once and each particle went through just one. De Broglie’s pilot-wave idea was developed much further by David Bohm in the 1950s, but it has only recently attracted a significant following. “Bohmian theory” is not just a different interpretation of quantum theory; it is a different theory. Nevertheless, Bohm and his followers have been able to show that many of the successful predictions of quantum theory can be reproduced in theirs. (It is questionable whether all of them can be.)

Bohm’s theory can be seen as a realization of Einstein’s idea of hidden variables, and its advocates see it as a vindication of Einstein’s well-known rejection of standard quantum theory. As Einstein would have wanted, Bohmian theory is completely deterministic. Indeed, it is an extremely clever way of turning quantum theory back into a classical and essentially Newtonian theory. The advocates of this idea believe that it solves all of the quantum riddles and is the only way to preserve philosophical sanity. However, most physicists, though impressed by its cleverness, regard it as highly artificial. In my view, the most serious objection to it is that it undoes one of the great theoretical triumphs in the history of physics: the unification of particles and forces. It gets rid of the mysteriousness of quantum theory by sacrificing much of its beauty.

What, then, are the philosophical and theological implications of quantum theory? The answer depends on which school of thought (Copenhagen, many worlds, or Bohmian) one accepts. Each has its strong points, but each also has features that many experts find implausible or even repugnant. One can find religious scientists in every camp. Peter E. Hodgson, a well-known nuclear physicist who is Catholic, insists that Bohmian theory is the only metaphysically sound alternative. He is unfazed that it brings back Newtonian determinism and mechanism. Don Page, a well-known theoretical cosmologist who is an evangelical Christian, prefers the many-worlds interpretation. He isn’t bothered by the consequence that each of us has an infinite number of alter egos.

My own opinion is that the traditional Copenhagen interpretation of quantum theory still makes the most sense. In two respects it seems quite congenial to the worldview of the biblical religions: It abolishes physical determinism, and it gives a special ontological status to the mind of the human observer. By the same token, it seems quite uncongenial to eastern mysticism. As the physicist Heinz Pagels noted in his book The Cosmic Code: “Buddhism, with its emphasis on the view that the mind-world distinction is an illusion, is really closer to classical, Newtonian physics and not to quantum theory [as traditionally interpreted], for which the observer-observed distinction is crucial.” If anything is clear, it is that quantum theory is as mysterious as ever. Whether the future will bring more-compelling interpretations of, or even modifications to, the mathematics of the theory itself, we cannot know.
Still, as Eugene Wigner rightly observed, “It will remain remarkable, in whatever way our future concepts develop, that the very study of the external world led to the conclusion that the content of the consciousness is an ultimate reality.” This conclusion is not popular among those who would reduce the human mind to a mere epiphenomenon of matter. And yet matter itself seems to be telling us that its connection to mind is more subtle than is dreamt of in their philosophy. Articles by Stephen M. Barr
The Feynman Lectures on Physics

From Wikipedia, the free encyclopedia

The Feynman Lectures on Physics, including Feynman's Tips on Physics: The Definitive and Extended Edition (2nd edition, 2005). Author: Richard P. Feynman, Robert B. Leighton and Matthew Sands. Country: United States. Language: English. Subject: Physics. Publisher: Addison–Wesley. OCLC 19455482.

The Feynman Lectures on Physics is a physics textbook based on some lectures by Richard P. Feynman, a Nobel laureate who has sometimes been called “The Great Explainer”.[1] The lectures were given to undergraduate students at the California Institute of Technology (Caltech) during 1961–1963. The book's authors are Feynman, Robert B. Leighton, and Matthew Sands.

The book comprises three volumes. The first volume focuses on mechanics, radiation, and heat, including relativistic effects. The second volume is mainly on electromagnetism and matter. The third volume is on quantum mechanics; it shows, for example, how the double-slit experiment contains the essential features of quantum mechanics. The book also includes chapters on mathematics and the relation of physics to other sciences.

The Feynman Lectures on Physics is perhaps the most popular physics book ever written. It has been printed in a dozen languages.[2] More than 1.5 million copies have been sold in English, and probably even more copies in foreign-language editions.[2] A 2013 review in Nature described the book as having "simplicity, beauty, unity … presented with enthusiasm and insight".[3] In 2013, Caltech made the book freely available on the web.

Feynman the “Great Explainer”

The Feynman Lectures on Physics found an appreciative audience beyond the undergraduate community. By 1960, Richard Feynman’s research and discoveries in physics had resolved a number of troubling inconsistencies in several fundamental theories. In particular, it was his work in quantum electrodynamics that would lead to the awarding in 1965 of the Nobel Prize in physics. At the same time that Feynman was at the pinnacle of his fame, the faculty of the California Institute of Technology was concerned about the quality of the introductory courses for undergraduate students. It was felt that these were burdened by an old-fashioned syllabus and that the exciting discoveries of recent years, many of which had occurred at Caltech, were not being conveyed to the students. Thus, it was decided to reconfigure the first physics course offered to students at Caltech, with the goal being to generate more excitement in the students. Feynman readily agreed to give the course, though only once. Aware of the fact that this would be a historic event, Caltech recorded each lecture and took photographs of each drawing made on the blackboard by Feynman. Based on the lectures and the tape recordings, a team of physicists and graduate students put together a manuscript that would become The Feynman Lectures on Physics.

Although Feynman's most valuable technical contribution to the field of physics may have been in the field of quantum electrodynamics, the Feynman Lectures were destined to become his most widely read work. The Feynman Lectures are considered to be one of the best and most sophisticated college-level introductions to physics.[4] Feynman himself, however, stated in his original preface that he was “pessimistic” with regard to the success with which he reached all of his students.
The Feynman lectures were written “to maintain the interest of very enthusiastic and rather smart students coming out of high schools and into Caltech.” Feynman was targeting the lectures to students who, “at the end of two years of our previous course, [were] very discouraged because there were really very few grand, new, modern ideas presented to them.” As a result, some physics students find the lectures more valuable after they obtain a good grasp of physics by studying more traditional texts. Many professional physicists refer to the lectures at various points in their careers to refresh their minds with regard to basic principles.

As the two-year course (1961–1963) was still being completed, rumor of it spread throughout the physics community. In a special preface to the 1989 edition, David Goodstein and Gerry Neugebauer claim that as time went on, the attendance of registered students dropped sharply but was matched by a compensating increase in the number of faculty and graduate students. Sands, in his memoir accompanying the 2005 edition, contests this claim. Goodstein and Neugebauer also state that, “it was [Feynman’s] peers — scientists, physicists, and professors — who would be the main beneficiaries of his magnificent achievement, which was nothing less than to see physics through the fresh and dynamic perspective of Richard Feynman,” and that his "gift was that he was an extraordinary teacher of teachers".

Addison–Wesley published a collection of problems to accompany The Feynman Lectures on Physics. The problem sets were first used in the 1962-1963 academic year and organized by Robert B. Leighton. Some of the problems are sophisticated enough to require understanding of topics as advanced as Kolmogorov's zero-one law, for example. Addison–Wesley also released in CD format all the audio tapes of the lectures, over 103 hours with Richard Feynman, after remastering the sound and clearing the recordings. For the CD release, the order of the lectures was rearranged from that of the original texts. (The publisher has released a table showing the correspondence between the books and the CDs.)

In March 1964, Feynman appeared before the freshman physics class as a guest lecturer, but the notes for this lecture were lost for a number of years. They were finally located, restored, and made available as Feynman's Lost Lecture: The Motion of Planets Around the Sun.

In 2005, Michael A. Gottlieb and Ralph Leighton co-authored Feynman's Tips on Physics, which includes four of Feynman's freshman lectures not included in the main text (three on problem solving, one on inertial guidance), a memoir by Matt Sands about the origins of the Feynman Lectures on Physics, and exercises (with answers) that were assigned to students by Robert B. Leighton and Rochus Vogt in recitation sections of the Feynman Lectures course at Caltech. Also released in 2005 was a "Definitive Edition" of the lectures which includes corrections to the original text. An account of the history of these famous volumes is given by Sands in his memoir article “Capturing the Wisdom of Feynman”, Physics Today, Apr 2005, p. 49.[5]

On September 13, 2013, in an email to members of the Feynman Lectures online forum, Gottlieb announced the launch of a new website by Caltech and The Feynman Lectures Website which offers "[a] free high-quality online edition" of the lecture text.
Volume I of the lectures is initially posted on this website, but other volumes are expected to be available in the near future.[6] To provide a device-independent reading experience, the website takes advantage of modern web technologies like HTML5, SVG, and MathJax to present text, figures, and equations at any size while maintaining display quality.[7]

Volume I. Mainly mechanics, radiation, and heat

Preface: “When new ideas came in, I would try either to deduce them if they were deducible or to explain that it was a new idea … and which was not supposed to be provable.”

Volume II. Mainly electromagnetism and matter

1. Electromagnetism
2. Differential calculus of vector fields
3. Vector integral calculus
4. Electrostatics
5. Application of Gauss' law
6. The electric field in various circumstances
7. The electric field in various circumstances (continued)
8. Electrostatic energy
9. Electricity in the atmosphere
10. Dielectrics
11. Inside dielectrics
12. Electrostatic analogs
13. Magnetostatics
14. The magnetic field in various situations
15. The vector potential
16. Induced currents
17. The laws of induction
18. The Maxwell equations
19. Principle of least action
20. Solutions of Maxwell's equations in free space
21. Solutions of Maxwell's equations with currents and charges
22. AC circuits
23. Cavity resonators
24. Waveguides
25. Electrodynamics in relativistic notation
26. Lorentz transformations of the fields
27. Field energy and field momentum
28. Electromagnetic mass (ref. to Wheeler–Feynman absorber theory)
29. The motion of charges in electric and magnetic fields
30. The internal geometry of crystals
31. Tensors
32. Refractive index of dense materials
33. Reflection from surfaces
34. The magnetism of matter
35. Paramagnetism and magnetic resonance
36. Ferromagnetism
37. Magnetic materials
38. Elasticity
39. Elastic materials
40. The flow of dry water
41. The flow of wet water
42. Curved space

Volume III. Quantum mechanics

1. Quantum behavior
2. The relation of wave and particle viewpoints
3. Probability amplitudes
4. Identical particles
5. Spin one
6. Spin one-half
7. The dependence of amplitudes on time
8. The Hamiltonian matrix
9. The ammonia maser
10. Other two-state systems
11. More two-state systems
12. The hyperfine splitting in hydrogen
13. Propagation in a crystal lattice
14. Semiconductors
15. The independent particle approximation
16. The dependence of amplitudes on position
17. Symmetry and conservation laws
18. Angular momentum
19. The hydrogen atom and the periodic table
20. Operators
21. The Schrödinger equation in a classical context: a seminar on superconductivity

Abbreviated editions

Six readily-accessible chapters were later compiled into a book entitled Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. Six more chapters are in the book Six Not So Easy Pieces: Einstein's Relativity, Symmetry and Space-Time. “Six Easy Pieces grew out of the need to bring to as wide an audience as possible, a substantial yet nontechnical physics primer based on the science of Richard Feynman…. General readers are fortunate that Feynman chose to present certain key topics in largely qualitative terms without formal mathematics….”[8]

Six Easy Pieces (1994)

1. Atoms in motion
2. Basic Physics
3. The relation of physics to other sciences
4. Conservation of energy
5. The theory of gravitation
6. Quantum behavior

Six Not-So-Easy Pieces (1998)

1. Vectors
2. Symmetry in physical laws
3. The special theory of relativity
4. Relativistic energy and momentum
5. Space-time
6. Curved space

The Very Best of The Feynman Lectures (Audio, 2005)

1. The Theory of Gravitation (Vol. I, Chapter 7)
2. Curved Space (Vol. II, Chapter 42)
3. Electromagnetism (Vol. II, Chapter 1)
4. Probability (Vol. I, Chapter 6)
5. The Relation of Wave and Particle Viewpoints (Vol. III, Chapter 2)
6. Superconductivity (Vol. III, Chapter 21)

References

1. LeVine, Harry (2009). The Great Explainer: The Story of Richard Feynman. Greensboro, North Carolina: Morgan Reynolds. ISBN 978-1-59935-113-1.
2. [1]
3. Phillips, R. (5 December 2013), "The Feynman lectures on physics", Nature, 504: 30-31.
4. Rohrlich, Fritz (1989), From Paradox to Reality: Our Basic Concepts of the Physical World, Cambridge University Press, p. 157, ISBN 0-521-37605-X.
5. See also: Welton, T.A., “Memory of Feynman”, Physics Today, Feb 2007, p. 46.
6. Text of the email to Feynman Lectures Forum members on Hacker News.
7. Footnote on the homepage of the website The Feynman Lectures on Physics.
8. Feynman, Richard Phillips; Leighton, Robert B.; Sands, Matthew (2011). Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. Basic Books. p. vii. ISBN 0-465-02529-3.
Wave Equations

Michael Fowler, University of Virginia

Photons and Electrons

We have seen that electrons and photons behave in a very similar fashion—both exhibit diffraction effects, as in the double slit experiment, both have particle-like or quantum behavior. We can in fact give a complete analysis of photon behavior—we can figure out how the electromagnetic wave propagates, using Maxwell's equations, then find that the probability that a photon is in a given small volume of space dx dy dz is proportional to |E|² dx dy dz, the energy density. On the other hand, our analysis of the electron's behavior is incomplete—we know that it must also be described by a wave function ψ(x, y, z, t) analogous to E, such that |ψ(x, y, z, t)|² dx dy dz gives the probability of finding the electron in a small volume dx dy dz around the point (x, y, z) at the time t. However, we do not yet have the analog of Maxwell's equations to tell us how ψ varies in time and space. The purpose of this section is to give a plausible derivation of such an equation by examining how the Maxwell wave equation works for a single-particle (photon) wave, and constructing parallel equations for particles which, unlike photons, have nonzero rest mass.

Maxwell's Wave Equation

Let us examine what Maxwell's equations tell us about the motion of the simplest type of electromagnetic wave—a monochromatic wave in empty space, with no currents or charges present. First, we briefly review the derivation of the wave equation from Maxwell's equations in empty space:

$$\nabla\cdot\vec{E}=0,\qquad \nabla\cdot\vec{B}=0,\qquad \nabla\times\vec{E}=-\frac{\partial\vec{B}}{\partial t},\qquad \nabla\times\vec{B}=\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}.$$

To derive the wave equation, we take the curl of the third equation:

$$\nabla\times(\nabla\times\vec{E})=-\frac{\partial}{\partial t}\,\nabla\times\vec{B}=-\frac{1}{c^2}\frac{\partial^2\vec{E}}{\partial t^2},$$

together with the vector operator identity

$$\nabla\times(\nabla\times\vec{E})=\nabla(\nabla\cdot\vec{E})-\nabla^2\vec{E}=-\nabla^2\vec{E},$$

to give

$$\nabla^2\vec{E}=\frac{1}{c^2}\frac{\partial^2\vec{E}}{\partial t^2}.$$

For a plane wave moving in the x-direction this reduces to

$$\frac{\partial^2 E}{\partial x^2}=\frac{1}{c^2}\frac{\partial^2 E}{\partial t^2}.$$

The monochromatic solution to this wave equation has the form

$$E(x,t)=E_0e^{i(kx-\omega t)}.$$

(Another possible solution is proportional to cos(kx - ωt). We shall find that the exponential form, although a complex number, proves more convenient. The physical electric field can be taken to be the real part of the exponential for the classical case.)

Applying the wave equation differential operator to our plane wave solution,

$$\left(\frac{\partial^2}{\partial x^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)E_0e^{i(kx-\omega t)}=\left(-k^2+\frac{\omega^2}{c^2}\right)E_0e^{i(kx-\omega t)}.$$

If the plane wave is a solution to the wave equation, this must be true for all x and t, so we must have

$$\omega=ck.$$

This is just the familiar statement that the wave must travel at c.

What does the Wave Equation tell us about the Photon?

We know from the photoelectric effect and Compton scattering that the photon energy and momentum are related to the frequency and wavelength of the light by

$$E=hf=\hbar\omega,\qquad p=\frac{h}{\lambda}=\hbar k.$$

Notice, then, that the wave equation tells us that ω = ck and hence E = cp.

To put it another way, if we think of $e^{i(kx-\omega t)}$ as describing a particle (photon) it would be more natural to write the plane wave as

$$E_0e^{i(px-Et)/\hbar},$$

that is, in terms of the energy and momentum of the particle.

In these terms, applying the (Maxwell) wave equation operator to the plane wave yields

$$E^2=c^2p^2.$$

The wave equation operator applied to the plane wave describing the particle propagation yields the energy-momentum relationship for the particle.
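The chain of reasoning above can be checked symbolically. A small sketch using the sympy library (my own illustration, not part of Fowler's notes): applying the wave operator to the plane wave returns −k² + ω²/c² times the wave, which vanishes exactly when ω = ck, i.e. E = cp.

```python
import sympy as sp

x, t, kk, w, c, E0 = sp.symbols('x t k omega c E_0', positive=True)
E = E0 * sp.exp(sp.I*(kk*x - w*t))                  # monochromatic plane wave
wave_op = sp.diff(E, x, 2) - sp.diff(E, t, 2)/c**2  # d^2/dx^2 - (1/c^2) d^2/dt^2
print(sp.simplify(wave_op/E))                       # -> -k**2 + omega**2/c**2
```

Setting the printed factor to zero recovers ω = ck, the statement that the wave travels at c.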
Constructing a Wave Equation for a Particle with Mass

The discussion above suggests how we might extend the wave equation operator from the photon case (zero rest mass) to a particle having rest mass m₀. We need a wave equation operator that, when it operates on a plane wave, yields

$$E^2=m_0^2c^4+c^2p^2.$$

Writing the plane wave function

$$\psi(x,t)=Ae^{i(px-Et)/\hbar},$$

where A is a constant, we find we can get E² = m₀²c⁴ + c²p² by adding a constant (mass) term to the differentiation terms in the wave operator:

$$\left(\frac{\partial^2}{\partial x^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}-\frac{m_0^2c^2}{\hbar^2}\right)Ae^{i(px-Et)/\hbar}=\frac{1}{\hbar^2}\left(-p^2+\frac{E^2}{c^2}-m_0^2c^2\right)\psi=0.$$

This wave equation is called the Klein-Gordon equation and correctly describes the propagation of relativistic particles of mass m₀. However, it's a bit inconvenient for nonrelativistic particles, like the electron in the hydrogen atom, just as E² = m₀²c⁴ + c²p² is less useful than E = p²/2m for this case.

A Nonrelativistic Wave Equation

Continuing along the same lines, let us assume that a nonrelativistic electron in free space (no potentials, so no forces) is described by a plane wave:

$$\psi(x,t)=Ae^{i(px-Et)/\hbar}.$$

We need to construct a wave equation operator which, applied to this wave function, just gives us the ordinary nonrelativistic energy-momentum relationship, E = p²/2m. The p² obviously comes as usual from differentiating twice with respect to x, but the only way we can get E is by having a single differentiation with respect to time, so this looks different from previous wave equations:

$$i\hbar\frac{\partial\psi(x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2}.$$

This is Schrödinger's equation for a free particle. It is easy to check that if ψ(x, t) has the plane wave form given above, the condition for it to be a solution of this wave equation is just E = p²/2m.

Notice one remarkable feature of the above equation—the i on the left means that ψ cannot be a real function.

How Does a Varying Potential Affect a de Broglie Wave?

The effect of a potential on a de Broglie wave was considered by Sommerfeld in an attempt to generalize the rather restrictive conditions in Bohr's model of the atom. Since the electron was orbiting in an inverse square force, just like the planets around the sun, Sommerfeld couldn't understand why Bohr's atom had only circular orbits, no Kepler-like ellipses. (Recall that all the observed spectral lines of hydrogen were accounted for by energy differences between these circular orbits.)

De Broglie's analysis of the allowed circular orbits can be formulated by assuming that at some instant in time the spatial variation of the wave function on going around the orbit includes a phase term of the form $e^{ipq/\hbar}$, where here the parameter q measures distance around the orbit. Now for an acceptable wave function, the total phase change on going around the orbit must be 2nπ, where n is an integer. For the usual Bohr circular orbit, p is constant on going around, q changes by 2πr, where r is the radius of the orbit, giving the usual angular momentum quantization.

What Sommerfeld did was to consider a general Kepler ellipse orbit, and visualize the wave going around such an orbit. Assuming the usual relationship λ = h/p, the wavelength will vary as the particle moves around the orbit, being shortest where the particle moves fastest, at its closest approach to the nucleus. Nevertheless, the phase change on moving a short distance Δq should still be pΔq/ℏ, and requiring the wave function to link up smoothly on going once around the orbit gives

$$\oint p\,dq=nh.$$

Thus only certain elliptical orbits are allowed. The mathematics is nontrivial, but it turns out that every allowed elliptical orbit has the same energy as one of the allowed circular orbits. This is why Bohr's theory gave all the energy levels. Actually, this whole analysis is old fashioned (it's called the "old quantum theory") but we've gone over it to introduce the idea of a wave with variable wavelength, changing with the momentum as the particle moves through a varying potential.
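The same symbolic check works for the free-particle Schrödinger equation (again my own sketch with sympy, not part of the notes): the plane wave is a solution precisely when E = p²/2m.

```python
import sympy as sp

x, t, p, E, m, hbar, A = sp.symbols('x t p E m hbar A', positive=True)
psi = A * sp.exp(sp.I*(p*x - E*t)/hbar)             # plane wave

# i hbar psi_t + (hbar^2 / 2m) psi_xx should vanish on a solution
residual = sp.I*hbar*sp.diff(psi, t) + hbar**2/(2*m)*sp.diff(psi, x, 2)
print(sp.simplify(residual/psi))                    # -> E - p**2/(2*m)
```

The residual is (E − p²/2m)ψ, confirming the dispersion relation; repeating the exercise with the Klein-Gordon operator gives E² − m₀²c⁴ − c²p² instead.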
Schrödinger's Equation for a Particle in a Potential

Let us consider first the one-dimensional situation of a particle going in the x-direction subject to a "roller coaster" potential. What do we expect the wave function to look like? We would expect the wavelength to be shortest where the potential is lowest, in the valleys, because that's where the particle is going fastest—maximum momentum. Perhaps slightly less obvious is that the amplitude of the wave would be largest at the tops of the hills (provided the particle has enough energy to get there) because that's where the particle is moving slowest, and therefore is most likely to be found.

With a nonzero potential present, the energy-momentum relationship for the particle becomes the energy equation

$$E=\frac{p^2}{2m}+V(x).$$

We need to construct a wave equation which leads naturally to this relationship. In contrast to the free particle cases discussed above, the relevant wave function here will no longer be a plane wave, since the wavelength varies with the potential. However, at a given x, the momentum is determined by the "local wavelength", that is,

$$p(x)=\frac{h}{\lambda(x)}.$$

It follows that the appropriate wave equation is:

$$i\hbar\frac{\partial\psi(x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2}+V(x)\psi(x,t).$$

This is the standard one-dimensional Schrödinger equation.

In three dimensions, the argument is precisely analogous. The only difference is that the square of the momentum is now a sum of three squared components, for the x, y and z directions, so p² = p_x² + p_y² + p_z², and the equation is:

$$i\hbar\frac{\partial\psi(x,y,z,t)}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\psi(x,y,z,t)+V(x,y,z)\psi(x,y,z,t).$$

This is the complete Schrödinger equation.
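For potentials where no plane-wave solution exists, the time-independent equation is easily solved numerically. A minimal finite-difference sketch (my own illustration, in units ℏ = m = 1) diagonalizes the Hamiltonian for the harmonic potential V = x²/2 and recovers the equally spaced levels E_n = n + 1/2 contrasted earlier with the box spectrum:

```python
import numpy as np

# Solve -(1/2) psi'' + V(x) psi = E psi on a grid, units hbar = m = 1
N = 1000
x = np.linspace(-8.0, 8.0, N)
dx = x[1] - x[0]
V = 0.5 * x**2                          # harmonic "roller coaster"

# Kinetic energy via the three-point second-derivative stencil
T = -(np.diag(np.full(N, -2.0)) +
      np.diag(np.ones(N-1), 1) +
      np.diag(np.ones(N-1), -1)) / (2.0 * dx**2)
H = T + np.diag(V)

E = np.linalg.eigvalsh(H)[:5]
print(np.round(E, 4))    # -> approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```

Swapping in any other V(x), a square well, a double well, the "roller coaster", changes one line; the eigenvalue spacing then probes the shape of the potential, as noted in the particle-in-a-box notes above.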
Biology Topics

In an effort to illuminate connections between chemistry and biology and spark students' excitement for chemistry, we incorporate frequent biology-related examples into the lectures. These in-class examples range from two to ten minutes, designed to succinctly introduce biological connections without sacrificing any chemistry content in the curriculum. A list of the biology-, medicine-, and MIT research-related examples used in 5.111 is provided below. Click on the associated PDF for more information on each example. To reinforce the connections formed in lecture, we also include biology-related problems in each homework assignment. Selected homework problems and solutions are available below.

L1. The importance of chemical principles. Example: Chemical principles in research at MIT.
L2. Discovery of electron and nucleus, need for quantum mechanics. Activity: Rutherford backscattering experiment with ping-pong ball alpha particles.
L3. Wave-particle duality of light. Example: Quantum dot research at MIT (PDF).
L4. Wave-particle duality of matter, Schrödinger equation. Demo: Photoelectric effect demonstration.
L5. Hydrogen atom energy levels. Demo: Viewing the hydrogen atom spectrum.
L6. Hydrogen atom wavefunctions (orbitals).
L7. p-orbitals.
L8. Multielectron atoms and electron configurations.
L9. Periodic trends. Example: Alkali metals in the body: Na and K versus Li (lithiated 7-Up) (PDF). Selected biology-related questions based on Lectures 1-9 (PDF); answer key (PDF).
L10. Periodic trends continued; covalent bonds. Example: Atomic size: sodium ion channels in neurons (PDF).
L11. Lewis structures. Examples: 1) Cyanide ion in cassava plants, cigarettes; 2) Thionyl chloride for the synthesis of novocaine.
L12. Exceptions to Lewis structure rules; ionic bonds. Examples: 1) Free radicals in biology (in DNA damage and essential for life); 2) Lewis structure example: Nitric oxide (NO) in vasodilation (and Viagra).
L13. Polar covalent bonds; VSEPR theory. Examples: 1) Water- versus fat-soluble vitamins (comparing folic acid and vitamin A); 2) Molecular shape: importance in enzyme-substrate complexes.
L14. Molecular orbital theory. Example: 2008 Nobel Prize in chemistry: Green Fluorescent Protein (GFP) (PDF).
L15. Valence bond theory and hybridization. Example: Restriction of rotation around double bonds: application to drug design (PDF).
L16. Determining hybridization in complex molecules; thermochemistry and bond energies/bond enthalpies. Examples: 1) Hybridization example: ascorbic acid (vitamin C); 2) Thermochemistry of glucose oxidation: harnessing energy from plants.
L17. Entropy and disorder. Examples: 1) Hybridization example: identifying molecules that follow the "morphine rule"; 2) ATP hydrolysis in the body.
L18. Free energy and control of spontaneity. Examples: 1) ATP-coupled reactions in biology; 2) Thermodynamics of hydrogen bonding: relevance to DNA replication.
L19. Chemical equilibrium.
L20. Le Chatelier's principle and applications to blood-oxygen levels. Examples: 1) Maximizing the yield of nitrogen fixation: inspiration from bacteria; 2) Le Chatelier's principle and hemoglobin: blood-oxygen levels (PDF - 2.7 MB). Selected biology-related questions based on Lectures 10-20 (PDF); answer key (PDF).
L21. Acid-base equilibrium: Is MIT water safe to drink? Demo: Determining pH of household items using a color indicator from cabbage leaves.
L22. Chemical and biological buffers.
L23. Acid-base titrations. Example: pH and blood: effects from vitamin B12 deficiency (PDF - 2.4 MB).
L24. Balancing oxidation/reduction equations.
L25. Electrochemical cells. Example: Oxidative metabolism of drugs (PDF). Demo: Oxidation of magnesium (resulting in a glowing block of dry ice).
L26. Chemical and biological oxidation/reduction reactions. Example: Reduction of vitamin B12 in the body (PDF). Selected biology-related questions based on Lectures 21-26 (PDF); answer key (PDF).
L27. Transition metals and the treatment of lead poisoning. Examples: 1) Metal chelation in the treatment of lead poisoning; 2) Geometric isomers and drugs, e.g. the anti-cancer drug cisplatin.
L28. Crystal field theory.
L29. Metals in biology. Example: Inspiration from metalloenzymes for the reduction of greenhouse gases (PDF - 1.3 MB). Activity: Toothpick models: gumdrop d-orbitals, jelly belly metals and ligands.
L30. Magnetism and spectrochemical theory. Demo: Oscillating clock reaction.
L31. Rate laws. Example: Kinetics of glucose oxidation (energy production) in the body (PDF). Activity: Hershey kiss "experiment" on the oxidation of glucose.
L32. Nuclear chemistry and elementary reactions. Example: Medical applications of radioactive decay (technetium-99) (PDF); "Days of Our Halflives" poem.
L33. Reaction mechanism. Example: Reaction mechanism of ozone decomposition (PDF).
L34. Temperature and kinetics. Demo: Liquid nitrogen (glowsticks: slowing the chemiluminescent reaction).
L35. Enzyme catalysis. Example: Enzymes as the catalysts of life, inhibitors (e.g. HIV protease inhibitors) (PDF).
L36. Biochemistry. Example: The methionine synthase case study (chemistry in solution!) (PDF). Selected biology-related questions based on Lectures 27-36 (PDF); answer key (PDF).
Friday, February 03, 2017

Lindblad equation can't solve any "problems" of quantum mechanics

What I find more ludicrous is Weinberg's and Hossenfelder's suggestion that such new terms would "solve" something about what they consider mysteries, paradoxes, or problems of quantum mechanics. The first sentence of Weinberg's paper says

"In searching for an interpretation of quantum mechanics we seem to be faced with nothing but bad choices."

and the following sentences repeat some of Weinberg's by now standard critical words about Copenhagen as well as other "interpretations". The message is that this work about the extra "Lindblad terms" solves some mystery of quantum mechanics because it makes something like the wave function collapse "more real". Similarly, the most positive paragraph of Hossenfelder's text in favor of these efforts calls the statement that such "fundamental decoherence" would "really solve the problem" merely unpopular.

I don't think that the right word is "unpopular" to describe the statement that such "fundamental decoherence" would "really solve the problem". Instead, this statement is self-evidently wrong. Even if the extra Lindblad parameters \(\lambda_{mn}\) were nonzero and discovered, and it won't happen, we wouldn't find any "more enlightening" version of quantum mechanics. We would still have similar equations with the same objects and with some new terms that used to be zero but are now nonzero. If a conceptual change appeared at all, the situation would clearly get more mysterious, not less so. If someone finds neutrinos mysterious, the discovery of the nonzero neutrino masses hardly makes things easier for him. Or consider the same sentence with the QCD theta-angle, the CP-violating phases, the cosmological constant, or any other parameter that could have been zero but wasn't. If you couldn't understand the theory with a vanishing value of these parameters, the more complex or generalized theory with the new nonzero parameters will be even harder for you, won't it?

OK, the Lindblad equation is the following equation for a density matrix:

\[
\begin{aligned}
\dot \rho(t) &= -i\,[H,\rho(t)]\,+\\
&+\sum_\alpha \left[ L_\alpha\, \rho(t)\, L^\dagger_\alpha - \frac 12 \left\{ L_\alpha^\dagger L_\alpha,\,\rho(t) \right\} \right]
\end{aligned}
\]

This equation is the most general linear equation for the density matrix \(\rho(t)\) that preserves its trace (total probability) and its Hermiticity. The sum over \(\alpha\) runs over at most \(N^2-1\) new terms. Aside from the Hamiltonian matrix \(H\), one must pick many new operators \(L_\alpha\) and their conjugates to define the laws of physics. I've divided the equation into two lines. The first line is the normal equation for the density matrix, one easily derived from the Schrödinger equation for \(\ket\psi\). The second line contains all the new terms that are zero according to contemporary physics but proposed to be nonzero by Weinberg (and others) and that should be tested by atomic clocks.

Note that \(\rho(t)\) is Hermitian, and so is therefore the left hand side. The first, normal term of the right hand side is a commutator with \(H\), which is Hermitian. For the commutator to be Hermitian as well, the coefficient has to be pure imaginary. On the contrary, the new Lindblad terms have a real coefficient. To see what these terms are doing or "should do", it's better to look at an Ansatz for a solution – which is Weinberg's equation (3):

\[ \rho_{mn}(t) = \rho_{mn}(0) \times \exp\left[ -i(E_m-E_n)t -\lambda_{mn}t \right]. \]

The Ansatz was written in an energy eigenstate basis; a minimal numerical check of this damping is sketched right below.
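Here is that check: a toy sketch of mine (nothing from Weinberg's paper) evolving a single qubit under the Lindblad equation with the pure-dephasing choice \(L = \sqrt{g}\,\sigma_z\); the Hamiltonian and all parameter values are invented for illustration.

```python
# Qubit Lindblad evolution with one pure-dephasing operator L = sqrt(g)*sigma_z.
# The diagonal entries of rho stay constant, while the off-diagonal entry picks
# up the phase exp(-i*(E0-E1)*t) and the damping exp(-2*g*t), i.e. the Ansatz
# above with lambda_{01} = 2g.
import numpy as np

E0, E1, g = 0.0, 1.0, 0.05
H = np.diag([E0, E1]).astype(complex)
L = np.sqrt(g) * np.diag([1.0, -1.0]).astype(complex)   # sqrt(g) * sigma_z

def rhs(rho):
    LdL = L.conj().T @ L
    return (-1j * (H @ rho - rho @ H)
            + L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # pure state |+><+|
dt, steps = 1e-3, 5000
for _ in range(steps):
    k1 = rhs(rho)
    rho = rho + dt * rhs(rho + 0.5 * dt * k1)            # midpoint rule

t = dt * steps
print(abs(rho[0, 1]), 0.5 * np.exp(-2 * g * t))  # both ~0.303
print(np.trace(rho).real)                        # trace preserved: 1.0
```

As expected, the trace and the populations stay fixed while the off-diagonal element rotates at the Bohr frequency \(E_0-E_1\) and decays with \(\lambda_{01}=2g\).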
The oscillating part of the exponent looks just like in Heisenberg's papers and the frequency is \(E_m-E_n\). The diagonal elements of \(\rho(t)\) don't change at all while the off-diagonal elements have a phase that changes with time with this frequency. What's new is the extra, exponentially decreasing factor of \(\exp(-\lambda_{mn}t)\). The off-diagonal elements don't have a constant absolute value, as they should have in unitary quantum mechanics, but they're exponentially damped with some rate \(\lambda_{mn}\) which are parameters bilinear in the matrix elements of the \(L_\alpha\) matrices in the Lindblad equation. These off-diagonal elements of the density matrix contain the information about the relative phases of the wave function. Decoherence makes them go to zero. Here they are going to zero exponentially so it's "some kind of decoherence". Except that this is proposed to be decoherence due to new terms in the fundamental laws of physics, not due to the interaction with a subsystem labeled the "environment". The Lindblad equation may appear as an effective equation for an open system that interacts with some environment that we can't trace so instead, we trace over it. But does it make any sense to consider it as a fundamental equation? I don't think so. First, the modification back to \(\lambda_{mn}=0\) is just prettier and better. I decided to place this objection at the top. The point is that the addition of all these \(\lambda_{mn}\neq 0\) damped factors is extremely artificial and it makes sense to cut this whole line of generalization by Occam's razor. If the Lindblad equation for some \(H\) and some \(L_\alpha\) has some nice properties, you may be pretty sure that the equation where you simply set \(L_\alpha=0\) is at least equally pretty. You can't lose any virtue by that. On the contrary, you lose virtues when you consider nonzero \(L_\alpha\). Second, lots of new operators have to be defined on top of the Hamiltonian. This is an addition to the first complaint but it may be viewed as an independent one. In normal quantum mechanics, we only determine one matrix on the Hilbert space, the Hamiltonian (or directly the S-matrix etc.). Here we must choose the Hamiltonian and about \(N^2-1\) additional operators on the Hilbert space \(L_\alpha\). Who are they? What deeper principle could possibly determine or at least constrain them? Third, the Lindblad equation doesn't allow any Heisenberg picture at all. The normal equation has \(L_\alpha=0\) and only contains the commutator with \(H\) in the evolution. Consequently, the evolution in time is a unitary transformation. You may pick a time-dependent basis of the Hilbert space in which the coordinates of \(\ket\psi\) or \(\rho\) will look constant and the operators such as \(x(t),p(t)\) will be time-dependent instead. This is the Heisenberg picture. With the Lindblad equation, you can't do that. There's no basis in the Hilbert space in which \(\rho(t)\) could be constant – after all, its eigenvalues are changing with time. Consequently, you won't be able to write this theory in any Heisenberg picture. This is a far deeper problem than people like Weinberg may realize. One reason is that the equations for the operators in the Heisenberg picture basically emulate the classical evolution equations for \(x(t),p(t)\) etc. The Heisenberg picture is an elegant way to see that quantum mechanics reduces to classical physics. 
Now, because you can't write the Weinberg-Lindblad theory in the Heisenberg picture, you won't be able to show the right classical limit. So in fact, by adding the new Weinberg-Lindblad terms, you have made the theory less compatible with the classical physics that Weinberg loves so much, not more so! For this reason, I also suspect that you wouldn't need any atomic clocks to falsify this theory. This theory almost certainly predicts some completely wrong, unobserved things for physical systems that are highly classical.

Fourth, the new terms are pretty much by definition proofs that "you are missing something".

I've mentioned that the Lindblad equation may be obtained as an effective equation if you eliminate some environment you can't track. I would argue that the converse is true, too. If you have the Lindblad equation, it shows that it's some effective equation: you have eliminated some degrees of freedom, and you should return to the blackboard and see what this deeper physics that you have ignored is and where it is hiding! Weinberg is acting as if he believes that the opposite is true: if he found the ugly new terms that normally emerge in effective theories only, he would be led to believe that he has found a more fundamental theory. This thinking clearly seems upside down. OK, what are you missing when you see these new effective terms?

Bonus: the Lindblad equation is a quantum counterpart of "classical physics with Brownian random forces".

In classical deterministic physics, if you know the point \(x_i(t),p_i(t)\) in the phase space at one moment, you may calculate it at later moments \(t\), too. To explain the Brownian motion, Einstein (and the Polish guy, Smoluchowski) considered a generalization of deterministic classical physics in which the particle is also affected by classical but random forces (from the surrounding atoms) which are described by some distributions. So even if the precise position and momentum were known at one moment, they would be unknown after some time of the Brownian motion. The peaked distribution on the phase space would get "dissolved". This is exactly how you should think about the effect of the new Lindblad terms. They're like some random forces described in terms of the density matrix. Is something getting dissolved as well? Is the exponential decrease of the off-diagonal elements equivalent to the classical spreading of the distribution on the phase space? You bet. It's not obvious in the basis that Weinberg chose – in which the diagonal entries of \(\rho\) don't change. But if you pick any different basis, even the diagonal entries will change – they will be evolving towards values that are closer to each other, and that's equivalent to the dissolution of the peaked distribution in the phase space. So there should be some molecules etc. that are causing this randomization of the pollen particle etc.!

Fifth, the new terms violate the conservation laws and/or locality.

In a 1983 paper that Weinberg is aware of, Banks, Susskind, and Peskin argued that the equation violates either locality or energy-momentum conservation. Weinberg mentions this paper as well as a 1995 paper by Unruh and Wald which claims to have found some counterexamples to Banks et al. I don't quite understand what those guys have done but I am pretty sure that the counterexamples would have to be extremely artificial. Look at the formula for \(\rho_{mn}(t)\) above. You see that if you want to preserve the energy conservation law, you really want the exponential decrease to affect the off-diagonal elements in an energy basis only. It means that the matrices \(L_\alpha\) in the extra terms must be able to determine or "calculate" what the energy eigenvectors are. If you just place some generic matrices there, the conservation laws will be violated.

Sixth, CPT theorem trouble.

Also, the solution to the Lindblad equation has entries that are exponentially decreasing in time. That's an intrinsic time-reversal asymmetry. Well, the legality of these solutions and the elimination of the opposite ones contradicts the existence of any CPT-symmetry. So the CPT theorem just couldn't hold in any generalized Weinberg-Lindblad theory of this kind. You could ask whether it should hold at all. Well, I think it should. The CPT transformation is just a continuation of the Lorentz group, the rotation of the \(t_E\)–\(z\) plane by 180 degrees, which just happens to make sense even in the Minkowski signature. So the CPT symmetry is closely linked to the Lorentz symmetry. None of this reasoning may be quite applied to the Weinberg-Lindblad theory because operations (in particular, the evolution operations) are not identified with unitary transformations in that theory etc. But I think it must lead to inconsistencies – either non-locality or a violation of the conservation laws. I am convinced that under reasonable assumptions, it leads to problems with both – conservation laws as well as locality and/or Lorentz symmetry. One "morally non-relativistic" aspect of the Lindblad laws is that the evolution in time isn't represented by just a unitary operator, while the translation, i.e. evolution in space, is still just a unitary transformation. So the temporal and spatial components of a four-vector (energy-momentum) seem to be qualitatively different. I would be surprised if the Lorentz invariance could be preserved by laws like that – at least if these laws are determined by some principles, instead of just by an artificial construction designed to prove me wrong.

Seventh, it just doesn't help you with any "mysteries of quantum mechanics".

But as I said, the most important problem isn't any particular technical flaw in the equations, even though I do believe that the troubling observations above are flaws of the theory. The main problem is that these analyses have nothing to say about the "broader problem" that Weinberg talks about, namely his problems with the foundations of quantum mechanics. Imagine that the new terms exist and are nonzero. So there exists an experiment, e.g. one with an atomic clock, that may show that some \(\lambda_{mn}\neq 0\). This experiment must be accurate enough – so far, similar experiments couldn't see any violation of normal quantum mechanics, i.e. they couldn't prove any \(\lambda_{mn}\neq 0\). The evidence that the new parameters are nonzero would be increasing with time – because these terms cause some intrinsic decoherence that deepens with time. OK, so even if you said that the experiment for times \(t\gt t_C\) that are long enough to see the new Weinberg-Lindblad effects proves that "things are less mysterious" because the relative phases have dropped almost to zero, it would still be true that for \(t\lt t_C\), the damping is small or negligible and the system basically follows the good old unitary rules of quantum mechanics.
So the "trouble with quantum mechanics" when applied to your experiment at \(t\lt t_C\) would be exactly the same as it was before you introduced the new terms! The effect of all the new terms would be small or negligible, just like in all experiments that have been confirming unitary quantum mechanics so far. The idea that the damping of some elements of the density matrix reduces the mystery of quantum mechanics is utterly irrational. At most, the Lindblad-Weinberg equation – if a natural version of it could exist, and I feel certain that it can't – could pick a preferred basis of the Hilbert space e.g. of your brain that would tell you which things you may feel and which you can't. Except that even in normal quantum mechanics, it's not needed. Even without decoherence, any density matrix may be diagonalized in some basis. So you may always view it as the basis that may be would-be classically perceived, if you adopt the viewpoint is that the non-vanishing off-diagonal elements clash with the perception. And like ordinary decoherence, this Lindblad-induced decoherence doesn't actually pick one of the outcomes. Decoherence makes a density matrix diagonal but it doesn't bring it to the form \({\rm diag}(0,0,1,0,0,0)\) or a similar one. To summarize, even if pieces of the analyses of atomic clocks are correct, the broader talk about all these things is completely wrong. None of these hypothesized new terms can "solve" any of the "problems" that Weinberg talks about. Weinberg has confined these wrong comments about the interpretation to the first paragraph of his paper. But Hossenfelder didn't confine them. Let me mention her sentences that aren't right: Our world is never un-quantum. Our world – and both small and large objects in it – obey the laws of quantum mechanics. If you think that any observation of large objects we know disagrees with quantum mechanics, and it's the only meaning of "un-quantum" I can imagine, then you misunderstand what quantum mechanics actually does and predicts. Decoherence is not "needed" for anything. It's just an effective re-organization of the dynamics in situations where a part of the physical system may be viewed as an environment, a re-organization that explains why the relative phases are being forgotten – and therefore one of the first steps needed to explain why a classical theory is sufficient to approximately describe everything (decoherence is needed for that because the main thing that classical physics refuses to remember are the relative quantum phases). But the forgetting still obeys the laws of quantum mechanics, it in no way contradicts it. If "someone" is doing something else, it's just not quantum mechanics. The dynamical laws of quantum mechanics are performing the evolution of the probability amplitudes – either in the state vector, density matrix, or operators. The rest is to connect these probability amplitudes with the observations. But this isn't done by Nature. Instead, it's done by the physicist. It's the physicist who must understand what a probability amplitude or a probability means and that's what allows him to apply the calculations of the unitary evolution on objects around him. But the application of the laws isn't something that "Nature does". Instead, it is what a "physicist does". And if she doesn't know how to do it right, or if she has some religious or psychological obstacles that prevent her from doing it at all, it's her f*cking defect, not Nature's. 
(Note that I have used "she" and "her" in order to be politically correct.)
Toward fully quantum modelling of ultrafast photodissociation imaging experiments. Treating tunnelling in the ab initio multiple cloning approach

Dmitry V. Makhov (a), Todd J. Martinez (b) and Dmitrii V. Shalashilin (a)

(a) School of Chemistry, University of Leeds, Leeds, LS2 9JT, UK. E-mail: D.Makhov@leeds.ac.uk; D.Shalashilin@leeds.ac.uk

Received 11th April 2016, Accepted 1st May 2016, First published on 2nd May 2016

We present an account of our recent effort to improve simulation of the photodissociation of small heteroaromatic molecules using the Ab Initio Multiple Cloning (AIMC) algorithm. The ultimate goal is to create a quantitative and converged technique for fully quantum simulations which treats both electrons and nuclei on a fully quantum level. We calculate and analyse the total kinetic energy release (TKER) spectra and Velocity Map Images (VMI), and compare the results directly with experimental measurements. In this work, we perform new extensive calculations using an improved AIMC algorithm that now takes into account the tunnelling of hydrogen atoms, which can play an extremely important role in photodissociation dynamics.

I. Introduction

Quantum non-adiabatic molecular dynamics is a powerful tool for understanding the details of the mechanisms of important photo-induced processes, such as the photodissociation of pyrrole and other heteroaromatic molecules. In these processes, quantum effects such as electronically non-adiabatic transitions and tunnelling are important, and an approach that goes beyond surface hopping, such as multiconfigurational time dependent Hartree (MCTDH),1 for example, is often required. MCTDH can be very accurate, and was recently used to simulate the dissociation of pyrrole.2 However, it needs a parameterized potential energy surface as a starting point, which significantly restricts its practicality. A good alternative is represented by a variety of methods3–11 based on trajectory-guided Gaussian basis functions (TBF). Despite the fact that such approaches use classical trajectories, they are still fully quantum mechanical, because the trajectories are employed only for propagating the basis, while the evolution of the amplitudes and, thus, of the total nuclear wave-function is determined by the time-dependent Schrödinger equation. An important advantage of trajectory-guided quantum dynamics methods is that they are fully compatible with direct or ab initio molecular dynamics, where excited state energies, gradients, and non-adiabatic coupling terms are evaluated on the fly simultaneously with the nuclear propagation. The disadvantage is that trajectory based direct dynamics is very expensive due to the high cost of electronic structure calculations and typically can afford only a limited number of trajectories, which can be an obstacle to full convergence. Recently, we introduced the ab initio multiple cloning (AIMC)10 method, where TBFs move along Ehrenfest trajectories, as in the multiconfigurational Ehrenfest (MCE)8,9 approach, with bifurcation of the wave-function taken into account via basis function cloning. While leading to growth in the number of trajectories, the use of cloning helps to adapt the basis set to the quantum dynamics significantly better than in the classical MCE approach.
AIMC also uses a number of tricks to efficiently sample the trajectory basis and to use the information obtained on the fly: (1) similar to previously developed trajectory based methods, AIMC relies on importance sampling of initial conditions; (2) AIMC uses the so-called time displaced or train basis sets,10,12,13 which increase the basis set size almost without any extra cost by reusing the ab initio data which has already been obtained; (3) the method calculates quantum amplitudes in a "post-processing" step after the trajectories of the basis set functions have been found. As a result, the trajectories can be calculated one by one in parallel and good statistics can be accumulated. In this work, we present a new implementation of the AIMC approach that is improved to take into account the tunnelling of hydrogen atoms by identifying possible tunnelling points and placing additional TBFs on the other side of the barrier. We use this new implementation to simulate the dynamics of the photodissociation of pyrrole, a process where tunnelling can play a very important role. We calculate the TKER spectrum and velocity map image (VMI), and directly compare the results of our calculations with experimental observations.14 The paper is organized as follows. In Section II we describe the proposed implementation of the AIMC approach. Section III contains the computational details of our simulations. In Section IV, we present and discuss the results. Conclusions are given in Section V.

II. Theory

II.1 Working equations

The AIMC method10 is based on the same ansatz as the multiconfigurational Ehrenfest (MCE) approach,8,9 in which the total wave-function |Ψ(t)⟩ is represented in a trajectory-guided basis |ψn(t)⟩:

\[ |\Psi(t)\rangle = \sum_n c_n(t)\,|\psi_n(t)\rangle. \qquad (1) \]

The basis functions |ψn(t)⟩ are composed of nuclear and electronic parts:

\[ |\psi_n(t)\rangle = |\chi_n(t)\rangle \sum_I a_I^{(n)}(t)\,|\phi_I\rangle. \qquad (2) \]

The nuclear part |χn(t)⟩ is a Gaussian coherent state moving along an Ehrenfest trajectory:

\[ \langle R|\chi_n(t)\rangle \propto \exp\!\left( -\alpha\left(R-\bar R_n(t)\right)^2 + \frac{i}{\hbar}\,\bar P_n(t)\cdot\left(R-\bar R_n(t)\right) + \frac{i}{\hbar}\,\gamma_n(t) \right), \qquad (3) \]

where \(\bar R_n(t)\) and \(\bar P_n(t)\) are the phase space coordinate and momentum vectors of the basis function centre, \(\gamma_n(t)\) is a phase, and the parameter α determines the width of the Gaussians. The electronic part of the basis functions |ψn(t)⟩ is represented as a superposition of several adiabatic eigenstates |ϕI⟩ with quantum amplitudes \(a_I^{(n)}\). The time dependence of the Ehrenfest amplitudes \(a_I^{(n)}\) is given by the equations

\[ i\hbar\,\dot a_I^{(n)} = \sum_J H^{el(n)}_{IJ}\, a_J^{(n)}, \qquad (4) \]

where the matrix elements of the electronic Hamiltonian \(H^{el(n)}_{IJ}\) are expressed as:

\[ H^{el(n)}_{IJ} = V_I(\bar R_n)\,\delta_{IJ} - i\hbar\, \dot{\bar R}_n \cdot d_{IJ}(\bar R_n); \qquad (5) \]

here \(V_I(\bar R_n)\) is the Ith potential energy surface and \(d_{IJ}(\bar R_n) = \langle\phi_I|\nabla_R|\phi_J\rangle\) is the non-adiabatic coupling matrix element (NACME). The motion of the centres of the Gaussians follows the standard Newton's equations:

\[ \dot{\bar R}_n = \frac{\bar P_n}{M}, \qquad \dot{\bar P}_n = \bar F_n, \qquad (6) \]

where the force \(\bar F_n\) is an Ehrenfest force that includes both the usual gradient term and the additional term related to the change of quantum amplitudes as a result of non-adiabatic coupling:

\[ \bar F_n = -\sum_I \left|a_I^{(n)}\right|^2 \nabla V_I(\bar R_n) + \sum_{I\neq J} a_I^{(n)*} a_J^{(n)} \left(V_I(\bar R_n)-V_J(\bar R_n)\right) d_{IJ}(\bar R_n). \qquad (7) \]

Finally, the phase γn evolves as:

\[ \dot\gamma_n = \frac{\bar P_n \cdot \dot{\bar R}_n}{2}. \qquad (8) \]

Eqns (3)–(8) form a complete set, determining the basis and its time evolution. The evolution of the total wave-function |Ψ(t)⟩ (eqn (1)) is defined by both the evolution of the basis functions |ψn(t)⟩ and the evolution of the relevant amplitudes cn(t).
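Before turning to the amplitudes cn(t), here is a toy Python sketch of the basis propagation defined by eqns (4)–(7), for a single trajectory with one nuclear coordinate and two electronic states. It is my illustration rather than the authors' code: the model potentials, the coupling function and all numerical parameters are invented, ħ = 1 is assumed, and the small non-adiabatic term of the force (7) is dropped for brevity.

```python
# Toy Ehrenfest propagation: amplitudes a_I follow eqn (4) with the electronic
# Hamiltonian of eqn (5); the Gaussian centre (R, P) follows eqn (6) with the
# gradient part of the Ehrenfest force (7). Atomic units, hbar = 1.
import numpy as np

M = 1836.0                                    # proton-like mass

def V(R):   # two invented model adiabatic surfaces
    return np.array([0.005 * R**2, 0.05 + 0.004 * (R - 1.0)**2])

def dV(R):  # their gradients
    return np.array([0.01 * R, 0.008 * (R - 1.0)])

def d12(R): # an invented, localized non-adiabatic coupling
    return 2.0 * np.exp(-R**2)

R, P = -2.0, 10.0
a = np.array([1.0, 0.0], dtype=complex)       # start on state 1
dt = 0.1
for _ in range(10000):
    v = P / M                                 # Rdot entering eqn (5)
    c = d12(R)
    Hel = np.array([[V(R)[0], -1j * v * c],   # eqn (5); Hermitian since d21 = -d12
                    [1j * v * c, V(R)[1]]])
    a = a - 1j * dt * (Hel @ a)               # eqn (4), simple Euler step
    a /= np.linalg.norm(a)                    # keep the sketch numerically stable
    F = -(np.abs(a)**2 * dV(R)).sum()         # gradient part of eqn (7)
    R, P = R + dt * P / M, P + dt * F         # eqn (6)

print("final populations:", np.abs(a)**2, " R =", round(R, 2))
```

In the real method many such basis functions are propagated and the coupled equation for the amplitudes cn(t) below is then solved across them; the point of the sketch is only the two-level structure of eqns (4)–(6).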
The time dependence of the amplitudes cn(t) is given by the equation

\[ i\hbar \sum_n \langle\psi_m|\psi_n\rangle\,\dot c_n = \sum_n \left( H_{mn} - i\hbar \left\langle \psi_m \left| \frac{\partial}{\partial t} \right| \psi_n \right\rangle \right) c_n, \qquad (9) \]

which can be easily obtained by substituting (1) into the time dependent Schrödinger equation. The Hamiltonian matrix elements Hmn can be written as:

\[ H_{mn} = \langle\psi_m|\hat H|\psi_n\rangle = \sum_{I,J} a_I^{(m)*} a_J^{(n)}\, \langle \chi_m\phi_I | \hat H | \phi_J\chi_n \rangle. \qquad (10) \]

Assuming that the second derivative of the electronic wave-function |ϕI⟩ with respect to R can be disregarded, we get:

\[ \langle \chi_m\phi_I | \hat H | \phi_J\chi_n \rangle \approx \delta_{IJ} \left( \langle\chi_m|\hat T_N|\chi_n\rangle + \langle\chi_m|V_I(R)|\chi_n\rangle \right) - \frac{\hbar^2}{M}\, \langle\chi_m|\, d_{IJ}(R)\cdot\nabla_R\, |\chi_n\rangle. \qquad (11) \]

The matrix elements of the kinetic energy operator \(\langle\chi_m|\hat T_N|\chi_n\rangle\) can be calculated analytically. For potential energy and non-adiabatic coupling matrix elements, we use a simple approximation:10

\[ \langle\chi_m|V_I(R)|\chi_n\rangle \approx \frac{V_I(\bar R_m) + V_I(\bar R_n)}{2}\, \langle\chi_m|\chi_n\rangle, \qquad (12) \]

\[ \langle\chi_m|\, d_{IJ}(R)\cdot\nabla_R\, |\chi_n\rangle \approx \frac{d_{IJ}(\bar R_m) + d_{IJ}(\bar R_n)}{2} \cdot \langle\chi_m|\nabla_R|\chi_n\rangle. \qquad (13) \]

The approximation (12) represents a linear interpolation of the potential energy between the two points and can be improved further at the cost of calculating higher derivatives of the potential energy along the trajectories. It has been tested previously,10 and no visible change of the results was found when this approximation was applied compared to the saddle point approximation which expands around a distinct centroid for each pair of TBFs.4 The term \(\langle\psi_m|\partial/\partial t|\psi_n\rangle\) in eqn (9), which originates from the time dependence of the basis, can be expressed through the motion of the centres of the Gaussians and the time dependence of the Ehrenfest amplitudes (eqns (14) and (15)). Notice that in the AIMC approach, all off-diagonal matrix elements entering eqn (9) are calculated from the electronic structure data at the TBF centres, which is needed for the propagation of the basis. Thus, quantum coupling between the configurations comes at almost no extra cost. Moreover, eqn (9) can be solved after the trajectories have been calculated, provided the appropriate electronic structure information has been saved. The detailed derivation of the MCE equations together with the expressions for the relevant matrix elements can be found in our previous works.10,11

II.2 Basis set sampling and cloning

The Ehrenfest basis set is guided by an average potential, which can be advantageous when quantum transitions are frequent. However, it becomes unphysical in regions of low non-adiabatic coupling when two or more electronic states have significant amplitudes: in this case, the difference between the shapes of the potential energy surfaces for different electronic states should lead to branching of the wavepacket. In order to reproduce the bifurcation of the wave-function after leaving the non-adiabatic coupling region, AIMC methods adopt the cloning procedure,10 where the appropriate basis function is replaced by two basis functions, each guided (mostly) by a single potential energy surface. After the cloning event, an Ehrenfest configuration \(|\psi_n\rangle = |\chi_n\rangle \sum_I a_I^{(n)} |\phi_I\rangle\) yields two configurations:

\[ |\psi_n'\rangle = |\chi_n\rangle\, \frac{a_I^{(n)}}{\left|a_I^{(n)}\right|}\, |\phi_I\rangle, \qquad (16) \]

\[ |\psi_n''\rangle = |\chi_n\rangle\, \frac{1}{\sqrt{1-\left|a_I^{(n)}\right|^2}} \sum_{J\neq I} a_J^{(n)}\, |\phi_J\rangle. \qquad (17) \]

The first clone configuration has non-zero amplitudes for only one electronic state, and the second clone contains the contributions of all other electronic states.
The amplitudes of the two new configurations become:

\[ c_n' = c_n\, \left|a_I^{(n)}\right|, \qquad c_n'' = c_n \sqrt{1-\left|a_I^{(n)}\right|^2}, \qquad (18) \]

so that the contribution of the two clones |ψn'⟩ and |ψn''⟩ to the whole wave-function (1) remains the same as the contribution of the original function:

\[ c_n'\,|\psi_n'\rangle + c_n''\,|\psi_n''\rangle = c_n\,|\psi_n\rangle. \qquad (19) \]

We apply the cloning procedure shortly after a trajectory passes near a conical intersection, when the non-adiabatic coupling is lower than a threshold and, at the same time, the so-called breaking force (eqn (20)), which is the force pulling the Ith state away from the remaining states, is sufficiently strong. The cloning procedure is very much in the spirit of the spawning used in the Ab Initio Multiple Spawning (AIMS) approach. Cloning does not require any back-propagation of spawned/cloned basis functions, unlike many4 (but not all15,16) implementations of spawning.

As has been described in our previous work,7 we rely on importance sampling when generating the initial conditions. Using the linearity of the Schrödinger equation, we first represent the initial wave-function as a superposition of Gaussians and then propagate each of them independently, "bit-by-bit".7 We use a time-displaced basis set (coherent state trains), where several Gaussian basis functions are moving along the same trajectory but with a time-shift Δt, allowing us to reuse the same electronic structure data for each of the basis functions in the "train". Fig. 1 shows a time displaced basis guided by a trajectory and its bifurcation via cloning. The best possible result with AIMC can be achieved when a swarm of trains is used to propagate each "bit" of the initial wave-function.

Fig. 1 A sketch of the AIMC propagation scheme. The wave-function is represented as a superposition of Gaussian coherent states, which form a train moving along the trajectory. After passing the intersection, the train branches in the process of cloning. The figure shows a single train with cloning. In the most detailed AIMC calculation, a basis of several cloning trains interacting with each other is used.

II.3 Tunnelling

The tunnelling of hydrogen atoms can play an important role in photodissociation processes. As mentioned above, MCE, AIMC and AIMS are fully quantum methods because classical trajectories are used only to propagate the basis, while the amplitudes cn(t) are found by solving the time dependent Schrödinger equation. When Gaussian basis functions are present on the two sides of a potential barrier, the interaction between them can provide quantum tunnelling through the barrier. However, in the case of direct ab initio dynamics, the basis is usually very small, far from being complete. As a result, no basis functions would normally be present on the other side, and they must be placed there by hand in order to take tunnelling into account. In this paper we adopt the ideas17,18 previously used in the AIMS method to describe tunnelling for use with the AIMC technique. Fig. 2 illustrates the algorithm that we apply. First, we calculate the usual AIMC trajectories and find turning points, where the distance between the hydrogen atom and the radical reaches a local maximum. Then, for each of these turning points, we calculate the shape of the potential barrier: we manually increase the length of the N–H bond keeping all other degrees of freedom frozen, calculate the potential energies, and find the point on the other side of the barrier with the same energy as in the turning point.
If this point lies further than a set threshold from the turning point, we assume that tunnelling is not possible here, as the potential barrier is too wide. Otherwise, we use it as a starting point for an additional AIMC trajectory. The new trajectory is calculated both forward and backward in time, and the initial momenta are taken as the same as in the turning point, ensuring that new trajectories have the same total classical energies as their parent trajectories. This is exactly the procedure used in the multiple spawning approach; thus, our method combines cloning for non-adiabatic events and spawning for tunnelling events. The forward propagation of new trajectories often involves branching as a result of cloning; backward propagation is performed without cloning and for a sufficiently short time, until the new and parent trajectories separate in phase space.

Fig. 2 Illustration of the algorithm used to treat tunnelling in our approach. (A) Identify a turning point; (B) find a point with the same potential energy on the opposite side of the barrier; (C) run an additional trajectory through this point; (D) solve the time-dependent Schrödinger equation in the basis of coherent state trains10 moving along the trajectories on both sides of the barrier.

When all the trajectories are calculated, we solve eqn (9) for the quantum amplitudes cn(t) in a time-displaced basis set (coherent state trains). This is similar to our previous approach10,11 but with the difference that now the basis is better adapted to treat tunnelling. The train basis on the new trajectory is placed in such a way that it reaches the tunnelling point at the same time as the train basis on the parent trajectory. Because the new trajectory differs from its parent by only one coordinate at a tunnelling point, namely by the length of the N–H bond, there is a significant overlap between Gaussian basis functions belonging to these two trajectories. This interaction is retained for a significant time while the coherent state trains are passing the tunnelling point, ensuring the transfer of quantum amplitude across the barrier.

III. Computational details

Using our AIMC approach, we have simulated the dynamics of pyrrole following excitation to the first excited state. Trajectories were calculated using the AIMS-MOLPRO19 computational package, which has been modified to incorporate Ehrenfest dynamics. Electronic structure calculations were performed with the complete active space self-consistent field (CASSCF) method using the cc-pVDZ basis set. As in our previous works,9,11 we used an active space of eight electrons in seven orbitals (three ring π orbitals and two corresponding π* orbitals, one σ orbital and a corresponding σ* orbital). State averaging was performed over four singlet states using equal weights, i.e. the electronic wave-function is SA4-CAS(8,7)/cc-pVDZ. The width of the Gaussian functions α was taken as 4.7 bohr−2 for hydrogen, 22.7 bohr−2 for carbon, and 19.0 bohr−2 for nitrogen atoms, as suggested in ref. 20. Three electronic states were taken into consideration during the dynamics – the ground state and the two lowest singlet excited states. The initial positions and momenta were randomly sampled from the ground state vibrational Wigner distribution in the harmonic approximation, using vibrational frequencies and normal modes calculated at the same CASSCF level of theory.
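As an aside on the sampling step just described, the snippet below shows what ground-state harmonic Wigner sampling looks like in mass-weighted normal-mode coordinates. This is my illustration, not the sampler in AIMS-MOLPRO; the mode frequencies are invented and ħ = 1 is assumed.

```python
# Harmonic ground-state Wigner sampling: for a mode of frequency w the Wigner
# function is a Gaussian with var(Q) = 1/(2w) and var(P) = w/2 (hbar = 1,
# mass-weighted coordinates).
import numpy as np

rng = np.random.default_rng(0)

def wigner_sample(freqs, n_samples):
    freqs = np.asarray(freqs, dtype=float)
    Q = rng.normal(0.0, np.sqrt(1.0 / (2.0 * freqs)), (n_samples, freqs.size))
    P = rng.normal(0.0, np.sqrt(freqs / 2.0), (n_samples, freqs.size))
    return Q, P   # transform back to Cartesian with the normal-mode vectors

Q, P = wigner_sample([0.005, 0.012, 0.020], n_samples=900)  # invented frequencies
print(Q.var(axis=0), P.var(axis=0))  # ~1/(2w) and ~w/2 per mode
```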
We approximate the photoexcitation by simply lifting the ground state wavepacket to the excited state, as would be appropriate for an instantaneous excitation pulse within the Condon approximation. Of course, the fine details of the initial photoexcited wavepacket are lost in this approximation, however, we do not expect these details to have much effect on the observables shown in this paper. We have run 900 initial Ehrenfest trajectories, each propagated with a time-step of ∼0.06 fs (2.5 a.u.) for 200 fs or until the dissociation occurred, defined as an N–H distance exceeding 4.0 Å. For a small number of trajectories, simulations exhibiting N–H dissociation were carried out to the full 200 fs in order to investigate the dynamics of the radical. Cloning was applied to TBFs when the breaking acceleration of eqn (20) exceeded a threshold of 5 × 10−6 a.u. and the norm of the non-adiabatic coupling vector was simultaneously less than 2 × 10−3 a.u. For all initial trajectories, as well as for their branches resulting from cloning, we identified turning points for the N–H bond length and calculated the width of the potential barrier. Additional trajectories on the other side of the barrier were placed if the width of the barrier did not exceed 0.5 bohr, which corresponds to an overlap of ∼0.3 between Gaussian basis functions. The new trajectories were propagated backward for 20 fs to accommodate the train basis set, and forward until dissociation or until the trajectory time exceeds 200 fs. For each initial trajectory with all its branches and tunnelling sub-trajectories, we solved eqn (9) using a train basis set of N = 21 Gaussians per branch, separated by 10 time steps, which corresponds to an average overlap of ∼0.6 between the nearest Gaussians in the train. The total size of the basis is constantly changing because of the inclusion of new branches. The final amplitudes cn give statistical weights for each of the branches, which are used in the analysis that follows. IV. Results As a result of cloning, 900 initial configurations give rise to 1131 trajectory branches. This corresponds to an average of ∼0.25 cloning events per initial trajectory. For these branches, we have found 7702 local maxima of N–H bond length, of which 2376 have been identified as possible tunnelling points. For all these points, we run sub-trajectories, which finally gives 3203 additional branches, 4334 branches in total. The majority of these branches undergo N–H dissociation within our computational time of 200 fs: the total statistical weight of dissociative trajectories is 92%, of which 53% is the contribution of tunnelling sub-trajectories. The kinetic energy distribution of the ejected hydrogen atom is presented in Fig. 3 together with the experimental TKER spectrum.14 Both distributions clearly exhibit two contributions: a large peak at higher energies, and a small contribution at lower energies. It is important to note that adopting the basis set to tunnelling shifts the high-energy peak of TKER spectrum toward the lower energies by about ∼1000 cm−1 and makes the low-energy peak slightly more pronounced. While the calculated energies are still on average about 1.5 times higher than experimental values, this difference can be ascribed to the lack of dynamic electron correlation in the CASSCF potential energy surfaces. 
We previously showed11 that a more accurate MS-CASPT2 PES would lead to a shift in the kinetic energy peak of ∼1800–1900 cm−1 towards lower energies, significantly improving the agreement with experiment.

Fig. 3 Total kinetic energy release (TKER) spectrum of hydrogen atoms after dissociation calculated with (solid) and without (dash) taking tunnelling into account. Both spectra are averaged over the same ensemble of initial configurations. The curves are smoothed by replacing delta-functions with Gaussian functions (σ = 200 cm−1). The inset shows the experimentally measured spectrum.14

Analysis of the electronic state amplitudes in the Ehrenfest configurations (eqn (2)) shows that the bifurcation of the wave-function while passing through a conical intersection plays an important role in the formation of a two-peak spectrum: the high kinetic energy product is predominantly in the ground state, while the low energy peak is formed by mostly low-weight branches with a substantial contribution from excited electronic states. Fig. 4 presents an example of such a bifurcating trajectory. At about 55 fs after photoexcitation, this trajectory reaches an intersection for the first time. After passing the intersection, the ground and first excited states of the original TBF are approximately equally populated, so the cloning procedure is applied, creating two TBFs, one in the ground state and one in the excited state. At this point, the potential energy surfaces for the ground and excited states have opposite gradients. This leads to the acceleration of the hydrogen atom for the TBF associated with the ground state and, at the same time, slows it down for the excited state TBF. As a result, although both branches are leading to dissociation, the kinetic energies of the ejected atoms are significantly different: the ground state branch contributes to the high energy peak of the distribution in Fig. 3, while the excited state branch contributes to the low energy peak. For the ground state branch, the remaining vibrational energy of the radical is low, so it remains in the ground state for the rest of the run and does not reach the intersection again. For the excited state branch, the energy taken away by the hydrogen atom is lower, leaving the pyrrolyl radical with sufficient energy to pass through numerous intersections with population transfer between the ground and both excited states. Naturally, quenching to the ground state will happen eventually for this branch, but the time scale of this process is much longer than that for the dissociation, while the TKER spectrum is only affected by the radical dynamics until the H atom is lost.

Fig. 4 An example of trajectory bifurcation at a conical intersection. Electronic state populations (a), the kinetic energy of the H atom (b) and the N–H distance (c) as a function of time. Fast and slow branches are referred to as (1) and (2) respectively. The black vertical line indicates the moment when cloning was applied.
In order to calculate the velocity map image with respect to the laser pulse polarization, we must average the velocity distribution of hydrogen atoms relative to the axes of the molecule, given by the calculations, over all possible orientations of the molecule (eqn (21)), where α, β and γ are Euler angles, θ is the angle between the atom velocity vector v and the transition dipole of the molecule, ξ(α,β,γ) is the angle between the transition dipole and light polarization vectors, and ϕ(θ,α,β,γ) is the angle between the light polarization vector and the atom velocity. Here we take into account that the probability of excitation is proportional to cos2(ξ). Integrating over the Euler angles and replacing, as usual, the δ-function for |v| with a narrow Gaussian function, we obtain eqn (22).

Fig. 5 shows the simulated velocity map with respect to the laser pulse polarization, assuming that the transition dipole is normal to the molecular plane. The simulations reproduce well the main feature of the velocity map image, which is the anisotropy of the intense high energy part. Our results are also consistent with experiment14 in the low energy region, showing an isotropic distribution, although admittedly the statistics of both experiment and simulation are poorer in the region of low energy.

Fig. 5 Simulated velocity map image with respect to the laser pulse polarization, assuming that the transition dipole moment is normal to the molecular plane. The experimental VMI14 is shown in the inset.

V. Conclusion

We simulated the photodissociation dynamics of pyrrole excited to the lowest singlet excited state (1A1 → 1A2) using a new implementation of the AIMC approach, which is now modified to take into account the tunnelling of hydrogen atoms more accurately. AIMC is a fully quantum technique, but its computational cost in our implementation is comparable with classical "on the fly" molecular dynamics, which allows the accumulation of sufficient statistics to clarify the details of photo-induced processes in pyrrole. The treatment of tunnelling in our implementation provides a promising starting point for the further development of fully quantum methods for non-adiabatic dynamics and tunnelling, with the ultimate goal of reaching well converged quantitative results. The current version of AIMC is already accurate enough to reproduce features of the experimentally observed TKER spectrum and velocity map images.

DM and DS acknowledge the support from EPSRC through grants EP/J001481/1 and EP/N007549/1.

1. G. A. Worth, H.-D. Meyer, H. Köppel, L. S. Cederbaum and I. Burghardt, Using the MCTDH wavepacket propagation method to describe multimode non-diabatic dynamics, Int. Rev. Phys. Chem., 2008, 27, 569–606. 2. G. Wu, S. P. Neville, O. Schalk, T. Sekikawa, M. N. R. Ashfold, G. A. Worth and A. Stolow, Excited state non-adiabatic dynamics of pyrrole: a time-resolved photoelectron spectroscopy and quantum dynamics study, J. Chem. Phys., 2015, 142, 074302. 3. T. J. Martinez, M. Ben-Nun and G. Ashkenazi, Classical/quantal method for multistate dynamics: a computational study, J. Chem. Phys., 1996, 104, 2847. 4. M. Ben-Nun and T. J. Martínez, Ab Initio Quantum Molecular Dynamics, Adv. Chem. Phys., 2002, 121, 439. 5. D. V. Shalashilin, Quantum mechanics with the basis set guided by Ehrenfest trajectories: theory and application to spin-boson model, J. Chem.
Phys., 2009, 130(24), 244101. 6. S. L. Fiedler and J. Eloranta, Nonadiabatic dynamics by mean-field and surface hopping approaches: energy conservation considerations, Mol. Phys., 2010, 108(11), 1471–1479. 7. D. V. Shalashilin, Nonadiabatic dynamics with the help of multiconfigurational Ehrenfest method: improved theory and fully quantum 24D simulation of pyrazine, J. Chem. Phys., 2010, 132(24), 244111. 8. D. V. Shalashilin, Multiconfigurational Ehrenfest approach to quantum coherent dynamics in large molecular systems, Faraday Discuss., 2011, 153, 105. 9. K. Saita and D. V. Shalashilin, On-the-fly ab initio molecular dynamics with multiconfigurational Ehrenfest method, J. Chem. Phys., 2012, 137, 8. 10. D. V. Makhov, W. J. Glover, T. J. Martinez and D. V. Shalashilin, Ab initio multiple cloning algorithm for quantum nonadiabatic molecular dynamics, J. Chem. Phys., 2014, 141(5), 054110. 11. D. V. Makhov, K. Saita, T. J. Martinez and D. V. Shalashilin, Ab initio multiple cloning simulations of pyrrole photodissociation: TKER spectra and velocity map imaging, Phys. Chem. Chem. Phys., 2015, 17, 3316. 12. D. V. Shalashilin and M. S. Child, Basis set sampling in the method of coupled coherent states: coherent state swarms, trains and pancakes, J. Chem. Phys., 2008, 128, 054102. 13. M. Ben-Nun and T. J. Martinez, Exploiting Temporal Non-Locality to Remove Scaling Bottlenecks in Nonadiabatic Quantum Dynamics, J. Chem. Phys., 1999, 110, 4134–4140. 14. G. M. Roberts, C. A. Williams, H. Yu, A. S. Chatterley, J. D. Young, S. Ullrich and V. G. Stavros, Probing ultrafast dynamics in photoexcited pyrrole: timescales for ¹πσ*-mediated H-atom elimination, Faraday Discuss., 2013, 163, 95–116. 15. M. Ben-Nun and T. J. Martínez, A Continuous Spawning Method for Nonadiabatic Dynamics and Validation for the Zero-Temperature Spin-Boson Problem, Isr. J. Chem., 2007, 47, 75–88. 16. S. Yang, J. D. Coe, B. Kaduk and T. J. Martínez, An "Optimal" Spawning Algorithm for Adaptive Basis Set Expansion in Nonadiabatic Dynamics, J. Chem. Phys., 2009, 130, 134113. 17. M. Ben-Nun and T. J. Martinez, Semiclassical tunneling rates from ab initio molecular dynamics, J. Phys. Chem. A, 1999, 103(31), 6055–6059. 18. M. Ben-Nun and T. J. Martínez, A Multiple Spawning Approach to Tunneling Dynamics, J. Chem. Phys., 2000, 112, 6113–6121. 19. B. G. Levine, J. D. Coe, A. M. Virshup and T. J. Martinez, Implementation of ab initio multiple spawning in the Molpro quantum chemistry package, Chem. Phys., 2008, 347(1), 3–16. 20. A. L. Thompson, C. Punwong and T. J. Martinez, Optimization of width parameters for quantum dynamics with frozen Gaussian basis sets, Chem. Phys., 2010, 370, 70–77.
Monday, April 10, 2017

On Reports Of Putative Relict Pterosaurs: A Reappraisal

While I have written about reports of alleged surviving relict pterosaurs on this blog before, I took a mostly critical and sceptical perspective, pointing out that, as the reports do not seem to describe what is now known about pterosaur anatomy from the fossil record, I deemed them unlikely to actually be describing living pterosaurs. At the time, I wrote that misidentifications of known animals, such as bats and birds, are likely the main culprits, with perhaps some reports possibly representing encounters with unknown species of birds and bats. However, a recent reappraisal of the situation, spurred by my becoming aware of more reports that seem to describe morphological features known in pterosaurs from the fossil record, and features, moreover, obscure enough not to be known to the average layperson, has inspired me to revisit this topic and reconsider my thoughts on these purported mystery animals. In an article on the weblog Mysterious Universe (Swancer, 2016; see the reference below), the following report is detailed: "In 2012, another witness claimed to have seen what appeared to be a baby pterodactyl under a bridge in Tucson, Arizona. The winged creature was said to have a wingspan of around 8 feet, and to be covered in whitish fur, with a head sporting a "top knot" that appeared to be molting. The strange creature was apparently quite aggressive towards the intruders, spreading its wings, hissing, and assuming an attack stance." What immediately stood out to me from this report is that the putative juvenile pterosaur is described as being covered with "whitish fur," much like the pycnofibres (fur-like integumentary structures) that fossils of pterosaurs from the Mesozoic Era show them to have possessed. This is in stark contrast to the other reports, which seem to describe scaly- or leathery-skinned winged monsters, with nary a semblance to the actual prehistoric ornithodiran, or avemetatarsalian, archosaurian winged reptiles of the Mesozoic Era. Not only is this feature of the report anatomically accurate, but, notably, pycnofibres are also a relatively obscure anatomical feature that the average layperson, exposed to naked-skinned Flintstones-esque inaccurate portrayals of pterosaurs, would not be overtly familiar with. This renders the above report more compelling, in my eyes, than most. Yet another realization I have had is that, as pterosaurs' bones tended to be relatively hollow and lightweight, like those of birds (an adaptation to flight), it would seem less implausible for them to have a 66-million-year-long ghost lineage between the end of the Cretaceous Period and the present day. The criticism has been made of the idea that surviving prehistoric species might explain cryptozoological encounters that many of the proposed candidates possessed large, dense bones resistant to erosion, whose fossils would not easily leave a ghost lineage. (However, it should be noted that even this is not a tightly binding rule, as there exists a group of ichthyosaurs -- marine reptiles with large, dense, erosion-resistant bones -- for which a 66-million-year-long ghost lineage exists, the same length of time as has elapsed between the end of the Cretaceous Period and now.) As pterosaurs' bones were relatively light, hollow, and fragile, the idea of them leaving a considerable ghost lineage seems to be, on the face of it, by no means absurd.
So I decided it was time for a reappraisal of my views on the matter. I now view the idea of surviving relict populations of the clade Pterosauria as by no means confirmed or likely, but also not overly implausible or far-fetched: a possibility that, while it ought still to be parsed sceptically, should be considered, rather than rejected outright, when analyzing cryptozoological reports said to describe creatures similar to them.

Swancer, Brent. 22 July 2016. "Mysterious Living Dinosaurs of the Wild West". Mysterious Universe.

Sunday, March 19, 2017

A New Combined Many Worlds/Multiverse–Quantum Entanglement–Wormhole Model

Once again, I am posting an article that is unrelated to palaeontology, zoology, or biology, but, instead, covers topics in physics and cosmology that, likewise, fascinate me. In this article, I present my own hypothesis regarding several aspects of quantum physics and cosmology. Here, I propose my own hypothetical model which attempts to combine the Many Worlds Interpretation of quantum mechanics, the numerous-Hubble-volumes multiverse model, quantum entanglement, ER=EPR/wormholes, and retrocausality/time travel to the past into one unified, elegant model. You might have heard of the concept of a multiverse. If not, I will now proceed to explicate it. A multiverse is a hypothesized plurality of universes that exist. In other words, just as there are planets besides Earth, solar systems besides the one that contains Earth, and galaxies besides the Milky Way, there could, likewise, be other universes besides the one we are inhabiting. Quantum entanglement refers to a process wherein two or more particles are described using the same wave function. This means that anything that happens to one particle will instantly be responded to by the other, regardless of how far apart the particles happen to be. Quantum entanglement was criticized by Albert Einstein, who referred to it as "spooky action at a distance"; he thought it impossible, as it seemed to imply the sending of information faster than the speed of light in a vacuum, in contradiction to the postulate of relativity that nothing can travel faster than the speed of light in a vacuum. First, it is necessary to clarify some basics of quantum mechanics. In quantum mechanics, entities such as light and electrons possess both a particle nature and a wave nature. In other words, they can sometimes behave like particles, and sometimes like waves, depending upon how they are being experimented upon. For example, electrons sent through a sheet containing a pair of slits show interference, like waves, while light is made up of tiny particles, or corpuscles, known as photons, as well as showing wave phenomena such as interference. This means that, just as a mathematical equation can be used to describe the state of a wave at a particular time, a wave equation can be used to describe all particles, since they all have a wave nature. In quantum mechanics, the wave equation that is utilized for subatomic particles is referred to as the Schrödinger equation, named after physicist Erwin Schrödinger, who formulated it. A solution to this equation is referred to as a wave function. Strangely, however, the wave function does not describe exactly where the particle's location is, but, rather, the probabilities that its location will be in various places.
It was once thought by many physicists, including Einstein, that this uncertainty entailed that scientists were unaware of certain information, and that, once this information was filled in, the wave function would be able to tell us the particle's exact location with certainty. In other words, physicists thought that this probability at such tiny scales was no different from the probability we encounter in everyday life -- for example, if someone trapped inside a building who has no idea what the weather is outside were to say "There is a 60% probability that it is rainy right now, and a 40% probability that it is sunny right now". In reality, it would be either rainy or sunny outside right now, but the individual stuck in the building does not currently possess enough information to make the determination as to which one happens to be the case. Further experimental evidence showed that this was, alas, not the case. Rather than merely reflecting scientists' lack of knowledge, it was shown that the probability at the quantum scale is inherent, meaning that, prior to measurement, a particle really does lack a precise location, and only settles into a particular location once it is measured. This baffled physicists profoundly. Many found themselves incredulous, and started searching for explanations. Some of the proposed explanations include the idea that the consciousness of the observer, when observing and measuring the particle, forces it to become restricted to one particular location. Others invoke the process known as quantum decoherence, in which interaction with the environment causes a superposition of states to break down, in a sense, into what appears to be a single state, as the smaller quantum system under observation merges into a larger quantum system composed of itself and parts of its environment. The interpretation of the probabilities of quantum mechanics that this article focuses its attention on, however, is the Many Worlds Interpretation, originally formulated by physicist Hugh Everett III in the year 1957 of the decimal Gregorian calendar. This interpretation states that the probabilities described by the wave function represent a superposition of all of the copies of the object being measured that exist in parallel universes, and that, when the measurement is performed, the observer can only observe the particle that exists in the universe that they are in. Meanwhile, leaving the realm of quantum mechanics altogether and entering the realm of cosmology -- the study of the origins, evolution, and large-scale structure of the universe, and reality, as a whole -- it is thought that the amount of space in the universe beyond that which we can detect, due to the light from there not having had sufficient time to reach us yet, might be infinite, or finite but very large. If so, then, as there are a finite number of ways that particles can be arranged to form objects, any possible scenario would be able to occur in some region of space. This has led to the formulation of another multiverse theory, known as the cosmological or spatial multiverse model.
This model postulates that, in the regions of space beyond that from which light has had sufficient time to reach us (the region accessible to us being known as our Hubble volume), if you were to travel far enough, by the pure laws of chance and probability, you would eventually come across numerous other Milky Way Galaxies, numerous other Solar Systems like ours within them, and numerous other Earths within those, but each one would be slightly different from ours in some ways. For example, on some of these other Earths, situations and characters that are part of fiction in our own Hubble volume would actually be real. There could be a Jurassic Park Universe in which Isla Nublar and Isla Sorna exist, and a company called InGen actually cloned dinosaurs and placed them on the islands; a Full House and Family Matters Universe in which these shows and the characters within them are real (these shows must take place in the same universe, as Steve Urkel from Family Matters once made a cameo appearance on Full House); even a Land Before Time Universe in which dinosaurs' neurological and throat anatomy evolved in such a way as to allow them to speak, and the characters and situations from that series are real. The suggestion has been made, and I make it again here, that both of these types of multiverse models -- the one derived from the weird probability superpositions of quantum mechanics, and the one derived from the inferred vastness of space -- might, in fact, be one and the same. In this way, the quantum mechanical superposition of probabilities would constitute a description of all of the copies or versions of an object under measurement, as they exist in separate Hubble volumes, separated by vast expanses of space. The probabilistic nature of the measurement, then, would come about as a result of the mathematical Schrödinger equation, and the wave function contained within it, not being able to tell you which Hubble volume the observer performing the measurement happens to be situated within. I find this merging of these two varieties of multiverses to be quite an elegant theory, indeed, and it has the additional benefit of being more parsimonious than proposing two different types of multiverse that contain largely the same content. The fact that the same wave function would describe these various particles, in different Hubble volumes of space, would entail that they are entangled. Entanglement requires some kind of mechanism by which the various copies in different Hubble volumes can communicate information with each other nearly instantaneously, regardless of the vastness of the intervening distance. I here propose a solution that has already been proposed by others: namely, that tiny wormholes could connect entangled particles. This conjecture has been termed the ER=EPR model. Here, I put it into the context of the quantum/cosmological-combined multiverse model. In this model, these tiny wormholes would connect different versions of an object in different universes, allowing quantum entanglement to exist between them. I take it a step further, and propose another, more controversial idea: combining retrocausality and backwards time travel with the ER=EPR model. Others have hypothesized that quantum entanglement could be explained by signals traveling backwards in time to a moment when the two entangled particles were closer together, and could thus transmit information easily.
I find this an elegant solution, as, even with the addition of the tiny wormholes, the action could not be instantaneous: since nothing can travel faster than the speed of light, all travel through a wormhole would do is considerably shorten the journey a signal from one particle would need to take to reach the other, not render it instantaneous, as is observed in quantum entanglement. Allowing backward causation would explain this seemingly instantaneous action at a distance, as, then, the connection would have already been made in the past, prior to the measurements being performed on the entangled particles. I propose that, in a standard quantum mechanical experiment described by the Schrödinger equation and its wave function, the probabilistic superposition of states represents all of the versions of a particle existing in different Hubble volumes, separated by vast expanses of space. They are, therefore, entangled. These entangled particles would be able to transmit information between each other, and, thus, have the ability to be instantly affected by measurements performed upon their counterparts. A possible explanation for their entanglement is that they are connected by miniature wormholes, which connect back in time to a period in the past, perhaps very early on in the universe's history, shortly after the Big Bang, when these particles, or the matter that would later go on to become them, were situated close enough to each other that normal signal transmission between them could occur easily. This would mean that the connection between them could be maintained, as, no matter how far apart the particles had drifted, the signal could always go back, through a minuscule wormhole, to a time when they were close enough. After a signal from one particle is sent through the wormhole back in time to the other particle in the past, perhaps the other particle could subsequently retain the information from the signal as it travels into the future, meaning that, by the time it is separated by the vast expanses of space between Hubble volumes that not even light has yet been able to traverse, it would retain information about its -- now quite far-away -- counterpart. My new model combines the Many Worlds Interpretation of quantum mechanics, the multiverse model containing numerous Hubble volumes, the ER=EPR model of tiny wormholes linking quantum-entangled particles, and retrocausality & backwards time travel into one model that, I feel, comprehensively explicates both many of the mysteries of quantum mechanics -- including probabilities, superpositions, and entanglement -- and the mysteries of the multiverses. This hypothesis of mine is by no means confirmed, and is still tentative, but I can only hope that further discoveries and experimentally-obtained evidence in the future might, perhaps, be able to corroborate it. Any constructive criticism or suggestions for improving this model, which I term the Quantum Hubble Volumes Temporal Wormhole Model, would be highly appreciated.

Saturday, March 18, 2017
In Response To Michael L. Woodruff On Bacterial Sentience
Michael L. Woodruff wrote and published an article in the journal Animal Sentience criticizing the idea that sentience exists in bacteria.
Woodruff cites two reasons: first, that the processes often cited as showcasing bacterial sentience are not homologous to those thought to control sentience in multicellular neuronal organisms, and second, that the aforementioned processes can be explained in terms of purely biochemical interactions, with no need to invoke sentience as an explanation for them. Here, I will respond to both of Woodruff's arguments. The objection is raised that the genes coding for the chemotaxis system of bacteria are different from those coding for biological sensitivity in multicellular organisms with nervous systems; the bacterial chemotactic system's genes "do not demonstrate broad species continuity". I fail to see how this has any bearing at all on the question of sentience in bacteria. Convergent evolution is a well-known phenomenon in organismic biology, so why can't it apply to sentience, as well? Why couldn't bacteria and multicellular, neuronal organisms have independently evolved sentience, from different genes? Woodruff then states that, as the chemotaxis process in bacteria is carried out by a series of biochemical processes and interactions, it is unnecessary to "admit sentience as an explanatory variable to explain" it. But is this not true of even human neurological processes and interactions? After all, is not the indubitably sentient decision, by a human, to open a door merely sensory nerves in the skin communicating with neurons in the brain, and those neurons in the brain then communicating with muscles in the hand, using action potentials (electrical signals) and chemical neurotransmitters? The process of a human opening a door involves touch nerve receptors, which communicate the touch to the brain, which then sends a signal to the muscles in the hand to open the door. Likewise, ligands (chemicals that bond to other chemicals) in a bacterium's environment are sensed by the externally-protruding domains of its sensory proteins, which send a chemical signal -- a protein termed CheY -- to bind to a rotor of the flagellum, and, thus, control the flagellum's, and, in turn, the bacterium's, direction of motion. After all, even in humans, such processes as thought and emotion are thought to be mediated by neurotransmitters, including dopamine and glutamate, and electrical signals. One could easily invoke Occam's Razor to claim that human behaviour, being, as it is, controlled by the transmission of electrical and chemical signals between neurons, can be sufficiently explicated without inferring the presence of sentience. Just as a human opening a door occurs through neurons in the hand, after sensing the environment, sending electrical signals and chemicals to the brain, which then sends those aforementioned signals to the muscles in the hand, ordering them to move and open the door, likewise, a bacterium's tumbling occurs through external sensory protein domains, after sensing the environment, signalling via the CheW adaptor protein to the CheA protein, which, in turn, sends a CheY chemical signal, across the cytoplasm, to the protein that controls the direction of rotation of the flagellum, FliM. Once CheY binds to the flagellar rotor, it induces the bacterium to tumble and to change its direction of motion. Notice the similarities? In both the human's case and the bacterium's case, the actions of opening a door and reversing swimming direction, respectively, can be adequately and satisfactorily explained with molecular processes and signal transmissions.
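To illustrate the structural parallel being drawn here, below is a deliberately simplified Python sketch of my own (a toy illustration only; real chemotaxis involves phosphorylation kinetics, adaptation via CheB/CheR, and gradient sensing) of the signal chain just described, from ligand sensing to flagellar switching:

```python
# Toy model of the bacterial chemotaxis signal chain described above:
# ligand -> receptor domain -> CheA (coupled via CheW) -> CheY -> FliM -> tumble.

def receptor_senses(ligand_is_repellent: bool) -> bool:
    """External sensory protein domain detects a ligand and relays the signal."""
    return ligand_is_repellent

def che_a_active(receptor_signal: bool) -> bool:
    """CheA, coupled to the receptor by the CheW adaptor, activates on a signal."""
    return receptor_signal

def che_y_binds_flim(kinase_active: bool) -> bool:
    """Active CheA sends CheY across the cytoplasm to bind FliM on the rotor."""
    return kinase_active

def bacterium_behavior(ligand_is_repellent: bool) -> str:
    """CheY bound to FliM reverses flagellar rotation, causing a tumble."""
    if che_y_binds_flim(che_a_active(receptor_senses(ligand_is_repellent))):
        return "tumble (change direction)"
    return "run (keep swimming straight)"

print(bacterium_behavior(ligand_is_repellent=True))   # tumble (change direction)
print(bacterium_behavior(ligand_is_repellent=False))  # run (keep swimming straight)
```

The point of the sketch is structural rather than biochemical: an exactly analogous stimulus-relay-effector chain could be written for the door-opening example, which is precisely the symmetry the argument above relies upon.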
What Woodruff said about the bacterium's case applies just as well to the human's case. In both cases, however, there still lies the question of "why?" Why, in the human's case, did the brain, after processing the information about the external environment from the sensory nerves, decide to send signals to the muscles telling them to move? And why, in the bacterium's case, did CheA, after receiving the information about the external environment from the sensory protein domains, decide to send CheY to the flagellum, instructing it to modify its direction of movement? I propose, here, that, in both organisms' cases, the fact that a decision to initiate a behavior was made upon retrieval and processing of cues from the external environment could, perhaps, be indicative of conscious sentience being a factor in the neurotransmitter- and action potential-mediated interneuronal interactions of multicellular neuronal organisms, as well as in the chemical and enzyme-mediated intermolecular interactions of unicellular organisms, respectively.
Woodruff, Michael L. (2016) "Bacteria and the cellular basis of consciousness: Commentary on Reber on Origins of Mind". Animal Sentience, 126.

Friday, March 17, 2017
Sasquatch Habitat And Population Size: Some Calculations
While I wrote an article that was skeptical about Sasquatches, as well as Yetis, quite recently, by no means does that entail that I blindly accept all arguments offered by skeptics against the existence of these creatures. One argument that I have been thinking about lately is the claim that, as large mammals require a home range that is proportional to their body mass, and there is little forest habitat in the Pacific Northwest of North America, Sasquatch would either have been discovered long ago, or does not exist, as there is not sufficient forest to allow a breeding population of these creatures to remain hidden until now. Never content to just accept whatever information I read without subjecting it to some critical analysis and skeptical scientific scrutiny, I decided to test this claim by carrying out my own calculations to determine how capable the forest habitat of the Pacific Northwest really is of supporting a viable breeding population of these hypothetical animals. The correlation between an animal's body mass and the size of its home range is furnished by the following formula: Home Range = 0.024 * Body Mass^1.38. I was not able to find, in any sources, the answer to the question nagging me: Does this formula refer to the kilometers and kilograms of the metric system, or to the miles and pounds of the imperial system? In any case, as miles are larger than kilometers and pounds are smaller than kilograms, utilizing miles and pounds would have the effect that the area of the home range would be represented by a smaller number, and the mass of the animal would be represented by a larger number. Therefore, this would make the calculated plausibility of Sasquatch lower than if kilometers and kilograms had been utilized in their stead. Since I am trying to stay as conservative and critical as I can possibly be (for reasons I will state at the end of this post), I decided to plug in the numbers that would render it the least likely that a viable Sasquatch population could exist in the Pacific Northwest, meaning that I decided to use miles and pounds as units.
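The arithmetic worked through by hand below can also be reproduced with a few lines of Python (a sketch of my own; the coefficient and exponent are those of the home-range formula quoted above, and the unit ambiguity just discussed still applies to whatever values you feed in):

```python
def home_range(body_mass: float) -> float:
    """Allometric home-range estimate: HR = 0.024 * mass^1.38.
    Units are whatever you supply (lb and mi^2, or kg and km^2);
    as noted above, the formula's intended units are uncertain."""
    return 0.024 * body_mass ** 1.38

def max_population(forest_area: float, body_mass: float) -> float:
    """How many non-overlapping home ranges fit into the available forest."""
    return forest_area / home_range(body_mass)

# Imperial reading: a 1,000 lb animal in 114,000 mi^2 of forest.
print(round(max_population(114_000, 1_000)))        # ~344 individuals

# Metric reading: 1,000 lb ~= 453.592 kg; 114,000 mi^2 ~= 295,258.6 km^2.
print(round(max_population(295_258.645, 453.592)))  # ~2,653 individuals
```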
Additionally, while there are varying hypothetical speculations about the body mass of Sasquatch in the literature, I decided to go with 1,000 pounds, reportedly the highest end of the range, according to a Bigfoot research group. Meanwhile, according to the World Wildlife Fund, also known as the Worldwide Fund For Nature, there are 114,000 square miles of forest in the Pacific Northwest. I then plugged 1,000 pounds and 114,000 square miles into the equation relating home range to body mass:
HR = 0.024 x 1,000^1.38
1,000^1.38 = 13,803.8426
HR = 0.024 x 13,803.8426
HR = 331.29222 mi.^2
So I got the result that the home range for one 1,000-pound Sasquatch would be 331.29222 square miles. Then, I divided the estimated area of forest in the Pacific Northwest, about 114,000 square miles, by this number, to get the estimated population of Sasquatches that could inhabit this region.
Pop. = 114,000/331.29222
Pop. = 344.1070826 individuals
My calculated result was that there could be a population of about 344 Sasquatches in the Pacific Northwest. Now the question arises: Is even this estimate, which I tried to lowball as much as I could, enough to constitute a viable breeding population of animals? Well, considering the fact that many species and subspecies of large-bodied mammals are currently so endangered that their populations are far smaller than this estimate, the South China Tiger offering just one example, I would say yes. Indeed, according to the Encyclopedia Britannica, a general rule of thumb is that 50 is the minimum number of individuals needed for a genetically viable breeding population. Sasquatch, according to my calculations, would be well over 300 individuals. Whether or not that population is large enough to furnish enough genetic diversity to sustain the population for long periods of time into the future, in a world in which the effects of human activity run rampant throughout the biosphere, is a different story. Indeed, if Sasquatch exists, it may be that their population was once higher in the past, and has now declined as a result of human encroachment onto their habitats, in which case, if it is ever discovered, it would likely be classified as an endangered species and enjoy the full protection of the law. And now I come to the reason why I intentionally tried to lowball the estimates as much as I could. And that is to demonstrate that, even in the "worst-case" scenario for Sasquatch's existence/"best-case" scenario for its non-existence, the calculations would still permit a viable breeding population of Sasquatches to exist in the Pacific Northwest of North America. It may very well be that Sasquatch weighs far less than 1,000 pounds, or that this formula is in the context of using metric units of measure, rather than imperial ones (indeed, considering that metric units tend to be far more often utilized as the standard units of measure in the sciences, I think the latter is actually quite likely). Now, keeping the body mass of the animal constant, I will calculate the estimated viable population size using the aforementioned metric units. In metric units, 1,000 pounds gets converted to 453.592 kilograms, while 114,000 square miles gets converted into 295,258.645 square kilometers.
HR = 0.024 x 453.592^1.38
453.592^1.38 = 4,636.585077
HR = 0.024 x 4,636.585077
HR = 111.27804185 km.^2
Now, I, once again, divide the forest area of the Pacific Northwest by the home range to get an estimated maximum population size.
Pop. = 295,258.645/111.27804185
Pop. = 2,653.3414867 individuals
See how much of a difference that made? Now we have a population of over 2,600 individuals, close to 3,000. This is roughly comparable to what is thought to be the population of remaining wild tigers in the entire world. So, to recap: Am I a believer in Bigfoot? No. I do not have belief or faith in cryptids, and I go where my evidence, calculations, and logic lead me. And my calculations lead me to the conclusion that, despite the paucity of scientific evidence that withstands the scientific criteria for proving the existence of a given species beyond reasonable doubt, the argument against the possible existence of these creatures from the ecological body-size-to-home-range ratio can be safely ruled out.
References/Works Cited:
• du Toit, J.T. December 1990. "Home range – body mass relations: a field study on African browsing ruminants". Oecologia.
• Vath, Carrie L. and Robinson, Scott K. 9 December 2015. "Minimum viable population (MVP)". Encyclopedia Britannica.
• Parker, Edward. "Pacific Temperate Rainforests". World Wildlife Fund/Worldwide Fund For Nature.

Saturday, March 4, 2017
A "Nanobrain" For Unicellular Organisms Via A System Of Interconnected Signal-Transducing Proteins
I mentioned earlier that some studies are starting to show evidence of cognition in unicellular organisms, including slime molds and bacteria, that lack brains or nervous systems. However, there perhaps exists an alternative plausible mechanism explaining how these attributes could exist in these brainless creatures. This is the fact that, in every unicellular organism, the transmission of signals between components within the cell occurs regularly. There is a network of proteins that constitutes the medium through which these signals are conveyed, with each protein assuming the same role as a neuron in an organism with a nervous system, and the ends of the proteins, referred to as structural domains, assuming the same role as the ends of neurons, with both transmitting and receiving signals to and from other proteins and neurons, respectively. We know that the phenomenon of convergent evolution, in which different biological approaches to the same function arise in disparate taxa, is a common aspect of the evolutionary landscape. I find it plausible that a system of proteins through which signal transduction occurs, forming the equivalent of a "nanobrain" analogous to the brains of multicellular organisms, has allowed unicellular microorganisms to evolve the same functions of cognition, communication, and possibly consciousness, sentience, and self-awareness as multicellular neuronal organisms.
Marks, Friedrich; Klingmüller, Ursula; Müller-Decker, Karen. Cellular Signal Processing: An Introduction to the Molecular Mechanisms of Signal Transduction. Garland Science, Taylor and Francis Group, LLC. Print.

No, Tetragametic Chimerism Poses No Threat To The Individuality Of Early Embryos
In addition to the twinning argument, one additional argument sometimes utilized to deny the individuality of early embryos is the fact that two embryos are capable of fusing together to form a single organism. This process is known as tetragametic chimerism, and the resulting individual is referred to as a tetragametic chimera, or simply a chimera. They are called tetragametic because they originated from four gametes, twice as many as someone who is not a chimera.
The argument asserts that, as two embryos have the potential to become one individual, each embryo, before fusion, cannot be regarded as a single individual in its own right. However, I find this argument to be as jejune and flawed as the twinning objection, and I will elucidate why I think so. Just as the twinning argument is rendered absurd by the fact that any adult animal could potentially be cloned -- cloning being basically delayed monozygotic twinning, and, in fact, having even been referred to as such in the peer-reviewed scientific literature, as shown in the example cited below -- I think that the chimerism argument is rendered absurd by the fact that organ transplants between adult animals are not just theoretically possible, but already happen quite routinely. As a gedankenexperiment, let us envision a scenario wherein half of one adult human's organs are defective, and urgently need to be replaced. Now let us say that half of the organs from another adult human's body are removed, killing the unfortunate donor in the process, and transplanted into the recipient, with the result that the recipient now has half of the organs in their body originating from someone else, and comprised of cells with a different genome, rendering them a postnatally-derived tetragametic chimera. In this scenario, no one would deny that, prior to the fusion, there existed two distinct individual adult organisms. Likewise, the same would hold when this process occurs involving a pair of early embryos coalescing into a singleton. While, for ethical reasons, such a scenario is obviously unlikely to happen, it still means that, at least in principle, it is possible to form tetragametic chimeras in adulthood via the process of organ transplantation, just as, at least in principle, it is possible to form monozygotic twins in adulthood via the process of cloning. Therefore, just as the hypothetical possibility of cloning at any age of postnatal life renders absurd the argument that an embryo's ability to split into monozygotic twins means it is not yet an individual, so, too, does the hypothetical possibility of extensive organ transplantation at any age of postnatal life render absurd the argument that the ability of two embryos to combine into one during tetragametic chimerism means that neither is yet an individual.
"Human clone or a delayed twin?" Med Wieku Rozwoj. 2001;5(1 Suppl 1):39-43.

Saturday, February 25, 2017
A Review Of The Nessie Chapter In Abominable Science!: Origins of the Yeti, Nessie, and other famous cryptids by Daniel Loxton and Donald R. Prothero
I just finished reading the chapter about the Loch Ness Monster in the skeptical cryptozoology book Abominable Science! by Daniel Loxton and Donald R. Prothero. I will review it here. Overall, the chapter makes a decent analysis of several lines of evidence marshaled to support the existence of the Loch Ness cryptid, including the Surgeon's Photo taken by Dr. Robert Kenneth Wilson -- who was really a gynaecologist, rather than a surgeon, but, hey, I guess most people don't think of the Loch Ness Monster, but something else entirely, when they hear the phrase "Gynaecologist's Photo". I agree with the chapter's conclusion that the Surgeon's Photo is likely to be a hoax -- although I am still open to the possibility that it shows either a bird or an otter -- and with the same conclusion with regard to the Stuart Photo.
I should note that when I first set eyes on both of these pictures as a child, they looked off to me, in some way. I suppose my intuition wasn't too far off the mark. I also found the connection drawn between King Kong and the sighting by the Spicers enlightening, and I am inclined to think that this is quite a plausible suggestion. I think it is quite plausible that the release of the movie King Kong created an atmosphere during the time of the Great Depression which made prospective witnesses more likely to interpret sightings of common animals and disturbances of water in the loch in the light of the film, causing them to morph into a sauropod- or plesiosaur-like entity. I might opine here that the Spicer sighting could have been a group of otters seen crossing the road, which the witnesses interpreted as a sauropod-like beast, perhaps because they were driving home groggily after seeing the movie. These are the good parts of this chapter, in my opinion. Overall, I found the analysis of evidence, such as photos and videos, to be mostly rational and cogent, with one exception. The digital enhancement of the Rines flipper photograph was emphasized, and the original, unenhanced version was shown next to the enhanced version, in an attempt to show how a plesiosaur-like flipper was detectable in the enhanced version, but not in the unenhanced version. However, for me, this juxtaposition of the images had the exact opposite effect to the one intended. Indeed, I could still clearly make out the shape of a flipper, even in the original, unenhanced version, and it is much too clear to me, I think, to be a case of pareidolia on my part. But when it came to the evaluation of the plesiosaur hypothesis and the possible entry of prospective Nessies into the loch from the ocean, I was left somewhat disappointed. I did not find the argument put forth against a plesiosaur identity being a possible one for a prospective unknown creature in Loch Ness convincing. This is because the argument overlooked key fossil finds and palaeontological studies, overlooked possibilities for plesiosaur behavior and physiology which seem plausible in light of those of extant relatives, and flatly contradicted other portions of the same chapter on the issue of entry into Loch Ness from the sea. It is stated that "They [plesiosaurs] were tropical animals, unsuited for the cold waters of the loch—and most plesiosaurs were marine animals, unsuited for freshwater in general". Yet a study published three years prior to this book found evidence that plesiosaurs likely possessed endothermy, colloquially referred to as "warm-bloodedness". And the claim that plesiosaurs were "tropical animals" is just false. Indeed, plesiosaur fossils have been found in several Upper Cretaceous formations in Antarctica. And while it is true that Antarctica in the Late Cretaceous was warmer than it is today, it still had a climate not too dissimilar to that of southern South America today, as one article covering an Antarctic plesiosaur fossil find noted. Considering that the southern tip of South America, Tierra del Fuego, lies at a latitude that is more southerly than Loch Ness is northerly, I doubt that a plesiosaur adapted to the cold climate of Late Cretaceous Antarctic waters would have much difficulty adapting to the cold climate of Holocene Loch Ness waters. And plesiosaur fossils have also been found in deposits indicative of their having lived in freshwater environments.
Indeed, considering that numerous modern species which spend some or much of their lives in marine environments, ranging from seals to cetaceans to Bull sharks to both saltwater crocodiles (Crocodylus porosus) and American crocodiles (Crocodylus acutus), have been known to inhabit freshwater environments as well as saltwater environments, it seems rather dogmatic to me to state that plesiosaurs could not have done the same. It is also stated that "Finally, plesiosaurs were air breathers. Any plesiosaurs in Loch Ness could be photographed several times an hour, each time they surfaced to breathe." This argument is stating that, as plesiosaurs were air-breathers, they would be seen breaking the surface of the water to take a breath far too regularly to remain inconspicuous for long in a lake such as Loch Ness. However, the idea has been previously brought forth that plesiosaurs might have evolved snorkel-like appendages on their heads that they might protrude above the surface of the water to take a breath, which would not be as conspicuous. And while it is argued that such snorkels would, nevertheless, still be detected, another option awaits in the wings. And that is the aquatic cutaneous diffusion method of respiration. Whether plesiosaurs were entirely air-breathers, or whether they respired through water, is not something that can be directly ascertained from the fossil evidence at hand. It is, in fact, entirely plausible that plesiosaurs could have been able to supplement their oxygen intake by aquatic cutaneous diffusion of oxygen -- i.e., absorbing molecules of oxygen directly from the water through their skin. Indeed, some turtles are known to respire in this way nowadays, and it is worth noting that, additionally, all humans, in utero, prior to their birth, likewise once obtained oxygen without breathing air. If plesiosaurs were able to respire in such a manner, it would render them far more adapted to an aquatic lifestyle and ecological niche. Indeed, considering that extant turtles, which are less aquatic than plesiosaurs probably were (there is evidence that plesiosaurs were viviparous, giving birth at sea, constituting evidence that they were supremely adapted to a nearly completely aquatic existence), have evolved this ability, it would be surprising if plesiosaurs did not, likewise, do the same. A plesiosaur respiring through water via cutaneous diffusion of oxygen would not have a pressing or urgent need to routinely come to the surface to breathe air, meaning that it could conceivably remain hidden in a freshwater lake for a long stretch of time. When discussing possible entry of the unidentified animals into Loch Ness from the ocean, it is stated, as well, that "The rivers and canals that flow into Loch Ness can be confidently ruled out as commuter routes for large monsters, broken up by shipping locks, or some combination." While it is true that, past a certain upper limit on size, an oceangoing creature would encounter considerable difficulty in navigating these pathways to the loch, it is worth noting that it is a confirmed fact that animals as substantially-sized as seals and porpoises have managed to do so. Indeed, it strikes me as rather perplexing that the authors spent so much of the rest of the chapter emphasizing that these known marine animals have previously made their way into Loch Ness, precisely in order to use their presence in the loch to explain Nessie sightings.
So why the double standard here? If porpoises and seals can swim into Loch Ness from the Moray Firth through the River Ness or the Caledonian Canal, why not putative Nessies, as well? The statement about "large monsters" not being able to enter the loch is a red herring, as it is by no means a prerequisite that the creatures must already be large at the time that they enter the loch. The creatures could have made their way into the loch from the ocean when they were juveniles, perhaps no larger than salmon, or even smaller, and remained in the loch until they grew larger, rendering them trapped in the loch. Indeed, this allows me to segue into another issue brought up in this chapter, that of the need to maintain a breeding population of creatures in the loch for eons. It is asserted that a population large enough to breed would necessarily be one for which there would not be enough food in the loch, and one too large to be able to remain hidden. However, it is entirely possible that, rather than a breeding population of creatures having been extant in Loch Ness since the end of the Pleistocene, occasional vagrants have navigated their way into the loch from the ocean, and remained trapped there for a generation or two, before dying out. This would have the additional advantage of explaining why sightings seem to peak in some years in comparison with others. This hypothesis has come to be referred to as the 'Rogue Nessie' hypothesis, and it is covered delightfully well by writer Kurt Burchfiel in an article for StrangeMag magazine. Finally, it is stated repeatedly that there were no sightings of a strange, unidentified creature in the same vein as Nessie at Loch Ness prior to the 1930s in the decimal Gregorian calendar. Yet this, too, is demonstrably false. Indeed, a newspaper report from the 19th century of the decimal Gregorian calendar, covering a sighting of what seemed to the locals to be an anomalously large fish in Loch Ness, stated that the locals had been inclined to regard the existence of such a beast in the loch as a reality for years, indicating that there was already a tradition of reported sightings of strange creatures in Loch Ness by this time. And, even if it were true that Nessie sightings made their debut in the 1930s, this would not be a big deal, as, with the Rogue Nessie hypothesis, which postulates that Nessie is an oceangoing creature which occasionally swims into the loch from the open ocean, it is entirely plausible that a small population of these creatures could have entered the loch for the first time in the 1930s. Overall, the chapter on Nessie, the Loch Ness Monster, the fourth chapter of Abominable Science!, contributes a decent analysis of much of the evidence purported to support this alleged cryptid, while having some deficiencies in the theoretical realm, in particular when it comes to the arguments presented against a plesiosaur identity for Nessie and those presented against the creatures being able to remain undiscovered in Loch Ness.
The truth is that the palaeontological evidence from peer-reviewed scientific journals is, at worst, indifferent to the question of whether or not a plesiosaur identity is plausible for lake monsters in general, and the Rogue Nessie hypothesis shows that the objections with regard to population size and detectability can be surmounted by certain scenarios, the plausibility of which has been borne out by documented cases of marine animals making the switch to freshwater habitats. It is worth noting at this juncture that all of the evidence and reasoning presented here applies to most reported lake cryptids, such as Champ of Lake Champlain, Ogopogo of Lake Okanagan, Storsjoodjuret or Storsie of Lake Storsjon, Selma of Lake Seljordsvatnet or Lake Seljord, Nahuelito of Lake Nahuel Huapi, etc.
References/External Links:
• Endothermy in Plesiosaurs
• Polar Plesiosaurs
• Freshwater Plesiosaurs

Saturday, February 18, 2017
An Additional Note On Monozygotic Twinning And Individuality In Embryos
I mentioned earlier that it now appears that, when monozygotic twinning occurs, an original embryo is formed at the time of egg-sperm fusion, and then some of its cells break off at the blastula stage to form a second embryo, while the original embryo continues to exist, and can regenerate its missing cells. Even if this picture turns out to be erroneous, and it turns out that monozygotic twinning erases the existence of the original embryo, and leaves two new embryos in its wake, this would still not prove that, before the twinning event occurred, there was not one individual embryo. As an analogy to help demonstrate this clearly, let us consider the fact that, in principle, every single cell could be taken from an adult animal's body, such as an adult human's, and a clone made from each one of them. This would have the result that there would be trillions of clones of the original adult, while the original adult would cease to exist. But by no means does this, somehow, retroactively negate the existence of the original adult as one individual organism, as opposed to merely a not-yet-individuated clump of cells, prior to its dismantlement and concurrent cloning. When it is realized that, regardless of what happens to the original embryo when it splits to form identical twins, triplets, quadruplets, etcetera in the monozygotic twinning process, the exact same process could theoretically happen to an adult, as well, the legitimacy of this argument against the individuality of early embryos during the stage in which monozygotic twinning is possible gets effectively flushed down the toilet.

Cryptozoology And The Whole Science Vs. Pseudoscience Debacle
It is often claimed that cryptozoology is a pseudoscience. I have written on this topic before, but I feel the need to do so once more right now, as I have encountered arguments that make it germane to revisit the subject. First, we need to define "science" and "pseudoscience". Science is a means of obtaining information by formulating ideas called hypotheses, testing them to see whether or not they match reality, and keeping or discarding them based on how well they conform to the physical evidence at hand. This process should usually be able to be repeated by others. Pseudoscience is something that has a superficial veneer of being scientific, but does not meet the key criteria of being scientific.
While it is still somewhat debated what those criteria are, the two dominant schools of thought are the logical positivist, or verificationist, philosophy of science, and the falsificationist philosophy of science, particularly the latter. Verificationism holds that, to be scientific, a hypothesis must be able to be proven, or verified, by obtaining sufficient evidence for it, while falsificationism holds that, to be scientific, a hypothesis must be able to be disproven, or falsified, by obtaining sufficient evidence against it. Cryptozoological assertions meet both of those criteria. If I assert that "a large undiscovered hominoid species is inhabiting North America", this could potentially be verified by finding a body of this hypothetical unknown hominoid. Meanwhile, it could also potentially be falsified by painstakingly searching every square centimeter of North America and failing to find one scrap of evidence, one measly little body part, to support the assertion. There is the issue that many self-proclaimed cryptozoologists insert intrinsically unfalsifiable supernatural assertions into the field, such as asserting that a given cryptid is a noncorporeal entity, such as a ghost or a phantom. Indeed, critics of cryptozoology often use the ubiquity of such supernatural-seeming reports of cryptids in the archives of cryptozoology to imply that the cryptids in question are inherently connected to the supernatural, and that, thus, it makes sense to lump in cryptozoology with the study of paranormal phenomena, such as parapsychology. Yet this is a grave error. This is because many known animals have been associated with supernatural phenomena, as well, just as frequently as cryptids, if not more so. From superstitions of black cats being associated with bad luck to reports of spectral hounds to reports of cows being abducted by aliens, all of the same criticisms that are leveled at the reported hypothetical unknown species investigated by cryptozoology could equally be applied to known species whose existence is unquestioned -- which would, by the same logic, render the entire field of zoology pseudoscientific due to its association with the supernatural. So cryptozoology deals in hypotheses that are potentially both verifiable and falsifiable, and the association of the reported creatures it investigates with the supernatural does not render it pseudoscientific any more than the association of other, known animals with the supernatural renders "mainstream" zoology pseudoscientific. One more argument commonly leveled in favor of classifying cryptozoology as a pseudoscience is that it has not had any successes thus far. While this statement is certainly questionable, and, indeed, I highly doubt its veracity and deem it untrue, even assuming that it were true, this would not render cryptozoology a pseudoscience any more than the fact that no extraterrestrial life has yet been discovered outside of Earth renders astrobiology (the study of life, including extraterrestrial life, throughout the Universe) a pseudoscience. Indeed, many of the same claims regarding cryptozoology being pseudoscientific could equally be applied to astrobiology. Yet astrobiology is widely recognized as a legitimate branch of biology, as opposed to a pseudoscience. So what gives? Why the apparent double standard here? I honestly think the reason why cryptozoology is widely panned as pseudoscientific is that it has been marred, in the popular media, by association with poorly-done versions of it that actually are pseudoscientific.
From true believers who fail to think critically and investigate what evidence they think they have managed to obtain, to those who assert a supernatural origin for certain cryptids, it is true that most of what masquerades as cryptozoology to much of the population is, indeed, pseudoscience. Much of the real scientific work going on in cryptozoology -- such as the peer-reviewed articles in the Journal of Cryptozoology, the studies of potentially undiscovered large marine species by Naish, Shanahan, Paxton, and colleagues, Bryan Sykes's studies of reported Yeti hairs, which found them to belong to bears, Karl Shuker's books, The Cryptozoologicon, etc. -- is obscure, and does not receive as much attention as the pseudoscience that surrounds it. As cryptozoology is not inherently pseudoscientific, by bringing the actual science going on in it to the forefront and drawing more attention to it, hopefully, its reputation among the scientific community can be salvaged, and serious scientific investigations of reported cryptids can occur on a wider scale than they currently do.

Wednesday, February 8, 2017
Epithelial Tissues: An Arbitrary & Artificial Grouping That Ought To Be Split Up
Histology is the study of the bodily tissues of organisms and their cellular structure. In histology, animal tissues are conventionally divided into four main types: Muscle Tissues, Connective Tissues, Nervous Tissues, and Epithelial Tissues. Muscle Tissues constitute muscles, which allow an organism to move. Connective Tissues are tissues that connect body parts to other body parts, and include bone, cartilage, and blood. Nervous Tissues constitute the nervous system, including the brain, spinal cord, and peripheral nerves, and are utilized by organisms to sense and be cognizant of their environments. It is often asserted that these four tissue types are natural groupings that arise from common shared characteristics of the tissues grouped within them. While this appears to be the case for Muscle, Nervous, and possibly Connective Tissues, I think it is not true for Epithelial Tissues. I think Epithelial Tissues are an arbitrary and artificial grouping of several disparate tissue types that humans have lumped together, without good cytological or ontogenetic justification. This article will explore Epithelial Tissues in depth, and explain why I propose that this unnatural grouping ought to be split into several different tissue types. To start out, it shall be noted that all tissues in an adult animal are ultimately derived from one of three original germ layers that develop in an embryo during a process known as gastrulation: the Ectoderm, the Mesoderm, and the Endoderm. If two or more tissues in the adult are derived from the same embryonic germ layer, then this furnishes a natural basis for them to be grouped together. Indeed, analogously to phylogeny, if two or more adult tissues share a common ancestor, so to speak, in an embryonic germ layer, this is the ontogenetic equivalent of sharing a common ancestor in phylogenetics, and, thus, provides good reason to group them together, with the resultant tissue group being the equivalent of a monophyletic group in phylogeny.
On the contrary, if two or more adult tissues do not derive from the same embryonic germ layer, then grouping them together would be analogous to grouping together two or more species that do not share a most recent common ancestor in phylogeny, rendering the resultant group the equivalent of a polyphyletic group. A notable example of such a polyphyletic grouping is Pachydermata, comprising usually large mammals with thick skin, such as rhinoceroses, hippopotamuses, and elephants. Pachydermata, as a group, has now been abandoned by those who study the phylogenetic relationships of these mammals, as it has now been demonstrated that elephants actually share a more recent common ancestor with manatees and hyraxes than with either of the other two, hippopotamuses share a more recent common ancestor with cetaceans than with either of the other two, and rhinoceroses share a more recent common ancestor with horses than with either of the other two. Now here's the kicker. While all tissues classified as Muscle Tissues are derived from the mesoderm, all tissues classified as Connective Tissues are, likewise, derived from the mesoderm, and all tissues classified as Nervous Tissues are derived from the ectoderm, tissues classified as Epithelial Tissues are derived from all three of the germ layers -- endoderm, mesoderm, and ectoderm -- with different subcategories of Epithelial Tissues being derived from different germ layers. This makes Epithelial Tissues analogous to a polyphyletic phylogenetic grouping, such as Pachydermata. Just as polyphyletic groupings have now largely fallen by the wayside in favor of the more natural monophyletic groupings in taxonomy, it makes sense, in histology, for groupings naturally derived from shared ontogenetic provenance in one of the embryonic germ layers to take precedence over artificial, arbitrary groupings of disparate tissues from different embryonic germ layers. Additionally, it shall be noted that at least Nervous Tissues and Muscle Tissues share common aspects of physical appearance. For example, although the exact specifications may vary between different locations in the nervous system, all Nervous Tissues are composed of the same type of cells, neurons. Meanwhile, while there is variation between striated, smooth, and cardiac types of muscles, all muscle tissue, likewise, is comprised of cells that have an appearance and structure that is, overall, mostly similar. The same cannot be said for Epithelial Tissues. There are numerous variegated types of Epithelial Tissues, and their cells present wildly varying morphologies. Epithelial Tissues are currently divided into seven subcategories based upon the shape and configuration of their constituent cells: simple squamous, simple cuboidal, simple columnar, stratified squamous, stratified cuboidal, pseudostratified columnar, and transitional. As shown in the juxtapositions of Figure I, Figure II, and Figure III below, these different subcategories of Epithelial Tissues look vastly different, as opposed to the subcategories of Muscle Tissues and Nervous Tissues, which, overall, present a pretty similar appearance. Additionally, unlike Muscle Tissues and Nervous Tissues, which are all universally internal, Epithelial Tissues are found both externally and internally.
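To make the polyphyly analogy concrete, here is a small Python sketch of my own (the germ-layer assignments follow standard textbook accounts; the particular tissue examples are my own illustrative choices) that checks, for each conventional tissue type, whether its members trace back to a single germ layer:

```python
# Germ-layer provenance of representative tissues (standard textbook examples).
germ_layer = {
    "skeletal muscle": "mesoderm",
    "cardiac muscle": "mesoderm",
    "bone": "mesoderm",
    "blood": "mesoderm",
    "brain neurons": "ectoderm",
    "spinal cord neurons": "ectoderm",
    "epidermis": "ectoderm",           # epithelial, from ectoderm
    "gut lining": "endoderm",          # epithelial, from endoderm
    "blood vessel lining": "mesoderm", # endothelium, often counted as a
                                       # simple squamous epithelium
}

tissue_groups = {
    "Muscle": ["skeletal muscle", "cardiac muscle"],
    "Connective": ["bone", "blood"],
    "Nervous": ["brain neurons", "spinal cord neurons"],
    "Epithelial": ["epidermis", "gut lining", "blood vessel lining"],
}

for group, members in tissue_groups.items():
    layers = {germ_layer[m] for m in members}
    verdict = "single origin" if len(layers) == 1 else "MIXED ORIGINS (polyphyletic)"
    print(f"{group}: {sorted(layers)} -> {verdict}")

# Muscle: ['mesoderm'] -> single origin
# Connective: ['mesoderm'] -> single origin
# Nervous: ['ectoderm'] -> single origin
# Epithelial: ['ectoderm', 'endoderm', 'mesoderm'] -> MIXED ORIGINS (polyphyletic)
```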
The tissue in such widely separated locations in the body as the epidermis of the skin and the lining of the gastrointestinal tract is said to consist of Epithelial Tissues, for example. An often-asserted commonality shared by all Epithelial Tissues is that their job is to protect the body from external substances in the environment. However, this seems like a rather arbitrarily-chosen criterion to me. For example, adipose tissue, or fat, also plays a role in protecting the body from various putative threats in the environment, including trauma from impacts and cold, to name two. Yet it is classified among the Connective Tissues, rather than among the Epithelial Tissues. This shows that this shared characteristic of function is not enough to group the widely differing varieties of tissues lumped under the name of Epithelial Tissues into such a broad, overarching category. Overall, to recap: Epithelial Tissues are derived from all three of the embryonic germ layers, meaning that they lack common ontogenetic provenance, unlike the other principal tissue types; they present a wide variety of cell structures and configurations, unlike the other principal tissue types; and the proposed criterion of common function is not enough to salvage the grouping, as, if applied logically and consistently, this same criterion would subsume into the category other tissues that are not classified as Epithelial Tissues, as well. This is why I propose that, since Epithelial Tissues seem to me to be an arbitrary and artificial grouping of several unrelated tissues, it would be beneficial for histology to drop this grouping and split it into several different groupings, with the result that there would be more than four principal types of tissues present in animals' bodies, just as phylogeneticists have now dropped arbitrary, artificial polyphyletic groupings in favor of natural monophyletic groupings.
Fig. I: The three primary types of neurons, cells that constitute what is classified as Nervous Tissue.
Fig. II: The three types of Muscle Tissue and their characteristics and functions.
Fig. III: The seven recognized types of tissue currently classified under the label of "Epithelial Tissues", and the characteristic shapes of the cells that comprise them.

Friday, February 3, 2017
Why Time Travel Does Not Violate The First Law Of Thermodynamics
Time Travel And Conservation Of Energy/Mass/Matter: The possibility of time travel, particularly to the past, has had numerous objections raised against it over time. Perhaps one of the most seemingly difficult to grasp is the objection that time travel, particularly to the past, violates the First Law of Thermodynamics, also known as the Law of Conservation of Energy and Mass/Matter (as energy and mass are equivalent, as shown by Albert Einstein's famous equation E = mc^2). This law states that energy can be neither created nor destroyed, but can only be changed from one form to another. The reason some have equated this to ruling out time travel is the following: You are probably aware that you existed in the past, for example, one week ago. Even prior to your conception, although you were not alive, the particles that would later make up your body still existed, but were just scattered around in various places until they later coalesced to form you. So every person comes from matter that already existed, and has since the beginning of the Universe. Let's say you time traveled to the Late Jurassic period.
Even though it is at least 145 million years before your conception, the energy that would later constitute your body exists, as tiny particles scattered throughout the world (and possibly throughout the universe -- who knows if some of the particles that would later make up your body came to Earth from outer space?). This, according to some, constitutes a violation of the First Law of Thermodynamics, since you would now coexist in the same time period alongside the particles in the past that would later form you, with the result that more energy is added to the Late Jurassic, while energy is simultaneously removed from the present Quaternary period. This is the crux of the argument against time travel from violation of conservation of energy/mass. However, I disagree with this argument, and this article will refute it by probing more deeply into the logical underpinnings at work beneath it. The Law of Conservation of Energy simply states that, in a closed system, energy cannot be created or destroyed. A closed system is defined as a system in which no input from outside of the system is received by said system. The issue of relevance here is that different time periods are emphatically, demonstrably not closed systems, due to the simple fact that entities are constantly moving forward in time, and, therefore, entering new time periods. Someone inevitably entered Wednesday from the preceding Tuesday; they did not just magically, spontaneously pop into existence on Wednesday. Additionally, general relativity shows that space and time are inextricably woven together, as complementary components of a single, unified system known as spacetime. Therefore, since individual time periods are not closed systems, we do not have to apply the conservation law to particular periods of time on their own. Considering the entire spacetime continuum, altogether, to constitute a closed system, someone popping into a past time prior to their conception, and existing alongside the particles that would later make up the ovum and spermatozoon that would eventually conceive them, would not be injecting more mass or energy into a closed system, as, even without time travel into the past, both the putative time traveller and the particles in the past that would later come to constitute their body already coexist in the spacetime continuum -- merely at different times. Travel to the past would merely bring their locations in spacetime into greater proximity with one another, as they would then be at the same time, instead of at differing times. As a thought experiment, let us now envision a wormhole connecting the year 1733 to the year 1725, for example. A person conceived in 1721, who is twelve years old in 1733 and four years old in 1725, would exist in both time periods. Now let's say the twelve-year-old goes through the wormhole, and arrives back in time in 1725 from 1733. When this happens, the twelve-year-old disappears from 1733, and reappears in 1725. If we were to consider each of the times, 1733 and 1725, as closed systems, this would, indeed, be a violation of the First Law; but since we know that they are not closed systems, we know that this is not a violation.
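A toy bookkeeping sketch in Python (my own illustration, with arbitrary numbers) makes the accounting explicit: the per-year totals change when the traveller jumps, but the total over the whole spacetime continuum does not:

```python
# Arbitrary illustrative numbers: the total energy content of each year-slice,
# with the twelve-year-old traveller's mass-energy counted as 1 unit.
energy_by_year = {1725: 100.0, 1733: 100.0}
TRAVELLER = 1.0

total_before = sum(energy_by_year.values())

# The traveller departs 1733 through the wormhole and arrives in 1725.
energy_by_year[1733] -= TRAVELLER
energy_by_year[1725] += TRAVELLER

total_after = sum(energy_by_year.values())

print(energy_by_year)             # {1725: 101.0, 1733: 99.0} -- the slices changed
print(total_before, total_after)  # 200.0 200.0 -- the closed system's total did not
```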
If we are to consider the entire spacetime continuum, as a whole, to be a closed system, then there is no violation of the First Law of Thermodynamics inherent in this situation, as the disappearance of the time traveller from 1733 is balanced out by his or her subsequent reappearance in 1725. It is just like how removing a peanut from a bag of peanuts does not violate the law of conservation of energy/mass, as the peanut bag is not a closed system, but, rather, part of a closed system. Energy/matter can, indeed, be displaced within a closed system. And being displaced is completely different from being destroyed or created. Energy can be displaced from one region of a closed system and arrive at another region in its stead. There is, theoretically, no reason that a person could not coexist at the same time as the particles which would later go on to constitute their physique, instead of existing at a different time from them. Only the location of the person along the time dimension would have changed, without any new energy being created, so this would not violate the Law of Conservation of Energy, and, by extension, of Mass and of Matter. Overall, this argument against time travel, particularly time travel to the past, seems compelling at first glance, but, upon closer examination, its faults become readily apparent. It should be noted that one may feel tempted to accept arguments against the possibility of time travel because time travel contradicts common sense. However, science makes numerous statements, some of them established facts, which contradict common sense. Common sense is not an infallible arbiter of truth. One must always tread with caution, think critically about any argument one encounters, and parse it logically, even if it seems to appeal to intuitive notions of common sense. This is how progress is made, and how new discoveries that potentially overturn paradigms occur.
Tuesday, January 31, 2017
Consciousness, Sentience, And Self-Awareness: An Overview
Consciousness, sentience, and self-awareness are among the most contentious topics in biology, as well as in popular culture. In the past, it was commonly assumed by eminent philosophers that only humans were conscious and sentient, and that no other animals, let alone non-animalian organisms, were. Additionally, even now, it is commonly believed that some humans younger than a certain age, such as those in the prenatal stages of life, are not capable of possessing these qualities. But a mass of scientific research, welling up to a profound crescendo which cannot be ignored, has been accumulating over the years that contradicts these assertions. No longer can we claim, while still remaining on solidly grounded scientific footing, that only postnatal Homo sapiens are conscious, sentient biological entities. In fact, one of the core assumptions accepted even by many in the scientific community, that a brain, or, at the very least, a nervous system composed of neuronal cells, is necessary for consciousness, sentience, and self-awareness, has now started to be persuasively challenged by the evidence. This is the primary focus of the present article. Firstly, we need to define these terms.
Consciousness can be defined as an awareness of one's surroundings; sentience can be defined as an ability to perceive subjective states (i.e., "This situation is good for me", "This situation is bad for me", etc.); and self-awareness can be defined as awareness that one exists, and recognition of oneself as an individual, distinct from others. Based on these very simple criteria, it shall be shown that the widely accepted assertion that only humans, and only humans at a certain ontogenetic stage, at that, possess these qualities is simply not concordant with the evidence presently at hand. Let us start with the evidence from those creatures closest to home, so to speak: members of the same species in which these qualities are accepted as existing, Homo sapiens, but at ontogenetic stages where they are assumed not to possess them, namely neonatal and prenatal humans. It is very common to encounter statements that a fetus is not conscious, sentient, or self-aware. Some even go as far as saying that a newborn baby, after birth, does not yet have those qualities. Yet a cursory overview of the scientific literature on this subject reveals these assertions to be grounded more in preconceived notions than in fact. A study has shown that newborn babies can recognize the sound of their own cry when heard among the sounds of other babies' cries and the sounds of other animals, revealing a type of self-awareness at the neonatal stage of life. The following is purely anecdotal, and thus cannot count as empirical scientific data, but one of my own cousins once removed had, at six months after her birth, according to her parents, already developed a preoccupation with her own reflection in mirrors, a preoccupation which she does not display when observing the reflections of other objects in mirrors -- an indication of an awareness of a sense of self. A study by Umberto Castiello et al. has revealed that, at least as early as fourteen weeks in utero, twins have been observed touching each other. The first inclination of the reader would be to dismiss these motions as mere reflexes, but the authors point out that they seem purposeful and directed. This study examined five pairs of twins in utero, all of which displayed this same behavior, with the authors therefore arriving at the conclusion that "These findings force us to predate the emergence of social behaviour". Let us now move on to the likely even more controversial portion of this article, that concerned with the research indicating the existence of these qualities, as well as numerous other cognitive capabilities, such as problem-solving and communication, in creatures completely lacking brains or nervous systems as we know them, such as plants, protozoa, and bacteria. Any mention of plant sentience, consciousness, or self-awareness is immediately marred by association with the pseudoscience that, sadly, cast a dark shadow over investigations into this subject decades ago, beginning with the publication of The Secret Life of Plants, a book which claimed that doing things to plants such as playing certain varieties of music to them would allow one to communicate telepathically with them and convey emotions, among other such mystical claims. This has led to the investigation of plant cognition being seen as taboo by serious botanists nowadays -- a rather unfortunate reality, now that renewed research is beginning to show that this avenue of investigation is, indeed, worth pursuing.
The work of scientists such as Stefano Mancuso, Richard Karban, and Monica Gagliano on plant communication and learning has sent shockwaves throughout the botanical community, bringing up memories of the not-too-pleasant specter of the pseudoscientific claims engendered by The Secret Life of Plants and its ilk. Yet this research cannot be ignored. It has been shown by the work of Karban and Mancuso that plants are capable of communicating with each other through chemical signaling, with some even likening the chemicals released when grass is cut, which give it its characteristic smell, to "screams" intended to warn surrounding plants of impending danger. Additionally, experimental research carried out by Gagliano has shown that some plants are capable of learning that a given stimulus is harmless after being exposed to it repeatedly, while still giving a defensive reaction once subjected to a different stimulus, showing that they continue to treat unfamiliar stimuli as potentially harmful. This has led to the development of a nascent branch of botany known as plant neurobiology -- a misnomer, as even the botanists who study it are aware that plants do not possess neurons in the way that animals do. While still an emerging field, it has already made promising progress, and many more insights into plant social behavior and cognition certainly await in the future. Let us now move on to the organisms that are commonly thought to lie at the very bottom of the Scala Naturae of old, the microbes and protozoa. Even these seemingly most unlikely of candidates for the presence of consciousness, sentience, and self-awareness have no shortage of studies expounding the evidence for the presence of these qualities in them. Some of the most persuasive evidence in this area has come from research on a certain species of slime mold, Physarum polycephalum. This slime mold has been shown to be capable of memorizing its history of spatial location, and of navigating a maze with such precision and ease that it would fill the most clever of human engineers with envy, as its performance is comparable to their most carefully calculated efforts. In addition, bacteria offer an impressive repertoire of cognitive and social behaviors. Bacteria are capable of processing input from their environments and producing outputs in return based upon their computation of said information. They also possess an ability known as quorum sensing: the ability to detect when a group of their own species has reached a sufficient number to be able to carry out a certain operation, implying some degree of social awareness. According to a study by the late Eshel Ben-Jacob et al., bacteria display some cognizance of the distinction between themselves and others, i.e., self-awareness. Indeed, the actions of bacteria within the bodies of host organisms, and the ongoing battle they wage with said host organisms' immune systems, have been compared in their complexity to human guerrilla warfare. Bacteria are also capable of genetic engineering, incorporating foreign DNA into their own genomes. In other words, bacteria have had the ability to genetically engineer for billions of years, while humans have had it for less than a century. This evidence is too compelling to be ignored. Renowned bacterial geneticist James A. Shapiro states that "This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings."
I will be posting much more on this topic in the near future, but it shall suffice to say that we must be more open-minded about consciousness, sentience, and self-awareness in numerous varieties of creatures, from microbes to slime molds to plants, and, therefore, by extension, to zygotes, embryos, and fetuses of all animals.
Sunday, January 15, 2017
Sasquatches And Yetis: An Overview
Saturday, January 14, 2017
Epigenetics: An Overview
In my articles on zygotes and embryos, I mentioned non-genetic factors that play crucial and significant roles in the development of individual organisms; one of those processes, which I alluded to in a single sentence, was epigenetics. In reality, such a brisk glossing-over does this very important and complex subject no justice, so I have decided to pen the present article to cover this topic in particular. What is epigenetics? To understand, we first need to cover what genes and genomes are. Genes are portions of DNA (deoxyribonucleic acid), the nucleic acid macromolecule inherited from an organism's ancestors. Each individual gene is like an instruction to produce a particular characteristic, and the entire set of genes in the DNA, all taken together, is known as a genome. The process by which these instructions actually create the structures that they code for is known as gene expression. This is where epigenetics comes in. Epigenetics is the process of controlling and modifying how genes are expressed. This seemingly innocuous fact has wider implications, for it shows that, thanks to epigenetics, it is truly inaccurate to say that we, as individual organisms, are the products of our genes alone, and that our genes represent our destinies. In reality, we are the products of our genes as well as of processes such as epigenetics, which result in non-genetic factors, including other components of the cell, such as the cytoplasm, and external factors in the environments we inhabit, playing a critical role in shaping who we are as individuals. Another important aspect of epigenetics to note is that it is, to some extent, heritable. At the time of fusion of the gametes, ovum and sperm, the resulting offspring inherits an epigenome (a set of epigenetic factors somewhat analogous to the genome, which is composed of genes, hence the name) from both of its parents. Yet another important aspect of epigenetics is that, unlike genes (which generally remain fixed throughout an individual organism's life cycle), the epigenome can be altered by an individual's experiences in its life, and this altered epigenome can then subsequently be passed down to offspring at the time of reproduction. In other words, changes to the epigenome incurred during an organism's life are heritable, allowing them to be present in the offspring of said organism from the time of said offspring's conception. An organism's epigenome is modified by its environment and experiences throughout its life, from the time it is conceived by the fusion of its parents' gametes to the time of its death.
This process, in which changes to an individual's phenotype brought on by its life experiences are subsequently inherited by its offspring, is quite reminiscent of Lamarckism, a hypothesis regarding how evolution works proposed by Jean-Baptiste Lamarck, positing that, for example, a giraffe stretching its neck to reach the tallest leaves on a tree, lengthening it slightly, would bear offspring with slightly longer necks than its own, and so on, until, over time, the giraffe population as a whole became long-necked. This hypothesis was adopted by many early proponents of evolutionary theory, including the noted American paleontologist Edward Drinker Cope, but was generally discredited once Charles Darwin's theory of evolution by natural selection arrived on the scene. However, epigenetics has, in a sense, resurrected Neo-Lamarckism. It should also be noted that, according to recent discoveries, even phenomena normally thought to be entirely the province of the nervous system, such as memories, might fall under the purview of epigenetics. I am planning to devote another full article to this later, but it shall suffice to say here that the existence of a phenomenon known as cellular memory, the ability of cells, including some besides those of the nervous system, to record information acquired during an organism's lifetime in the form of memories, has begun to be supported by studies. This means that experiences that were endured by an individual's ancestors, and which left their imprints in said ancestors' cells, could be passed on to their descendants via their gametes, meaning that even things such as memories could be, to some extent, heritable, due to epigenetics. Overall, epigenetics is among the most fascinating frontiers in the fields of developmental biology and genetics, and research on it is still in its early stages. In the future, more research could shed light on this wonderfully intriguing, and strikingly important, area of biology.
Monday, January 9, 2017
Responses To More Claims About Zygotes And Embryos
Here are some additional arguments I have encountered, in various sources, against zygotes and embryos being living individual organisms, which I will review and judge on their own merits here as well. One of the most popular arguments, widely believed by many, including some in the scientific community, is that, prior to fourteen days after fertilization of the oocyte by the spermatozoon, the embryo is not yet an individual, because there is the potential for monozygotic twinning to occur, causing there to be two individuals instead of one. This argument assumes that this split into two individuals erases the existence of the original embryo, leaving two progeny in its wake. However, in reality, it is thought that monozygotic twinning occurs at the blastocyst stage of embryonic development, in which the cells of the inner cell mass have separated from the cells on the outside of the embryo, which form a structure called the trophoblast. When monozygotic twinning occurs, part of the blastocyst separates from the rest of the embryo, splitting off and giving rise to a genetically identical clone, or twin. It is very important to note here that this process does not erase the existence of the original embryo; in fact, due to the embryo's amazing ability to heal its wounds and regenerate missing cells, it actually makes a pretty decent recovery afterwards. Neurobiologist Maureen L.
Condic compared this process to an adult human's arm being cut off and used to create a clone, while the original person regenerates the missing arm afterwards. Indeed, the very mention of cloning allows me to segue into the fact that, as human cloning, by the merging of a reprogrammed somatic cell with an oocyte, is at least hypothetically possible, you or I have the potential to be cloned, via what is basically the same process as monozygotic twinning, at any moment. Therefore, if we accept the argument against individuality from twinning/cloning, then no adult animals, including humans, are ever individuals, as they could, potentially, be cloned at any time. This is obviously an absurdity, which means the above argument must be one as well. Another argument, this one based more on lack of information than anything else, really, is that, as embryos are capable of being frozen and thawed back out many years later, emerging alive, while this has not been done with adult animals yet, embryos must be less alive than adult animals. Yet a simple exploration of what actually happens during the embryo-freezing process dispels this one entirely. During this process, water is expelled from the cells, since water forms sharp crystals that penetrate and kill cells when it freezes, making it dangerous and deadly to allow an organism to freeze without this step. Then, antifreeze is put into the cells in place of the expelled water. The only reason why this has, to date, been done successfully only on embryos, and not on adults, is simply that it is far less practically feasible to carry out this process on an adult organism, because the latter is so much larger than an embryo. It is only a matter of practicality based on physical size. There is nothing inherently different between an embryo and an adult that causes this difference. Who knows? Perhaps, in the future, preserved, frozen adult humans will be a reality, just as preserved, frozen embryonic humans are now. Lastly, there is the argument that, since the trophoblast forms what are commonly referred to as extraembryonic tissues, including the placenta and yolk sac, while the inner cell mass forms what is thought of as the embryo proper, the embryo cannot yet be an individual before the separation of the trophoblast from the inner cell mass at the blastocyst stage. However, a closer examination of this argument reveals critical faults. The fact that the structures formed by the trophoblast are referred to as extraembryonic structures is rather misleading; in reality, they are, indeed, part of the embryo's body, just like what is thought of as the embryo proper. The fact that they are utilized only during the antenatal stage of life, and subsequently shed upon parturition, makes them no less part of the embryo's body than the fact that milk teeth are utilized only during childhood, and are subsequently shed, makes milk teeth any less part of postnatal children's bodies. Overall, these three additional arguments against zygotes and early embryos being individual organisms can all be soundly rejected.
The argument from twinning can be rejected because twinning only produces a new embryo, while the original remains, and because any adult organism could potentially be cloned at any time. The argument from freezing and preservation can be rejected because the difference between adults and embryos in this respect is only a matter of size, and nothing more fundamental than that. And the argument from extraembryonic structures can be rejected because these structures are, indeed, parts of the embryo's body, which are subsequently shed after birth.
Condic, Maureen L. (2014). Totipotency: What It Is and What It Is Not.
Submicroscopic Mechanics
Submicroscopic mechanics [1,2,3,4,5,6] describes the behaviour of the canonical particle in the real physical space constructed as the tessel-lattice of primary topological balls. The size of a cell in such a mathematical lattice is identified with Planck's fundamental length $l_{\rm f} =\sqrt{\hbar G/c^3} \sim 10^{-35}$ m. Motion in the tessel-lattice is strictly deterministic, because a particle moving between the tessel-lattice's cells must interact with them, and hence its path is traced. At the same time we can calculate the particle's parameters (the kinetic energy, velocity, momentum, etc.) at any point of the particle's trajectory. The notion of the particle is exactly defined: it appears from an ordinary cell of the tessel-lattice when dimensional changes occur locally. In other words, the cell experiences fractal volumetric and surface deformations, which represent its mass and charge, respectively. A canonical particle is accompanied by its deformation coat, in which oscillations of cells take place. The deformation coat can be simulated by a crystallite with radius $\lambda_{\rm Com} = h/(m c)$, i.e. a crystallite whose nodes are occupied by identical massive model particles. The total mass of all these model particles is equal to the mass of the central particulate cell, $m_0$; these particles are found in a vibratory state, such that they can be described by the Lagrangian [3] \begin{align} L=\frac 1{2}\sum_{\vec n, \beta}\mu_{\vec n}\dot\zeta^2_{\vec n \beta} -\frac 1{2} \sum_{\vec n,\beta\beta^{\prime}} \gamma_{\beta \beta^{\prime}} (\zeta_{\vec n\beta} - \zeta_{\vec n-\vec a, \beta})^2 \end{align} where $\mu_{\vec n}$ is the mass of a particle located at the point $\vec n$ of the crystallite, $\zeta_{\vec n \beta}$ $(\beta =1,2,3)$ are the three components of the shift of the particle from its equilibrium position $\vec n$, and $\vec a$ and $\gamma_{\beta \beta^{\prime}}$ are the crystallite constant and the crystallite's force constant, respectively. The crystallite exists only in one excited state, such that its unique mode is characterised by the vibrational energy $\hbar \omega_0 = m_0 c^2$. In this mode all particles vibrate in directions transversal to the vector of motion of the particle (along this vector vibrations are impossible owing to the migration of the crystallite as a whole along the mentioned vector). The motion of a particulate cell accompanied by its deformation coat looks as follows: at each step, the particulate cell moves by the crystallite constant $a$, which is practically identical to the size of a cell of the tessel-lattice (i.e. Planck's fundamental length $l_{\rm f}$); the crystallite mode $\hbar \omega$ attacks the particle, knocking a fragment of its deformation out of it, or, in physical terms, a fragment of mass $\delta m$. The direction and the velocity of this elementary excitation, called an inerton, are set by the crystallite mode, whose speed is identified with the velocity of light $c$; if $\upsilon_i$ is the velocity of the particle, then the direction and magnitude of the velocity of the $i$-th inerton are found from the vector sum of velocities, such that the inerton velocity is $c_{\rm inert} = \sqrt{c^2 + \upsilon_i^2 }$. Ejected inertons must turn back to the particle, because otherwise the particle would lose its velocity (and also mass) and would eventually stop.
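To put rough numbers on the quantities just introduced, here is a small Python sketch (my own plausibility check, not part of the source papers) that evaluates the deformation coat radius $\lambda_{\rm Com} = h/(mc)$ for an electron and the inerton velocity $c_{\rm inert} = \sqrt{c^2 + \upsilon^2}$ from the vector sum above, using standard CODATA constants.

import math

h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
m_e = 9.1093837015e-31    # electron mass, kg

# Radius of the deformation coat (crystallite): lambda_Com = h / (m c)
lam_Com = h / (m_e * c)
print(f"lambda_Com (electron) = {lam_Com:.3e} m")   # ~2.43e-12 m

# Inerton velocity c_inert = sqrt(c^2 + v^2) for a particle moving at v
def c_inert(v):
    return math.sqrt(c**2 + v**2)

for beta in (0.01, 0.1, 0.5):
    v = beta * c
    print(f"v = {beta:4.2f}c  ->  c_inert = {c_inert(v):.4e} m/s")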
The number $N_{\rm inertons}$ of ejected inertons can be associated with the number of collisions of the particle with adjoining cells, which take place in the section $\lambda /2$ in which the velocity of the particle drops from the value $\upsilon$ to zero. For instance, in the case of an electron in the hydrogen atom $N_{\rm inertons} \approx \lambda /(2 l_{\rm f}) \sim 10^{25}$. Inertons ejected from the particle come back to it, reflecting from the tessel-lattice. Returned inertons bring the velocity and mass back to the particle and, hence, they guide it in the next section $\lambda/2$ of the particle's path. Such periodic motion can be described by the equation \begin{align} \mu \, d^2 r /dt^2 =- \gamma r . \end{align} The maximal distance which the particle's inertons reach, the amplitude of the inerton cloud, is $r|_{\rm max} = \Lambda$. Thus inertons periodically project out of the particle and then return. The motion of the particle and the inerton cloud enclosing it can be described by the Lagrangian (here in simplified form) \begin{align} L= \frac 12 m {\dot x}^2 + \frac 12 \mu {\dot \chi}^2 - \frac {\pi}{T} \sqrt{m\mu} \ {\dot x} \chi \end{align} where $T$ is the free run time of the particle between its collisions with the inerton cloud; then $1/T$ is the frequency of collisions. The solutions for the particle \begin{align} {\dot x} = \upsilon_0 \cdot (1- |\sin(\pi t/T)|); \end{align} \begin{align} x (t) = \upsilon_0 t + \frac{\lambda}{\pi} \cdot \{ (-1)^{[t/T]} \cos(\pi t/T) - (1+2[t/T]) \} \end{align} show oscillations of the particle's parameters: the velocity periodically changes from $\upsilon_0$ to zero, $\upsilon_0 \rightarrow 0 \rightarrow \upsilon_0$, in each section $\lambda$ of the particle's path. Therefore, the section $\lambda$ is the spatial amplitude of the particle. Analogously for the particle's inertons: \begin{align} \chi = \frac {\Lambda}{\pi} |\sin(\pi t / T) |; \end{align} \begin{align} \dot \chi = c (-1)^{[t/T]} \cos (\pi t / T), \end{align} that is, the inerton cloud periodically leaves the particle and comes back, and the parameter $\Lambda$ appears as the amplitude of oscillations of the inerton cloud. The following relationships hold: \begin{align} 1/T = \upsilon_0 / \lambda = c / \Lambda. \end{align}
Figure: Motion of the particle is associated with the ejection and reabsorption of its inerton cloud and shows an oscillation of its parameters; in particular, the velocity of the particle gradually decreases from the initial value $\upsilon_0$ to zero and then increases again to $\upsilon_0$ in each section $\lambda$ of the particle's path.
With the use of the transformation \begin{align} \dot \kappa = \dot \chi - \pi \sqrt {m / \mu} \ x / T \end{align} we can obtain the Hamiltonian that describes the motion of the particle relative to the centre of inertia of the system 'particle-inerton cloud': \begin{align} H = \frac 12 \frac {p^2}{M} + \frac 12 M (2\pi /2T)^2 x^2 . \end{align} This is the Hamiltonian of a harmonic oscillator, and hence the motion of the particle can be written in the form of the Hamilton-Jacobi equation for the shortened action $S_1$ \begin{align} \frac {1}{2m} \Big(\frac {\partial S_1}{\partial x}\Big)^2 + \frac 12 m (2\pi /2T)^2 x^2 = E \end{align} where $E$ is the energy of the moving particle. Introducing action-angle variables, we obtain the increment of the action per cycle $2T$: \begin{align} \delta S_1 =\int p \, dX = E \cdot 2T. \end{align} This equation can be rewritten through the frequency $\nu = 1/(2T)$.
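Before continuing the derivation, here is a quick numerical check (my own sketch, in Python with illustrative parameter values; note that the closed form for $x(t)$ above carries the factor $\lambda/\pi$, consistent with integrating the expression for $\dot x$). It evaluates the oscillation solutions over a few periods and verifies the relationships $1/T = \upsilon_0/\lambda = c/\Lambda$.

import numpy as np

c   = 2.99792458e8   # m/s
v0  = 1.0e6          # initial particle velocity, m/s (illustrative)
lam = 1.0e-10        # spatial amplitude lambda, m (illustrative)
T   = lam / v0       # free run time, from 1/T = v0 / lambda
Lam = c * T          # inerton cloud amplitude, from 1/T = c / Lambda

t = np.linspace(0.0, 4 * T, 2001)
n = np.floor(t / T)                      # the integer part [t/T]
sgn = np.where(n % 2 == 0, 1.0, -1.0)    # (-1)^[t/T]

x_dot   = v0 * (1.0 - np.abs(np.sin(np.pi * t / T)))
x       = v0 * t + (lam / np.pi) * (sgn * np.cos(np.pi * t / T) - (1.0 + 2.0 * n))
chi     = (Lam / np.pi) * np.abs(np.sin(np.pi * t / T))
chi_dot = c * sgn * np.cos(np.pi * t / T)

# The velocity oscillates between v0 and 0 in each section lambda of the path
assert np.isclose(x_dot.max(), v0) and np.isclose(x_dot.min(), 0.0)
# The inerton cloud displacement chi oscillates with amplitude Lambda/pi
assert np.isclose(chi.max(), Lam / np.pi, rtol=1e-3)
# The mean drift velocity over whole periods is v0 * (1 - 2/pi)
assert np.isclose(x[-1] / t[-1], v0 * (1.0 - 2.0 / np.pi))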
At the same time $1/T$ is the collision frequency of the particle with its inerton cloud. Taking into account that $E= m\upsilon_0^2 / 2$, we can also write (below $p_0 =m \upsilon_0$ is the initial momentum) \begin{align} \delta S_1 = m \upsilon_0 \cdot \upsilon_0 T = p_0 \lambda. \end{align} Identifying the left-hand sides of equations (12) and (13), i.e. the increment of the action $\delta S_1$ per period, with Planck's constant $h$, we obtain two basic relationships of quantum mechanics \begin{align} E=h\nu \ \ \ {\rm and } \ \ \ \lambda = h/p_0. \end{align} These two de Broglie relationships enable us to derive the Schrödinger equation (see de Broglie [7]). Thus the spatial amplitude $\lambda$, which was introduced above, can be set equal to the de Broglie wavelength of the particle. The availability of the correlations $\Lambda =\lambda c/ \upsilon_0$ and $\lambda_{\rm Com} = h/(mc)$ and the de Broglie wavelength $\lambda =h/ (m \upsilon_0)$ allows us to deduce a very interesting relationship: \begin{align} \Lambda = \lambda_{\rm Com} c^2 / \upsilon^2_0, \end{align} which connects the amplitude of the inerton cloud $\Lambda$ with the size of the deformation coat (crystallite) $\lambda_{\rm Com}$. From relationship (15) one can see that in the case of a small velocity of the particle, $\upsilon^2_0 / c^2 \ll 1$, the amplitude of the inerton cloud is significantly larger than the range of the deformation coat: $\Lambda \gg \lambda_{\rm Com}$. The inerton cloud carries the kinetic energy of the particle, and a detector will record the particle with the energy $E= m \upsilon_0^2 / 2$. Therefore, in this case, for the description of such a particle we have to use Schrödinger's formalism. When the velocity of the particle is close to the velocity of light, $\upsilon_0 \sim c$, the amplitude of the inerton cloud comes very close to the range of the deformation coat, $\Lambda \sim \lambda_{\rm Com}$. But the deformation coat together with the kernel (the particulate cell) is specified by the total energy of the canonical particle, and the detector will record the particle with just this energy, $E = m_0 c^2 / \sqrt {1- \upsilon_0^2 /c^2}$. Because of that, for the description of the particle in this situation we have to use the Dirac formalism. The analysis above shows that it is the deformation coat that causes a peculiar phase transition from the Schrödinger formalism to the Dirac formalism when the particle's velocity $\upsilon$ approaches the speed of light $c$. Moreover, the particle also features an inner motion (asymmetrical pulsations), which is mapped onto the formalism of quantum mechanics as the particle's spin. The inerton cloud extends up to the distance $\lambda /2$ along the particle's path and occupies a band of width $2 \Lambda$ in the transversal directions. The formalism of quantum mechanics does not take the reality of the inerton cloud into consideration, but fills a range around the particle with an abstract wave $\psi$-function. The results stated in this article enable us to reveal the true physical interpretation of the wave function $\psi$ as the particle's field of inertia. Then the expression the "material wave" acquires a real sense, because behind this term we now see not an abstract probability $| \psi (\vec r) |^2$, but the material field of inertia of the particle, and inertons become the carriers of this field.
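As a final numerical illustration (again my own, with illustrative speeds), relationship (15) can be tabulated in Python to show the crossover just described: at small velocities the inerton cloud amplitude dwarfs the deformation coat, while near the speed of light the two length scales converge.

# Ratio Lambda / lambda_Com = (c / v0)^2, from relationship (15)
def amplitude_ratio(beta):
    """beta = v0 / c; returns Lambda / lambda_Com."""
    return 1.0 / beta**2

for beta in (0.001, 0.01, 0.1, 0.9, 0.999):
    print(f"v0 = {beta:>5}c  ->  Lambda/lambda_Com = {amplitude_ratio(beta):.3e}")

# Slow particle (beta << 1): Lambda >> lambda_Com -> Schroedinger formalism
# Fast particle (beta ~ 1):  Lambda ~  lambda_Com -> Dirac formalism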
Such an interpretation of the physical nature of the wave $\psi$-function completely satisfies the conditions that Louis de Broglie laid down, namely: there should be another solution of the Schrödinger equation, and the wave function should have a true causal physical meaning, not a statistical one. It is interesting to note that this allegedly abstract function $\psi$ was directly observed in experiment [8] (so it is not so abstract!). Indeed, the researchers put it in the title of their paper: "Looking at electronic wave functions...". The inerton field has been detected in our experiments. Furthermore, submicroscopic mechanics is a starting point for understanding and deriving Newton's law of universal gravitation, and for approaching the problems of quantum gravity and the nuclear forces.
[1] V. Krasnoholovets, D. Ivanovsky, Motion of a particle and the vacuum, Physics Essays 6, no. 4, pp. 554-563 (1993).
[2] V. Krasnoholovets, Motion of a relativistic particle and the vacuum, Physics Essays 10, no. 3, pp. 407-416 (1997).
[4] V. Krasnoholovets, Space structure and quantum mechanics, Spacetime & Substance 1, no. 4, pp. 172-175 (2000).
[5] V. Krasnoholovets, Submicroscopic deterministic quantum mechanics, International Journal of Computing Anticipatory Systems 11, pp. 164-179 (2002).
[6] V. Krasnoholovets, Gravitation as deduced from submicroscopic quantum mechanics.
[7] L. de Broglie, Les incertitudes d'Heisenberg et l'interprétation probabiliste de la mécanique ondulatoire (Gauthier-Villars, Bordas, Paris, 1982), ch. 2, sect. 4. Russian translation: Heisenberg's uncertainty relations and the probabilistic interpretation of wave mechanics (Mir, Moscow, 1986), pp. 50-52.
[8] G. Briner, Ph. Hofmann, M. Doering, H. P. Rust, A. M. Bradshaw, L. Petersen, Ph. Sprunger, E. Laegsgaard, F. Besenbacher and E. W. Plummer, Looking at electronic wave functions on metal surfaces, Europhysics News 28, 148-152 (1997).
Decoherence is the study of interactions between a quantum system (generally a very small number of microscopic particles like electrons, photons, atoms, molecules, etc. - often just a single particle) and the larger macroscopic environment, which is normally treated "classically," that is, by ignoring quantum effects, but which decoherence theorists study quantum mechanically. Decoherence theorists attribute the absence of macroscopic quantum effects like interference (which is a coherent process) to interactions between a quantum system and the larger macroscopic environment. They maintain that no system can be completely isolated from the environment. The decoherence (which accounts for the disappearance) of macroscopic quantum effects is shown experimentally to be correlated with the loss of isolation. Niels Bohr maintained that a macroscopic apparatus used to "measure" quantum systems must be treated classically. John von Neumann, on the other hand, assumed that everything is made of quantum particles, even the mind of the observer. This led him and Werner Heisenberg to say that a "cut" must be located somewhere between the quantum system and the mind, which would operate in a sort of "psycho-physical parallelism." A main characteristic of quantum systems is the appearance of wavelike interference effects. These only show up in large numbers of repeated identical experiments that make measurements on single particles at a time. Interference is never directly "observed" in a single experiment. When interference is present in a system, the system is called "coherent." Decoherence then is the loss or suppression of that interference.
Interference experiments require that the system of interest is extremely well isolated from the environment, except for the "measurement apparatus." This apparatus must be capable of recording the information about what has been measured. It can be a photographic plate or an electron counter - anything capable of registering a quantum-level event, usually by releasing a cascade of metastable processes that amplify the quantum-level event to the macroscopic "classical" world, where an "observer" can see the result. This does not mean that specific quantum-level events are determined by that observer (as noted by several of the great quantum physicists - Max Born, Pascual Jordan, Erwin Schrödinger, Paul Dirac, and textbook authors Landau and Lifshitz, Albert Messiah, and Kurt Gottfried, among others). Quantum processes are happening all the time. Most quantum events are never observed, though they can be inferred from macroscopic phenomenological observations. The "decoherence program" of H. Dieter Zeh, Erich Joos, Wojciech Zurek, John Wheeler, Max Tegmark, and others has multiple aims:
1. to show how classical physics emerges from quantum physics. They call this the "quantum-to-classical transition."
2. to explain the lack of macroscopic superpositions of quantum states (e.g., Schrödinger's Cat as a superposition of live and dead cats).
3. in particular, to identify the mechanism that suppresses ("decoheres") interference between states as something involving the "environment" beyond the system and measuring apparatus.
4. to explain the appearance of particles following paths (they say there are no "particles," and maybe no paths).
5. to explain the appearance of discontinuous transitions between quantum states (there are no "quantum jumps" either).
6. to champion a "universal wave function" (as a superposition of states) that evolves in a "unitary" fashion (i.e., deterministically) according to the Schrödinger equation.
7. to clarify and perhaps solve the measurement problem, which they define as the lack of macroscopic superpositions.
8. to explain the "arrow of time."
9. to revise the foundations of quantum mechanics by changing some of its assumptions, notably challenging the "collapse" of the wave function or "projection postulate."
Decoherence theorists say that they add no new elements to quantum mechanics (such as "hidden variables"), but they do deny one of the three basic assumptions - namely, Dirac's projection postulate. This is the method used to calculate the probabilities of various outcomes, probabilities that are confirmed to several significant figures by the statistics of large numbers of identically prepared experiments. They accept (even overemphasize) Dirac's principle of superposition. Some also accept the axiom of measurement, although some of them question the link between eigenstates and eigenvalues. The decoherence program hopes to offer insights into several other important phenomena:
1. What Zurek calls the "einselection" (environment-induced superselection) of preferred states (the so-called "pointer states") in a measurement apparatus.
2. The role of the observer in quantum measurements.
3. Nonlocality and quantum entanglement (which is used to "derive" decoherence).
4. The origin of irreversibility (by "continuous monitoring").
5. The approach to thermal equilibrium.
The decoherence program finds unacceptable these aspects of the standard quantum theory:
1. Quantum "jumps" between energy eigenstates.
2. The "apparent" collapse of the wave function.
3.
In particular, the explanation of the collapse as a "mere" increase of information.
4. The "appearance" of "particles."
5. The "inconsistent" Copenhagen Interpretation - quantum "system," classical "apparatus."
6. The "insufficient" Ehrenfest Theorems.
Decoherence theorists admit that some problems remain to be addressed:
1. The "problem of outcomes." Without the collapse postulate, it is not clear how definite outcomes are to be explained. As Tegmark and Wheeler put it: The main motivation for introducing the notion of wave-function collapse had been to explain why experiments produced specific outcomes and not strange superpositions of outcomes... it is embarrassing that nobody has provided a testable deterministic equation specifying precisely when the mysterious collapse is supposed to occur.
Some of the controversial positions in decoherence theory, including the denial of collapses and particles, come straight from the work of Erwin Schrödinger, for example in his 1952 essays "Are There Quantum Jumps?" (Part I and Part II), where he denies the existence of "particles," claiming that everything can be understood as waves. Other sources include: Hugh Everett III and his "relative state" or "many worlds" interpretations of quantum mechanics; Eugene Wigner's article on the problem of measurement; and John Bell's reprise of Schrödinger's arguments on quantum jumps. Decoherence advocates therefore look to other attempts to formulate quantum mechanics. Also called "interpretations," these are more often reformulations, with different basic assumptions about the foundations of quantum mechanics. Most begin from the "universal" applicability of the unitary time evolution that results from the Schrödinger wave equation. They include:
• The DeBroglie-Bohm "pilot-wave" or "hidden variables" formulation.
• The Everett-DeWitt "relative-state" or "many worlds" formulation.
• The Ghirardi-Rimini-Weber "spontaneous collapse" formulation.
Note that these "interpretations" are often in serious conflict with one another. Where Erwin Schrödinger thinks that waves alone can explain everything (there are no particles in his theory), David Bohm thinks that particles not only exist but that every particle has a definite position that is a "hidden parameter" of his theory. H. Dieter Zeh, the founder of decoherence, sees one of two possibilities: a modification of the Schrödinger equation that explicitly describes a collapse (also called "spontaneous localization"), or an Everett-type interpretation, in which all measurement outcomes are assumed to exist in one formal superposition, but to be perceived separately as a consequence of their dynamical autonomy resulting from decoherence. It was John Bell who called Everett's many-worlds picture "extravagant." The Information Interpretation of quantum mechanics also has explanations for the measurement problem, the arrow of time, and the emergence of adequately (i.e., statistically) determined classical objects. However, I-Phi does this while accepting the standard assumptions of orthodox quantum physics. See below. We briefly review the standard theory of quantum mechanics and compare it to the "decoherence program," with a focus on the details of the measurement process. We divide measurement into several distinct steps, in order to clarify the supposed "measurement problem" (mostly the lack of macroscopic state superpositions) and perhaps "solve" it. The most famous example of probability-amplitude-wave interference is the two-slit experiment.
Interference is between the probability amplitudes whose absolute value squared gives us the probability of finding the particle at various locations behind the screen with the two slits in it. Finding the particle at a specific location is said to be a "measurement." However, if the system is prepared in an arbitrary state ψa, it can be represented as a linear combination of the system's basic energy states φn: ψa = Σ cn | φn >, with cn = < φn | ψa >. It is said to be in a "superposition" of those basic states. The probability Pn of its being found in state φn is Pn = |< φn | ψa >|2 = |cn|2. Between measurements, the time evolution of a quantum system in such a superposition of states is described by a unitary transformation U (t, t0) that preserves the same superposition of states as long as the system does not interact with another system, such as a measuring apparatus. As long as the quantum system is completely isolated from any external influences, it evolves continuously and deterministically in an exactly predictable (causal) manner. Whenever the quantum system does interact, however, with another particle or an external field, its behavior ceases to be causal and it evolves discontinuously and indeterministically. This acausal behavior is uniquely quantum mechanical. Nothing like it is possible in classical mechanics. Most attempts to "reinterpret" or "reformulate" quantum mechanics are attempts to eliminate this discontinuous acausal behavior and replace it with a deterministic process. We must clarify what we mean by "the quantum system" and "it evolves" in the previous two paragraphs. This brings us to the mysterious notion of "wave-particle duality." In the wave picture, the "quantum system" refers to the deterministic time evolution of the complex probability amplitude or quantum state vector ψa, according to the "equation of motion" for the probability amplitude wave ψa, which is the Schrödinger equation, iℏ ∂ψa/∂t = H ψa. The probability amplitude looks like a wave and the Schrödinger equation is a wave equation. But the wave is an abstract quantity whose absolute square is the probability of finding a quantum particle somewhere. It is distinctly not the particle, whose exact position is unknowable while the quantum system is evolving deterministically. It is the probability amplitude wave that interferes with itself. Particles, as such, never interfere (although they may collide). Note that we never "see" a superposition of particles in distinct states. There is no microscopic superposition in the sense of the macroscopic superposition of live and dead cats (see Schrödinger's Cat). When the particle interacts, with the measurement apparatus for example, we always find the whole particle. It suddenly appears. For example, an electron "jumps" from one orbit to another, absorbing or emitting a discrete amount of energy (a photon). When a photon or electron is fired at the two slits, its appearance at the photographic plate is sudden and discontinuous. The probability wave instantaneously becomes concentrated at the location of the particle. There is now unit probability (certainty) that the particle is located where we find it to be. This is described as the "collapse" of the wave function.
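As a concrete toy illustration of the Born rule and of "collapse" as the selection of one definite outcome (my own sketch in Python, with arbitrary illustrative amplitudes, not anything from the original text), consider a three-state system:

import numpy as np

rng = np.random.default_rng(0)

# An arbitrary state |psi_a> = sum_n c_n |phi_n> over three basis states,
# with illustrative complex amplitudes c_n, normalized below.
c = np.array([0.6 + 0.2j, 0.5 - 0.3j, 0.4 + 0.0j])
c = c / np.linalg.norm(c)

# Born rule: P_n = |c_n|^2, and the probabilities sum to 1
P = np.abs(c)**2
assert np.isclose(P.sum(), 1.0)

# "Collapse" modeled as sampling: each run yields one definite outcome n with
# probability P_n; the statistics of many identical runs reproduce P.
outcomes = rng.choice(len(c), size=100_000, p=P)
freqs = np.bincount(outcomes, minlength=len(c)) / outcomes.size
print("Born probabilities: ", np.round(P, 4))
print("Observed frequencies:", np.round(freqs, 4))

Note how the simulation mirrors the experimental situation described above: no single run ever shows a superposition; only the frequencies over many identically prepared runs reveal the underlying amplitudes.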
Where the probability amplitude might have evolved under the unitary transformation of the Schrödinger equation to have significant non-zero values in a very large volume of phase space, all that probability suddenly "collapses" (faster than the speed of light, which deeply bothered Albert Einstein) to the location of the particle. Einstein said that some mysterious "spooky action-at-a-distance" must act to prevent the appearance of a second particle at a distant point where a finite probability of appearing had existed just an instant earlier. Whereas the abstract probability amplitude moves continuously and deterministically throughout space, the concrete particle moves discontinuously and indeterministically to a particular point in space. For this collapse to be a "measurement," the new information about which location (or state) the system has collapsed into must be recorded somewhere in order for it to be "observable" by a scientist. But the vast majority of quantum events - e.g., particle collisions that change the particular states of quantum particles before and after the collision - do not leave an indelible record of their new states anywhere (except implicitly in the particles themselves). We can imagine that a quantum system initially in state ψa has interacted with another system and as a result is in a new state φn, without any macroscopic apparatus around to record this new state for a "conscious observer." H. D. Zeh describes how quantum systems may be "measured" without the recording of information: It is therefore a plausible experimental result that the interference disappears also when the passage [of an electron through a slit] is "measured" without registration of a definite result. The latter may be assumed to have become a "classical fact" as soon as the measurement has irreversibly "occurred". A quantum phenomenon may thus "become a phenomenon" without being observed. This is in contrast to Heisenberg's remark about a trajectory coming into being by its observation, or a wave function describing "human knowledge". Bohr later spoke of objective irreversible events occurring in the counter. However, what precisely is an irreversible quantum event? According to Bohr this event can not be dynamically analyzed. Analysis within the quantum mechanical formalism demonstrates nonetheless that the essential condition for this "decoherence" is that complete information about the passage is carried away in some objective physical form. This means that the state of the environment is now quantum correlated (entangled) with the relevant property of the system (such as a passage through a specific slit). This need not happen in a controllable way (as in a measurement): the "information" may as well form uncontrollable "noise", or anything else that is part of reality. In contrast to statistical correlations, quantum correlations characterize real (though nonlocal) quantum states - not any lack of information. In particular, they may describe individual physical properties, such as the non-additive total angular momentum J² of a composite system at any distance.
The Measurement Process
In order to clarify the measurement process, we separate it into several distinct stages, as follows:
• A particle collides with another microscopic particle or with a macroscopic object (which might be a measuring apparatus).
• In this scattering problem, we ignore the internal details of the collision and say that the incoming initial state ψa has changed asymptotically (discontinuously, and randomly = wave-function collapse) into the new outgoing final state φn.
• [Note that if we prepare a very large number of identical initial states ψa, the fraction of those ending up in the final state φn is just the probability |< φn | ψa >|2.]
• The information that the system was in state ψa has been lost (its path information has been erased; it is now "noise," as Zeh describes it). New information exists (implicitly in the particle, if not stored anywhere else) that the particle is in state φn.
• If the collision is with a large enough (macroscopic) apparatus, it might be capable of recording the new system-state information, by changing the quantum state of the apparatus into a "pointer state" correlated with the new system state. "Pointers" could include the precipitated silver-bromide molecules of a photographic emulsion, the condensed vapor of a Wilson cloud chamber, or the cascaded discharge of a particle detector.
• But this new information will not be indelibly recorded unless the recording apparatus can transfer away entropy greater than the negative entropy equivalent of the new information (to satisfy the second law of thermodynamics). This is the second requirement in every two-step creation of new information in the universe.
• The new information could be useful (it is negative entropy) to an information-processing system, for example, a biological cell like a brain neuron. The collision of a sodium ion (Na+) with a sodium/potassium pump (an ion channel) in the cell wall could result in the sodium ion being transported outside the cell, resetting conditions for the next firing of the neuron's action potential, for example.
• The new information could be meaningful to an information-processing agent who could not only observe it but understand it. Now neurons would fire in the mind of the conscious observer that John von Neumann and Eugene Wigner thought was necessary for the measurement process to occur at all. Von Neumann (perhaps influenced by the mystical thoughts of Niels Bohr about mind and body as examples of his "complementarity") saw three levels in a measurement:
1. the system to be observed, including light up to the retina of the observer.
2. the observer's retina, nerve tracts, and brain.
3. the observer's abstract "ego."
• John Bell asked tongue-in-cheek whether no wave function could collapse until a scientist with a Ph.D. was there to observe it. He drew a famous diagram of what he called von Neumann's "shifty split." Bell showed that one could place the arbitrary "cut" (Heisenberg called it the "Schnitt") at various levels without making any difference. But an "objective," observer-independent measurement process ends when irreversible new information has been indelibly recorded (in the photographic plate of Bell's drawing). Von Neumann's physical and mental levels are better discussed as the mind-body problem, not the measurement problem.
The Measurement Problem
So what exactly is the "measurement problem?" For decoherence theorists, the unitary transformation of the Schrödinger equation cannot alter a superposition of microscopic states. Why then, when microscopic states are time-evolved into macroscopic ones, don't macroscopic superpositions emerge? According to H. D.
"Because of the dynamical superposition principle, an initial superposition Σ cn|n⟩ does not lead to definite pointer positions (with their empirically observed frequencies). If decoherence is neglected, one obtains their entangled superposition Σ cn|n⟩|Φn⟩, that is, a state that is different from all potential measurement outcomes."

And according to Erich Joos, another founder of decoherence:

"It remains unexplained why macro-objects come only in narrow wave packets, even though the superposition principle allows far more 'nonclassical' states (while micro-objects are usually found in energy eigenstates). Measurement-like processes would necessarily produce nonclassical macroscopic states as a consequence of the unitary Schrödinger dynamics. An example is the infamous Schrödinger cat, steered into a superposition of 'alive' and 'dead'."

The fact that we don't see superpositions of macroscopic objects is the "measurement problem," according to Zeh and Joos. An additional problem is that decoherence is a completely unitary process (Schrödinger dynamics), which implies time reversibility. What then do decoherence theorists see as the origin of irreversibility? Can we time reverse the decoherence process and see the quantum-to-classical transition reverse itself and recover the original coherent quantum world? To "relocalize" the superposition of the original system, we need only have complete control over the environmental interaction. This is of course not practical, just as Ludwig Boltzmann found in the case of Josef Loschmidt's reversibility objection. Does irreversibility in decoherence have the same rationale - "not possible for all practical purposes" - as in classical statistical mechanics?

According to more conventional thinkers, the measurement problem is the failure of the standard quantum mechanical formalism (the Schrödinger equation) to completely describe the nonunitary "collapse" process. Since the collapse is irreducibly indeterministic, the time of the collapse is completely unpredictable and unknowable. Indeterministic quantum jumps are one of the defining characteristics of quantum mechanics, both in the "old" quantum theory, where Bohr wanted radiation to be emitted and absorbed discontinuously when his atom jumped between stationary states, and in the modern standard theory with the Born-Jordan-Heisenberg-Dirac "projection postulate." To add new terms to the Schrödinger equation in order to control the time of collapse is to misunderstand the irreducible chance at the heart of quantum mechanics, as first seen clearly, in 1917, by Albert Einstein. When he derived his A and B coefficients for the emission and absorption of radiation, he found that an outgoing light particle must impart momentum hν/c to the atom or molecule, but the direction of the momentum cannot be predicted! Neither can the theory predict the time when the light quantum will be emitted. But the inability to predict both the time and direction of light particle emissions, said Einstein in 1917, is "a weakness in the theory..., that it leaves time and direction of elementary processes to chance (Zufall, ibid.)." It is only a weakness for Einstein, of course, because his God does not play dice. Decoherence theorists too appear to have what William James called an "antipathy to chance."

In the original "old" quantum mechanics, Niels Bohr made two assumptions. One was that atoms could only be found in what he called stationary energy states, later called eigenstates.
The second was that the observed spectral lines were discontinuous sudden transitions of the atom between the states. The emission or absorption of quanta of light, with energy equal to the energy difference between the states (or energy levels) and with frequency ν, was given by the formula E2 − E1 = hν, where h is Planck's constant, derived from his radiation law that quantized the allowed values of energy.

In the now standard quantum theory, formulated by Werner Heisenberg, Max Born, Pascual Jordan, Erwin Schrödinger, Paul Dirac, and others, three foundational assumptions were made: the principle of superposition, the axiom of measurement, and the projection postulate. Since decoherence challenges some of these ideas, we review the standard definitions.

The Principle of Superposition

The fundamental equation of motion in quantum mechanics is Schrödinger's famous wave equation that describes the evolution in time of his wave function ψ:

iħ ∂ψ/∂t = Hψ.

For a single particle in idealized complete isolation, and for a Hamiltonian H that does not involve magnetic fields, the Schrödinger equation is a unitary transformation that is time-reversible (the principle of microscopic reversibility).

Max Born interpreted the square of the absolute value of Schrödinger's wave function as providing the probability of finding a quantum system in a certain state ψn. The quantum (discrete) nature of physical systems results from there generally being a large number of solutions ψn (called eigenfunctions) of the Schrödinger equation in its time-independent form, Hψn = Enψn, with energy eigenvalues En. The discrete energy eigenvalues En limit interactions (for example, with photons) to the energy differences En − Em, as assumed by Bohr.

Eigenfunctions ψn are orthogonal to one another, ⟨ψn | ψm⟩ = δnm, where δnm is the Kronecker delta, equal to 1 when n = m, and 0 otherwise. When a state ψ is expanded in these eigenfunctions, the diagonal quantities Pn = |⟨ψn | ψ⟩|² are the Born rule probabilities, and to be meaningful as probabilities they must sum to 1: Σ Pn = Σ |⟨ψn | ψ⟩|² = 1. The off-diagonal terms in the matrix, ⟨ψn | ψm⟩, are interpretable as interference terms. When the matrix is used to calculate the expectation values of some quantum mechanical operator O, the off-diagonal terms ⟨ψn | O | ψm⟩ are interpretable as transition probabilities - the likelihood that the operator O will induce a transition from state ψn to ψm.

The Schrödinger equation is a linear equation. It has no quadratic or higher power terms, and this introduces a profound - and for many scientists and philosophers a disturbing - feature of quantum mechanics, one that is impossible in classical physics, namely the principle of superposition of quantum states. If ψa and ψb are both solutions of the equation, then an arbitrary linear combination of these, ψ = ca ψa + cb ψb, with complex coefficients ca and cb, is also a solution.

Together with Born's probabilistic interpretation of the wave function, the principle of superposition accounts for the major mysteries of quantum theory, some of which we hope to resolve, or at least reduce, with an objective (observer-independent) explanation of information creation during quantum processes (which can often be interpreted as measurements).

The Axiom of Measurement

The axiom of measurement depends on the idea of "observables," physical quantities that can be measured in experiments. A physical observable is represented as a Hermitian operator A that is self-adjoint (equal to its Hermitian conjugate, A† = A).
The diagonal elements ⟨ψn | A | ψn⟩ of the operator's matrix are interpreted as giving the expectation value for An (when we make a measurement). The off-diagonal n, m elements describe the uniquely quantum property of interference between wave functions and provide a measure of the probabilities for transitions between states n and m. It is these intrinsic quantum probabilities that provide the ultimate source of indeterminism, and consequently of irreducible irreversibility, as we shall see.

The axiom of measurement is then that a large number of measurements of the observable A, known to have eigenvalues An, will result in the number of measurements with value An being proportional to the probability of finding the system in eigenstate ψn with eigenvalue An.

The Projection Postulate

The third novel idea of quantum theory is often considered the most radical. It has certainly produced some of the most radical ideas ever to appear in physics, in attempts to deny it (as the decoherence program appears to do, as do also Everett relative-state interpretations, many worlds theories, and Bohm-de Broglie pilot waves). The projection postulate is actually very simple, and arguably intuitive as well. It says that when a measurement is made, the system of interest will be found in one of the possible eigenstates of the measured observable. We have several possible alternatives for eigenvalues. Measurement simply makes one of these actual, and it does so, said Max Born, in proportion to the absolute square of the probability amplitude wave function ψn. In this way, ontological chance enters physics, and it is partly this fact of quantum randomness that bothered Albert Einstein ("God does not play dice") and Schrödinger (whose equation of motion is deterministic). When Einstein derived the expressions for the probabilities of emission and absorption of photons in 1917, he lamented that the theory seemed to indicate that the direction of an emitted photon was a matter of pure chance (Zufall), and that the time of emission was also statistical and random, just as Rutherford had found for the time of decay of a radioactive nucleus. Einstein called it a "weakness in the theory."
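The three postulates just reviewed are easy to demonstrate numerically. Here is a minimal Python sketch (the three-state observable and the superposition are invented purely for illustration): it builds a Hermitian A, checks that the Born probabilities sum to 1, and performs one projection-postulate "collapse".

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy observable: a random 3x3 Hermitian matrix A (A† = A).
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2

# Eigenvalues A_n and orthonormal eigenstates |psi_n> of the observable.
eigenvalues, eigenstates = np.linalg.eigh(A)   # columns are eigenvectors

# An arbitrary normalized superposition psi = sum_n c_n |psi_n>.
psi = np.array([1.0, 1.0j, -0.5])
psi /= np.linalg.norm(psi)

# Born rule: P_n = |<psi_n|psi>|^2, and the P_n sum to 1.
c = eigenstates.conj().T @ psi
P = np.abs(c) ** 2
assert np.isclose(P.sum(), 1.0)

# Projection postulate: a measurement actualizes one eigenstate, chosen
# at random with the Born probabilities; the state "collapses" onto it.
n = rng.choice(len(P), p=P)
psi_after = eigenstates[:, n]
print("measured eigenvalue:", eigenvalues[n].round(3))
print("Born probabilities: ", P.round(3))
```

Repeating the last step many times reproduces the frequencies Pn, which is exactly the content of the axiom of measurement.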
What Decoherence Gets Right

Allowing the environment to interact with a quantum system, for example by the scattering of low-energy thermal photons or high-energy cosmic rays, or by collisions with air molecules, surely will suppress quantum interference in an otherwise isolated experiment. But this is because large numbers of uncorrelated (incoherent) quantum events will "average out" and mask the quantum phenomena. It does not mean that wave functions are not collapsing. They are, at every particle interaction. Decoherence advocates describe the environmental interaction as "monitoring" of the system by continuous "measurements." Decoherence theorists are correct that every collision between particles entangles their wave functions, at least for the short time before decoherence suppresses any coherent interference effects of that entanglement. But in what sense is a collision a "measurement"? At best, it is a "pre-measurement." It changes the information present in the wave functions before the collision. But the new information may not be recorded anywhere (other than being implicit in the state of the system). All interactions change the state of a system of interest, but not all leave the "pointer state" of some measuring apparatus with new information about the state of the system.

So environmental monitoring, in the form of continuous collisions by other particles, is changing the specific information content of the system, the environment, and a measuring apparatus (if there is one). But if there is no recording of new information (negative entropy created locally), the system and the environment may be in thermodynamic equilibrium. Equilibrium does not mean that decoherence monitoring of every particle is not continuing. It is. There is no such thing as a "closed system." Environmental interaction is always present.

If a gas of particles is not already in equilibrium, it may be approaching thermal equilibrium. This happens when any non-equilibrium initial conditions (Zeh calls these a "conspiracy") are being "forgotten" by erasure of path information during collisions. Information about initial conditions is implicit in the paths of all the particles. This means that, in principle, the paths could be reversed to return to the initial, lower entropy, conditions (the Loschmidt paradox). Erasure of path information could be caused by quantum particle-particle scattering (our standard view) or by decoherence "monitoring." How are these two related?

The Two Steps Needed in a Measurement that Creates New Information

More than the assumed collapse of the wave function (von Neumann's Process 1, Pauli's measurement of the first kind) is needed. Indelibly recorded information, available for "observations" by a scientist, must also satisfy the second requirement for the creation of new information in the universe. Everything created since the origin of the universe over ten billion years ago has involved just two fundamental physical processes that combine to form the core of all creative processes. These two steps occur whenever even a single bit of new information is created and survives in the universe.

• Step 1: A quantum process - the "collapse of the wave function." If the probability amplitude wave function did not collapse, unitary evolution would simply preserve the initial information.
• Step 2: A thermodynamic process - the indelible recording of the new information, which requires that entropy greater than the negative entropy equivalent of the new information be transferred away from the recording apparatus, to satisfy the second law of thermodynamics.

The two physical processes in the creative process, quantum physics and thermodynamics, are somewhat daunting subjects for philosophers, and even for many scientists, including decoherence advocates.

Quantum Level Interactions Do Not Create Lasting Information

The overwhelming number of collisions of microscopic particles like electrons, photons, atoms, molecules, etc., do not result in observable information about the collisions. The lack of observations and observers does not mean that there have been no "collapses" of wave functions. The idea that the time evolution of the deterministic Schrödinger equation continues forever in a unitary transformation that leaves the wave function of the whole universe undecided and in principle reversible at any time is an absurd and unjustified extrapolation from the behavior of the ideal case of a single perfectly isolated particle. The principle of microscopic reversibility applies only to such an isolated particle, something unrealizable in nature, as the decoherence advocates know with their addition of environmental "monitoring." Experimental physicists can isolate systems from the environment enough to "see" the quantum interference (but again, only in the statistical results of large numbers of identical experiments).

The Emergence of the Classical World

In the standard quantum view, the emergence of macroscopic objects with classical behavior arises statistically for two reasons involving large numbers:
1. The law of large numbers (from probability and statistics).
• When a large number of material particles is aggregated, properties emerge that are not seen in individual microscopic particles. These properties include ponderable mass, solidity, classical laws of motion, gravity orbits, etc.
• When a large number of quanta of energy (photons) are aggregated, properties emerge that are not seen in individual light quanta. These properties include continuous radiation fields with wavelike interference.
2. The law of large quantum numbers (the Bohr correspondence principle).

Decoherence as "Interpreted" by Standard Quantum Mechanics

Can we explain the following in terms of standard quantum mechanics?
1. the decoherence of quantum interference effects by the environment
2. the measurement problem, viz., the absence of macroscopic superpositions of states
3. the emergence of "classical" adequately determined macroscopic objects
4. the logical compatibility and consistency of two dynamical laws - the unitary transformation and the "collapse" of the wave function
5. the entanglement of "distant" particles and the appearance of "nonlocal" effects such as those in the Einstein-Podolsky-Rosen experiment

Let's consider these point by point.

1. The standard explanation for the decoherence of quantum interference effects by the environment is that when a quantum system interacts with the very large number of quantum systems in a macroscopic object, the averaging over independent phases cancels out (decoheres) coherent interference effects.

2. In order to study interference effects, a quantum system is isolated from the environment as much as possible. Even then, note that microscopic interference is never "seen" directly by an observer. It is inferred from probabilistic theories that explain the statistical results of many identical experiments. Individual particles are never "seen" as superpositions of particles in different states. When a particle is seen, it is always the whole particle and nothing but the particle. The absence of macroscopic superpositions of states, such as the infamous linear superposition of live and dead Schrödinger cats, is therefore no surprise.

3. The standard quantum-mechanical explanation for the emergence of "classical" adequately determined macroscopic objects is that they result from a combination of a) Bohr's correspondence principle in the case of large quantum numbers, together with b) the familiar law of large numbers in probability theory, and c) the averaging over the phases described in point 1. Heisenberg indeterminacy relations still apply, but the individual particles' indeterminacies average out, and the remaining macroscopic indeterminacy is practically unmeasurable.

4. Perhaps the two dynamical laws would be inconsistent if applied to the same thing at exactly the same time. But the "collapse" of the wave function (von Neumann's Process 1, Pauli's measurement of the first kind) and the unitary transformation that describes the deterministic evolution of the probability amplitude wave function (von Neumann's Process 2) are used in a temporal sequence: first a wave of possibilities, then an actual particle. The first process describes what happens when quantum systems interact, in a collision or a measurement, when they become indeterministically entangled. The second then describes their deterministic evolution (while isolated) along their mean free paths to the next collision or interaction.
One dynamical law applies to the particle picture, the other to the wave picture.

5. The paradoxical appearance of nonlocal "influences" of one particle on an entangled distant particle, at velocities greater than light speed, is a consequence of a poor understanding of both the wave and particle aspects of quantum systems. The confusion usually begins with a statement such as "consider a particle A here and a distant particle B there." When entangled in a two-particle probability amplitude wave function, the two identical particles are "neither here nor there," just as the single particle in a two-slit experiment does not "go through" the slits. It is the single-particle probability amplitude wave that must "go through" both slits if it is to interfere. For a two-particle probability amplitude wave that starts its deterministic time evolution when the two identical particles are produced, it is only the probability of finding the particles that evolves according to the unitary transformation of the Schrödinger wave equation. It says nothing about where the particles "are." Now if and when a particle is measured somewhere, we can then label it particle A. Conservation of energy and momentum tells us immediately that the other identical particle is now symmetrically located on the other side of the central source of particles. If the particles are electrons (as in David Bohm's version of EPR), conservation of spin tells us that the now distant particle B must have its spin opposite to that of particle A if they were produced with a total spin of zero. Nothing is sent from particle A to B. The deduced properties are the consequence of conservation laws that are true for much deeper reasons than the puzzles of nonlocal entanglement. The mysterious instantaneous values for the properties present exactly the same mystery that bothered Einstein about a single-particle wave function having values all over a photographic screen at one instant, then having values only at the position of the located particle in the next instant, apparently violating special relativity.

To summarize: Decoherence by interactions with the environment can be explained perfectly by multiple "collapses" of the probability amplitude wave function during interactions with environment particles. Microscopic interference is never "seen" directly by an observer; therefore we do not expect ever to "see" macroscopic superpositions of live and dead cats. The "transition from quantum to classical" systems is the consequence of laws of large numbers. The quantum dynamical laws necessarily include two phases, one needed to describe the continuous deterministic motions of probability amplitude waves and the other the discontinuous indeterministic motions of physical particles. The mysteries of nonlocality and entanglement are no different from those of standard quantum mechanics as seen in the two-slit experiment. It is just that we now have two identical particles and their wave functions are nonseparable.
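The phase-averaging explanation in point 1 above can be demonstrated in a few lines of Python. In this minimal toy model (the ensemble of random phase kicks stands in for uncontrolled environmental collisions; the two-state superposition is an illustrative assumption), the off-diagonal interference terms of the density matrix wash out while the outcome statistics survive:

```python
import numpy as np

rng = np.random.default_rng(1)

# Superposition (|0> + |1>)/sqrt(2); its density matrix has
# off-diagonal "interference" terms of magnitude 1/2.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.round(np.outer(psi, psi.conj()), 3))

# Each uncontrolled environmental interaction imprints a random
# relative phase theta: |0> + e^{i theta}|1>.
thetas = rng.uniform(0, 2 * np.pi, size=50_000)
rhos = []
for theta in thetas:
    phi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    rhos.append(np.outer(phi, phi.conj()))   # pure-state density matrix

rho_avg = np.mean(rhos, axis=0)
print(np.round(rho_avg, 3))
# Diagonal populations stay at 1/2, while the off-diagonal (coherence)
# terms average to ~0: interference is decohered, whether or not any
# individual phase kick is ever "observed".
```

Nothing in the sketch needs an observer: the coherences vanish as soon as the phases are uncorrelated, which is the sense in which environmental "noise" decoheres.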
I'm an aspiring physicist who wants to self study some Quantum Physics. My thirst for knowledge is unquenchable and I can not wait 2 more years until I get my first quantum physics class in university, so I want to start with a self study. I am enrolled in a grammar school and the most gifted among the gifted (not my description, mind you, I hate coming off as cocky, sorry) are enrolled in a special 'project'. We are allowed to take 3 school hours a week off in order to work on a project, which can be about anything you want, from music to mathematics. On the 4th of April we have to present our projects. Last year an acquaintance of mine did it about university level mathematics, so I thought, why not do it about university level physics? It is now the 3rd of October so I have half a year.

My question is, where can I conduct a self study of quantum physics? Starting from scratch? And is it possible for me to be able to use and understand the Schrödinger equation by April? What are good books, sites, etc. that can help me? My goal is to have a working knowledge of BASIC quantum physics and I would like to understand and be able to use the Schrödinger equation. Is this possible? What is needed for these goals?

Do you have any experience with linear algebra, calculus or differential equations? – DJBunk Oct 3 '12 at 15:58
None with linear algebra, but I do with calculus. – kamal Oct 3 '12 at 16:03
I would say it depends on how ambitious you are in general learning a subject, but I really doubt 3 hours a week will do it. With some effort you might be able to learn some neat qualitative things, but I highly doubt you will be solving the Schrodinger eqn etc by April. I suggest doing something more specific like learning about things like the double slit experiment and the photoelectric effect. Those types of things you can start with Wikipedia to see if it interests. Don't let me discourage you though! – DJBunk Oct 3 '12 at 16:14
Of course those 3 hours a week are only during school time, I expect to spend around ~10 hours a week for this, some weeks more and some less, but at least 10 hours, that I know. I already have a working knowledge of the double slit experiment and photoelectric effect, so I think I am ready for the next step (although I am not certain what that might be). – kamal Oct 3 '12 at 16:17

4 Answers

Just pick up Dirac's book "The Principles of Quantum Mechanics" and read it in conjunction with "The Feynman Lectures on Physics Vol III". Don't waste time with linear algebra, the entire content of the undergraduate courses can be learned in half a day. Don't worry about the infinite dimensional nature of the thing, just reduce all the spaces to finite dimensions. Also, be aware that "gifted" is a political label that has nothing to do with you, it's just a way for schools to segregate students by their future social class. It's not the analog of special needs, because the students in gifted classes are no different from the students in usual classes, except that they are given a slightly better education. Don't be fooled by a label into thinking you are somehow special, everyone is ordinary, including Einstein and Dirac. One has to do good work despite this, and those folks show it is possible by assiduous effort.

Trouble is you're seeing things from the way you did things, and not how they can be done today using what's available.
Have you seen Susskind's QM video lectures for example? Don't you think watching videos while taking notes is more productive? I'm with you and Howard Gardner on "giftedness" – Larry Harson Oct 4 '12 at 2:13
@LarryHarson: I agree that I'm out of date, but it cannot be overemphasized how important it is to read the classics. Dirac's book is timeless, it is lucid, it is brief, it starts with first principles, and its mathematics is self contained. Its path of development is unique and very illuminating, being independent of both Schrodinger and Bohr. Susskind's videos I am sure are excellent, but I have a soft spot for Dirac, who was one of my closest friends throughout adolescence. As for giftedness, it is worst for the "gifted", who are made cocky and incapable of the humility required for study – Ron Maimon Oct 4 '12 at 3:04
I wonder how you think that "Don't worry about the infinite dimensional nature of the thing, just reduce all the spaces to finite dimensions" can be done without some understanding of linear algebra.... – Arnold Neumaier Oct 4 '12 at 15:05
@ArnoldNeumaier: Because I didn't study linear algebra and I read Dirac and had no trouble. – Ron Maimon Oct 4 '12 at 16:09
@kamal: Yes, it's a waste of time, but it was always a waste of time, it was a high-class marker to know Latin (you must be living in some former European colony to have such an education, class-markers were very important under colonialism). High-class markers (King's English, Queen's accent, a Rolex, high-status position) are always extremely time-consuming to acquire (or else they wouldn't work to mark high-classes), and this is why science is always done by low-class people who hate Latin and dress like slobs. The ancient stuff can be useful for Marlowe/Shakespeare, that's about all. – Ron Maimon Oct 27 '12 at 12:43

Without having understood matrices and their interpretation as linear mappings (operators) it is very difficult to get a reasonable understanding of quantum mechanics. So you should spend some time on elementary linear algebra. Wikipedia is not bad on this, so you could pick up most from there. (To start with. For basic math, Wikipedia is almost completely reliable, which is not the case for more specialized topics. In case of doubt, cross check with other sources.)

Today, the shortest road to quantum mechanics is probably quantum information theory. For online introductory lecture notes see, e.g.,
The following lecture notes start from scratch (use Wikipedia for the math not explained there):
This one might also be useful:

In quantum information theory, all Hilbert spaces are finite-dimensional, wave functions are just complex vectors, and the Schroedinger equation is just a linear differential equation with constant coefficients. So you also need to learn a little bit about ordinary differential equations and how linear systems behave. Again, this can be picked up from Wikipedia.

In more traditional quantum mechanics, the Schroedinger equation is a partial differential equation, and wave functions are complex functions depending on one or more position coordinates. On this level, you need to understand what partial derivatives are and have some knowledge about Fourier transforms. Again, this can be picked up from Wikipedia.

Then you might start with
You may also wish to try my online book http://lanl.arxiv.org/abs/0810.1019 It assumes some familiarity with linear algebra and with partial derivatives, but little else.
Some basic questions are also answered in my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html

+1 these are nice sources if you get stuck on linear algebra, but I never got stuck on the linear algebra, rather the sticking points were the partial differential equations and the path integral. – Ron Maimon Oct 4 '12 at 18:17
@RonMaimon: kamal doesn't want to understand the path integral by April. And one needs very little from PDE as long as one doesn't want to solve numerically a real problem. Thus if he has no trouble with the linear algebra and with Fourier transforms, he'll have no trouble at all! – Arnold Neumaier Oct 4 '12 at 18:20
He should be more ambitious then--- the speed with which one can self study has increased tenfold in the last decade. – Ron Maimon Oct 4 '12 at 18:21
@RonMaimon What would you suggest being good for me to set as a goal? You seem like a very informed man and I would like to ask for your personal advice. Of course I am also busy with sports, and I'm starting to learn LaTeX, so I'd say I spend 10 hours a week on this. – kamal Oct 27 '12 at 12:18
@kamal: The only goal is to understand what has been done and push it forward, like everyone else tries to do. For this, you can follow a sequence more or less like Dirac/Feynman/Onsager/Landau/Gell-Mann/Anderson/Mandelstam/Polyakov/Parisi/'t Hooft/Scherk/Schwarz/Susskind/Witten (with about two dozen more authors I left out, sorry). I gave a simple but flashy thing which can be tackled after understanding basic QM here: physics.stackexchange.com/questions/41780/… (your question). Maybe read Nielson and Chuang, learn complexity classes. – Ron Maimon Oct 27 '12 at 12:31

You can watch videos from here and lectures from here (first two at least).

If you want to understand quantum physics, you have to understand Fourier series and Fourier transforms. The best introductory text ever is the book Who Is Fourier?. Do not be fooled by its cartoonish appearance; this is a serious book, as can be demonstrated by the fact that the name at the top of the list of advisers is Yoichiro Nambu, the 2008 Nobel prize co-winner.

Then I would work to gain an understanding of the Heat Equation. The Schrodinger equation can be described as the quantum version of the heat equation (except that what is diffusing is probability). Fourier developed the Fourier series in order to solve the question of how heat diffuses in a material. If you understand these things, you can understand quantum mechanics within a few months.

For fourier analysis, Koerner is a great source, with both accurate historical material and fascinating applications, including primes in arithmetic progression and an alternate RW proof of Picard's theorem: amazon.com/Fourier-Analysis-T-246-rner/dp/0521389917. I didn't read the cartoon book, but I doubt it has the same depth as Koerner, which is one of the great pedagogical mathematics books, along with Davenport's number theory. These were thankfully used by the mathematics professors I had as an undergraduate, and they were very good folks. – Ron Maimon Oct 4 '12 at 18:19
@RonMaimon Thanks, I will see if I can pick up a copy, it looks pretty cool from the excerpts on amazon – Hal Swyers Oct 4 '12 at 18:30
@Hal Swyers thank you for giving insight into the importance of the heat equation in understanding the Schrodinger equation.
I wish I could get a free e-copy of this book "Who Is Fourier?"; else I will try buying it. – baalkikhaal Oct 27 '12 at 10:33
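The "quantum version of the heat equation" remark in the last answer can be made concrete with a short Python sketch (the grid, the harmonic potential, and units with ħ = m = 1 are arbitrary demo choices): one Hamiltonian is built and propagated both ways, and only a factor of i separates the two evolutions.

```python
import numpy as np

# Units with hbar = m = 1; a 1D grid and a harmonic trap (arbitrary demo choices).
N, L, t = 400, 20.0, 1.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# Hamiltonian H = -(1/2) d^2/dx^2 + V(x), with a finite-difference Laplacian.
lap = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(V)

E, U = np.linalg.eigh(H)                # eigenvalues/eigenvectors of H

psi0 = np.exp(-(x - 2.0) ** 2)          # initial wave packet, then normalize
psi0 = psi0 / np.sqrt(np.sum(psi0**2) * dx)
c = U.T @ psi0                          # expand in eigenstates of H

# Heat equation   d(psi)/dt = -H psi : coefficients decay as e^{-E t}.
# Schrödinger   i d(psi)/dt =  H psi : the SAME coefficients rotate as e^{-i E t}.
psi_heat = U @ (np.exp(-E * t) * c)
psi_qm = U @ (np.exp(-1j * E * t) * c)

print("heat: norm decays ->", np.sum(np.abs(psi_heat) ** 2) * dx)
print("QM:   norm is kept ->", np.sum(np.abs(psi_qm) ** 2) * dx)
```

Replacing t by an imaginary time turns one evolution into the other, which is exactly the analogy the answer describes; it is also why the Fourier methods recommended above work for both equations.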
Bose–Einstein condensate

[Figure: Schematic Bose–Einstein condensation versus temperature, with the energy diagram.]

A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of bosons cooled to temperatures very close to absolute zero (that is, very near 0 K or −273.15 °C[1]). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which point macroscopic quantum phenomena become apparent. This state was first predicted, generally, in 1924–25 by Satyendra Nath Bose and Albert Einstein.

[Figure: Velocity-distribution data (3 views) for a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate. Left: just before the appearance of a Bose–Einstein condensate. Center: just after the appearance of the condensate. Right: after further evaporation, leaving a sample of nearly pure condensate.]

Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons). Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it. (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.[2]) Einstein then extended Bose's ideas to matter in two other papers.[3] The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons, which include the photon as well as atoms such as helium-4 (4He), are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter.

In 1938 Fritz London proposed BEC as a mechanism for superfluidity in 4He and superconductivity.[4][5]

In 1995 the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvin (nK).[6] Shortly thereafter, Wolfgang Ketterle at MIT demonstrated important BEC properties. For their achievements Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics.[7] Many isotopes were soon condensed, then molecules, quasi-particles, and photons in 2010.[8]

Critical temperature

This transition to BEC occurs below a critical temperature, which for a uniform three-dimensional gas consisting of non-interacting particles with no apparent internal degrees of freedom is given by:

T_c = (n/ζ(3/2))^(2/3) · 2πħ²/(m k_B) ≈ 3.3125 ħ² n^(2/3)/(m k_B),

where T_c is the critical temperature, n is the particle density, m is the mass per boson, ħ is the reduced Planck constant, k_B is the Boltzmann constant, and ζ is the Riemann zeta function; ζ(3/2) ≈ 2.6124.[9] Interactions shift the value, and the corrections can be calculated by mean-field theory.

Einstein's non-interacting gas

Consider a collection of N noninteracting particles, which can each be in one of two quantum states, |0⟩ and |1⟩. If the two states are equal in energy, each different configuration is equally likely.
If we can tell which particle is which, there are 2^N different configurations, since each particle can be in |0⟩ or |1⟩ independently. In almost all of the configurations, about half the particles are in |0⟩ and the other half in |1⟩. The balance is a statistical effect: the number of configurations is largest when the particles are divided equally.

If the particles are indistinguishable, however, there are only N+1 different configurations. If there are K particles in state |1⟩, there are N − K particles in state |0⟩. Whether any particular particle is in state |0⟩ or in state |1⟩ cannot be determined, so each value of K determines a unique quantum state for the whole system.

Suppose now that the energy of state |1⟩ is slightly greater than the energy of state |0⟩ by an amount E. At temperature T, a particle will have a lesser probability to be in state |1⟩ by the Boltzmann factor e^(−E/T) (here and below, temperature is measured in energy units, i.e. k_B = 1). In the distinguishable case, the particle distribution will be biased slightly towards state |0⟩. But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most-likely outcome is that most of the particles will collapse into state |0⟩.

In the distinguishable case, for large N, the fraction in state |0⟩ can be computed. It is the same as flipping a coin with probability proportional to p = exp(−E/T) to land tails. In the indistinguishable case, each value of K is a single state, which has its own separate Boltzmann probability. So the probability distribution is exponential:

P(K) = C e^(−KE/T) = C p^K.

For large N, the normalization constant C is (1 − p). The expected total number of particles not in the lowest energy state, in the limit that N → ∞, is equal to Σ_{n>0} C n p^n = p/(1 − p). It does not grow when N is large; it just approaches a constant. This will be a negligible fraction of the total number of particles. So a collection of enough Bose particles in thermal equilibrium will mostly be in the ground state, with only a few in any excited state, no matter how small the energy difference.

Consider now a gas of particles, which can be in different momentum states labeled |k⟩. If the number of particles is less than the number of thermally accessible states, for high temperatures and low densities, the particles will all be in different states. In this limit, the gas is classical. As the density increases or the temperature decreases, the number of accessible states per particle becomes smaller, and at some point, more particles will be forced into a single state than the maximum allowed for that state by statistical weighting. From this point on, any extra particle added will go into the ground state.

To calculate the transition temperature at any density, integrate, over all momentum states, the expression for the maximum number of excited particles, p/(1 − p), with p(k) = e^(−k²/2mT):

N = V ∫ d³k/(2π)³ · p(k)/(1 − p(k)) = V ∫ d³k/(2π)³ · 1/(e^(k²/2mT) − 1).

When the integral is evaluated with factors of k_B and ħ restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of negligible chemical potential. In the Bose–Einstein statistics distribution, μ is actually still nonzero for BECs; however, μ is less than the ground state energy. Except when specifically talking about the ground state, μ can be approximated for most energy or momentum states as μ ≈ 0.
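As a quick numerical check of the critical temperature formula, the Python sketch below evaluates T_c for a dilute rubidium-87 gas (the density of 10^20 m^-3 is an assumed, typical value for such experiments):

```python
import math

hbar = 1.054_571_8e-34   # reduced Planck constant (J s)
k_B = 1.380_649e-23      # Boltzmann constant (J/K)
zeta_3_2 = 2.6124        # Riemann zeta(3/2)

m = 87 * 1.660_539e-27   # mass of a Rb-87 atom (kg)
n = 1e20                 # particle density (1/m^3), an assumed typical value

# T_c = (n / zeta(3/2))^(2/3) * 2*pi*hbar^2 / (m * k_B)
T_c = (n / zeta_3_2) ** (2 / 3) * 2 * math.pi * hbar**2 / (m * k_B)
print(f"T_c ~ {T_c * 1e9:.0f} nK")   # on the order of a few hundred nanokelvin
```

The sub-microkelvin result makes plain why condensation was not observed until laser cooling and evaporative cooling matured.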
Bogoliubov theory for weakly interacting gas

Bogoliubov considered perturbations on the limit of dilute gas,[10] finding a finite pressure at zero temperature and positive chemical potential. This leads to corrections for the ground state. The Bogoliubov state has pressure (at T = 0): P = (g/2) n². The original interacting system can be converted to a system of non-interacting particles with a dispersion law.

Gross–Pitaevskii equation

In some of the simplest cases, the state of condensed particles can be described with a nonlinear Schrödinger equation, also known as the Gross–Pitaevskii or Ginzburg–Landau equation. The validity of this approach is actually limited to the case of ultracold temperatures, which fits most alkali-atom experiments well. This approach originates from the assumption that the state of the BEC can be described by the unique wavefunction of the condensate ψ(r). For a system of this nature, |ψ(r)|² is interpreted as the particle density, so the total number of atoms is

N = ∫ dr |ψ(r)|².

Provided essentially all atoms are in the condensate (that is, have condensed to the ground state), and treating the bosons using mean field theory, the energy (E) associated with the state ψ(r) is [the explicit functional was lost in extraction; this is the standard Gross–Pitaevskii form]:

E = ∫ dr [ (ħ²/2m) |∇ψ(r)|² + V(r) |ψ(r)|² + (1/2) U₀ |ψ(r)|⁴ ].

Minimizing this energy with respect to infinitesimal variations in ψ(r), and holding the number of atoms constant, yields the Gross–Pitaevskii equation (GPE) (also a non-linear Schrödinger equation):

iħ ∂ψ(r)/∂t = ( −ħ²∇²/2m + V(r) + U₀ |ψ(r)|² ) ψ(r),

where m is the mass of the bosons, V(r) is the external potential, and U₀ is representative of the inter-particle interactions.

In the case of zero external potential, the dispersion law of interacting Bose–Einstein-condensed particles is given by the so-called Bogoliubov spectrum (for T = 0):

ω_p = sqrt[ (p²/2m) (p²/2m + 2 U₀ n₀) ].

The Gross–Pitaevskii equation (GPE) provides a relatively good description of the behavior of atomic BECs. However, the GPE does not take into account the temperature dependence of dynamical variables, and is therefore valid only for T = 0. It is not applicable, for example, to the condensates of excitons, magnons and photons, where the critical temperature can be up to room temperature.

Weaknesses of the Gross–Pitaevskii model

The Gross–Pitaevskii model of BEC is a physical approximation valid for certain classes of BECs. By construction, the GPE uses the following simplifications: it assumes that interactions between condensate particles are of the contact two-body type and also neglects anomalous contributions to self-energy.[11] These assumptions are suitable mostly for dilute three-dimensional condensates. If one relaxes any of these assumptions, the equation for the condensate wavefunction acquires terms containing higher-order powers of the wavefunction. Moreover, for some physical systems the number of such terms turns out to be infinite; therefore, the equation becomes essentially non-polynomial.
The examples where this could happen are the Bose–Fermi composite condensates,[12][13][14][15] effectively lower-dimensional condensates,[16] and dense condensates and superfluid clusters and droplets.[17] However, it is clear that in the general case the behaviour of a Bose–Einstein condensate can be described by coupled evolution equations for the condensate density, the superfluid velocity, and the distribution function of elementary excitations. This problem was solved in 1977 by Peletminskii et al. in a microscopical approach. The Peletminskii equations are valid for any finite temperatures below the critical point. Years later, in 1985, Kirkpatrick and Dorfman obtained similar equations using another microscopical approach. The Peletminskii equations also reproduce the Khalatnikov hydrodynamical equations for superfluids as a limiting case.

Superfluidity of BEC and Landau criterion

The phenomena of superfluidity of a Bose gas and superconductivity of a strongly-correlated Fermi gas (a gas of Cooper pairs) are tightly connected to Bose–Einstein condensation. Under corresponding conditions, below the temperature of the phase transition, these phenomena were observed in helium-4 and in different classes of superconductors. In this sense, superconductivity is often called the superfluidity of a Fermi gas. In the simplest form, the origin of superfluidity can be seen from the weakly interacting bosons model.
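That last remark can be made quantitative with the Bogoliubov spectrum from the previous section: ω_p/p tends to a nonzero sound speed as p → 0, so the Landau criterion gives a finite critical velocity below which excitations cannot be created. A Python sketch in reduced units (m = U₀ = n₀ = 1 is an arbitrary choice made for illustration):

```python
import numpy as np

# Reduced units m = U0 = n0 = 1 (arbitrary demo choice).
m, U0, n0 = 1.0, 1.0, 1.0
p = np.linspace(1e-4, 5.0, 2000)        # momentum grid (assumed range)

# Bogoliubov dispersion: omega_p = sqrt( (p^2/2m) (p^2/2m + 2 U0 n0) )
eps = p**2 / (2 * m)
omega = np.sqrt(eps * (eps + 2 * U0 * n0))

# Landau critical velocity: v_c = min over p of omega_p / p.
v_c = np.min(omega / p)
c_sound = np.sqrt(U0 * n0 / m)          # small-p slope: the speed of sound
print(f"v_c ~ {v_c:.4f}, sound speed = {c_sound:.4f}")
# For a free particle (U0 = 0), omega/p -> 0 as p -> 0, so v_c = 0
# and there is no superfluidity: the interaction term is essential.
```

The printed critical velocity coincides with the sound speed, which is the standard weakly-interacting-gas statement of the Landau criterion.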
Experimental observation

Superfluid He-4

In 1938, Pyotr Kapitsa, John Allen and Don Misener discovered that helium-4 became a new kind of fluid, now known as a superfluid, at temperatures less than 2.17 K (the lambda point). Superfluid helium has many unusual properties, including zero viscosity (the ability to flow without dissipating energy) and the existence of quantized vortices. It was quickly believed that the superfluidity was due to partial Bose–Einstein condensation of the liquid. In fact, many properties of superfluid helium also appear in the gaseous condensates created by Cornell, Wieman and Ketterle (see below). Superfluid helium-4 is a liquid rather than a gas, which means that the interactions between the atoms are relatively strong; the original theory of Bose–Einstein condensation must be heavily modified in order to describe it. Bose–Einstein condensation remains, however, fundamental to the superfluid properties of helium-4. Note that helium-3, a fermion, also enters a superfluid phase at low temperature, which can be explained by the formation of bosonic Cooper pairs of two atoms (see also fermionic condensate).

The first "pure" Bose–Einstein condensate was created by Eric Cornell, Carl Wieman, and co-workers at JILA on 5 June 1995. They cooled a dilute vapor of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling (a technique that won its inventors Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips the 1997 Nobel Prize in Physics) and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT condensed sodium-23. Ketterle's condensate had a hundred times more atoms, allowing important results such as the observation of quantum mechanical interference between two different condensates. Cornell, Wieman and Ketterle won the 2001 Nobel Prize in Physics for their achievements.[18]

A group led by Randall Hulet at Rice University announced a condensate of lithium atoms only one month following the JILA work.[19] Lithium has attractive interactions, causing the condensate to be unstable and to collapse for all but a few atoms. Hulet's team subsequently showed that the condensate could be stabilized by confinement quantum pressure for up to about 1000 atoms. Various isotopes have since been condensed.

Velocity-distribution data graph

In the image accompanying this article, the velocity-distribution data indicates the formation of a Bose–Einstein condensate out of a gas of rubidium atoms. The false colors indicate the number of atoms at each velocity, with red being the fewest and white being the most. The areas appearing white and light blue are at the lowest velocities. The peak is not infinitely narrow because of the Heisenberg uncertainty principle: spatially confined atoms have a minimum-width velocity distribution. This width is given by the curvature of the magnetic potential in the given direction. More tightly confined directions have bigger widths in the ballistic velocity distribution. This anisotropy of the peak on the right is a purely quantum-mechanical effect and does not exist in the thermal distribution on the left. This graph served as the cover design for the 1999 textbook Thermal Physics by Ralph Baierlein.[20]

Bose–Einstein condensation also applies to quasiparticles in solids. Magnons, excitons, and polaritons have integer spin and form condensates. Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. In 1999 condensation was demonstrated in antiferromagnetic TlCuCl3,[21] at temperatures as large as 14 K. The high transition temperature (relative to atomic gases) is due to the magnons' small mass (near that of an electron) and the greater achievable density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature,[22][23] with optical pumping.

Excitons, electron-hole pairs, were predicted to condense at low temperature and high density by Boer et al. in 1961. Bilayer system experiments first demonstrated condensation in 2003, by the disappearance of the Hall voltage. Fast optical exciton creation was used to form condensates in sub-Kelvin Cu2O from 2005 on. Polariton condensation was detected in a 5 K quantum well microcavity.

Peculiar Properties

As in many other systems, vortices can exist in BECs. These can be created, for example, by "stirring" the condensate with lasers, or by rotating the confining trap. The vortex created will be a quantum vortex. These phenomena are allowed for by the nonlinear |ψ(r)|² term in the GPE. As the vortices must have quantized angular momentum, the wavefunction may have the form ψ(r) = φ(ρ,z) e^(iℓθ), where ρ, z and θ are as in the cylindrical coordinate system, and ℓ is the angular number. This is particularly likely for an axially symmetric (for instance, harmonic) confining potential, which is commonly used. The notion is easily generalized. To determine φ(ρ,z), the energy of ψ(r) must be minimized, according to the constraint ψ(r) = φ(ρ,z) e^(iℓθ).
This is usually done computationally; however, in a uniform medium the analytic form

φ = n x / sqrt(2 + x²),

where n² is the density far from the vortex, x = ρ/(ℓξ), and ξ is the healing length of the condensate, demonstrates the correct behavior and is a good approximation.

A singly charged vortex (ℓ = 1) is in the ground state, with its energy ε_v given by [the logarithmic factor was lost in extraction; this is the standard form]

ε_v = π n (ħ²/m) ln(1.464 b/ξ),

where b is the farthest distance from the vortex considered. (To obtain an energy which is well defined it is necessary to include this boundary b.) For multiply charged vortices (ℓ > 1) the energy is approximated by

ε_v ≈ ℓ² π n (ħ²/m) ln(b/ξ),

which is greater than that of ℓ singly charged vortices, indicating that these multiply charged vortices are unstable to decay. Research has, however, indicated that they are metastable states, so they may have relatively long lifetimes.

Closely related to the creation of vortices in BECs is the generation of so-called dark solitons in one-dimensional BECs. These topological objects feature a phase gradient across their nodal plane, which stabilizes their shape even in propagation and interaction. Although solitons carry no charge and are thus prone to decay, relatively long-lived dark solitons have been produced and studied extensively.[24]

Attractive interactions

Experiments led by Randall Hulet at Rice University from 1995 through 2000 showed that lithium condensates with attractive interactions could stably exist up to a critical atom number. Quench cooling the gas, they observed the condensate to grow, then subsequently collapse as the attraction overwhelmed the zero-point energy of the confining potential, in a burst reminiscent of a supernova, with an explosion preceded by an implosion.

Further work on attractive condensates was performed in 2000 by the JILA team of Cornell, Wieman and coworkers. Their instrumentation now had better control, so they used naturally attracting atoms of rubidium-85 (having a negative atom–atom scattering length). Through Feshbach resonance involving a sweep of the magnetic field causing spin flip collisions, they lowered the characteristic, discrete energies at which rubidium bonds, making their Rb-85 atoms repulsive and creating a stable condensate. The reversible flip from attraction to repulsion stems from quantum interference among wave-like condensate atoms.

When the JILA team raised the magnetic field strength further, the condensate suddenly reverted to attraction, imploded and shrank beyond detection, then exploded, expelling about two-thirds of its 10,000 atoms. About half of the atoms in the condensate seemed to have disappeared from the experiment altogether, not seen in the cold remnant or expanding gas cloud.[18] Carl Wieman explained that under current atomic theory this characteristic of the Bose–Einstein condensate could not be explained, because the energy state of an atom near absolute zero should not be enough to cause an implosion; however, subsequent mean field theories have been proposed to explain it. Most likely they formed molecules of two rubidium atoms;[25] the energy gained by this bond imparts velocity sufficient to leave the trap without being detected.

Current research

Unsolved problem in physics: How do we rigorously prove the existence of Bose–Einstein condensates for general interacting systems?

Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile.
The slightest interaction with the outside world can be enough to warm them past the condensation threshold, eliminating their interesting properties and forming a normal gas. Nevertheless, they have proven useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an explosion in experimental and theoretical activity. Examples include experiments that have demonstrated interference between condensates due to wave–particle duality,[26] the study of superfluidity and quantized vortices, the creation of bright matter wave solitons from Bose condensates confined to one dimension, and the slowing of light pulses to very low speeds using electromagnetically induced transparency.[27] Vortices in Bose–Einstein condensates are also currently the subject of analogue gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the lab. Experimenters have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential. These have been used to explore the transition between a superfluid and a Mott insulator,[28] and may be useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Tonks–Girardeau gas.

Bose–Einstein condensates composed of a wide range of isotopes have been produced.[29] Cooling fermions to extremely low temperatures has created degenerate gases, subject to the Pauli exclusion principle. To exhibit Bose–Einstein condensation, the fermions must "pair up" to form bosonic compound particles (e.g. molecules or Cooper pairs). The first molecular condensates were created in November 2003 by the groups of Rudolf Grimm at the University of Innsbruck, Deborah S. Jin at the University of Colorado at Boulder and Wolfgang Ketterle at MIT. Jin quickly went on to create the first fermionic condensate, composed of Cooper pairs.[30]

In 1999, Danish physicist Lene Hau led a team from Harvard University which slowed a beam of light to about 17 meters per second using a superfluid.[31] Hau and her associates have since made a group of condensate atoms recoil from a light pulse such that they recorded the light's phase and amplitude, recovered by a second nearby condensate, in what they term "slow-light-mediated atomic matter-wave amplification" using Bose–Einstein condensates; details are discussed in Nature.[32]

Researchers in the new field of atomtronics use the properties of Bose–Einstein condensates when manipulating groups of identical cold atoms using lasers.[33] Further, BECs have been proposed by Emmanuel David Tannenbaum for anti-stealth technology.[34]

The effect has mainly been observed on alkali atoms, which have nuclear properties particularly suitable for working with traps. As of 2012, using ultra-low temperatures of 10^−7 K or below, Bose–Einstein condensates had been obtained for a multitude of isotopes, mainly of alkali, alkaline earth, and lanthanoid atoms (7Li, 23Na, 39K, 41K, 85Rb, 87Rb, 133Cs, 52Cr, 40Ca, 84Sr, 86Sr, 88Sr, 174Yb, 164Dy, and 168Er). Research was finally successful in hydrogen with the aid of special methods. In contrast, the superfluid state of 4He below 2.17 K is not a good example, because the interaction between the atoms is too strong. Only 8% of atoms are in the ground state near absolute zero, rather than the 100% of a true condensate.
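The 8% figure contrasts with the ideal (noninteracting) uniform Bose gas of the earlier sections, whose ground-state fraction below T_c follows the standard result N₀/N = 1 − (T/T_c)^(3/2). A short Python sketch of that formula (the sampled temperatures are arbitrary):

```python
# Ideal-gas condensate fraction N0/N = 1 - (T/T_c)^(3/2) below T_c.
def condensate_fraction(T, T_c):
    return max(0.0, 1.0 - (T / T_c) ** 1.5)

for ratio in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"T/T_c = {ratio:.2f}: N0/N = {condensate_fraction(ratio, 1.0):.2f}")
# At T = 0 the ideal gas is 100% condensed; strongly interacting
# liquid helium-4 reaches only ~8%, as noted above.
```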
The bosonic behavior of some of these alkali gases appears odd at first sight, because their nuclei have half-integer total spin. It arises from a subtle interplay of electronic and nuclear spins: at ultra-low temperatures and corresponding excitation energies, the half-integer total spin of the electronic shell and the half-integer total spin of the nucleus are coupled by a very weak hyperfine interaction. The total spin of the atom, arising from this coupling, is an integer value. The chemistry of systems at room temperature is determined by the electronic properties, which are essentially fermionic, since room-temperature thermal excitations have typical energies much higher than the hyperfine values.

References

1. ^ Arora, C. P. (2001). Thermodynamics. Tata McGraw-Hill. p. 43, Table 2.4. ISBN 0-07-462014-2.
2. ^ "Leiden University Einstein archive". 27 October 1920. Retrieved 23 March 2011.
3. ^ Clark, Ronald W. (1971). Einstein: The Life and Times. Avon Books. pp. 408–409. ISBN 0-380-01159-X.
4. ^ London, F. (1938). "The λ-Phenomenon of Liquid Helium and the Bose–Einstein Degeneracy". Nature 141 (3571): 643–644. Bibcode:1938Natur.141..643L. doi:10.1038/141643a0.
5. ^ London, F. Superfluids Vol. I and II (reprinted New York: Dover, 1964).
6. ^ "New State of Matter Seen Near Absolute Zero". NIST.
7. ^ Levi, Barbara Goss (2001). "Cornell, Ketterle, and Wieman Share Nobel Prize for Bose–Einstein Condensates". Search & Discovery. Physics Today online. Archived from the original on 24 October 2007. Retrieved 26 January 2008.
8. ^ Klaers, Jan; Schmitt, Julian; Vewinger, Frank; Weitz, Martin (2010). "Bose–Einstein condensation of photons in an optical microcavity". Nature 468 (7323): 545–548. arXiv:1007.4088. Bibcode:2010Natur.468..545K. doi:10.1038/nature09567. PMID 21107426.
9. ^ (sequence A078434 in OEIS)
10. ^ Bogoliubov, N. N. (1947). "On the theory of superfluidity". J. Phys. (USSR) 11: 23.
11. ^ Beliaev, S. T. Zh. Eksp. Teor. Fiz. 34, 418–432 (1958); ibid. 433–446 [Soviet Phys. JETP 3, 299 (1957)].
12. ^ Schick, M. (1971). "Two-Dimensional System of Hard-Core Bosons". Physical Review A 3 (3): 1067. Bibcode:1971PhRvA...3.1067S. doi:10.1103/PhysRevA.3.1067.
13. ^ Kolomeisky, E.; Straley, J. (1992). "Renormalization-group analysis of the ground-state properties of dilute Bose systems in d spatial dimensions". Physical Review B 46 (18): 11749. Bibcode:1992PhRvB..4611749K. doi:10.1103/PhysRevB.46.11749.
14. ^ Kolomeisky, E. B.; Newman, T. J.; Straley, J. P.; Qi, X. (2000). "Low-Dimensional Bose Liquids: Beyond the Gross-Pitaevskii Approximation". Physical Review Letters 85 (6): 1146–1149. arXiv:cond-mat/0002282. Bibcode:2000PhRvL..85.1146K. doi:10.1103/PhysRevLett.85.1146. PMID 10991498.
15. ^ Chui, S.; Ryzhov, V. (2004). "Collapse transition in mixtures of bosons and fermions". Physical Review A 69 (4). Bibcode:2004PhRvA..69d3607C. doi:10.1103/PhysRevA.69.043607.
16. ^ Salasnich, L.; Parola, A.; Reatto, L. (2002). "Effective wave equations for the dynamics of cigar-shaped and disk-shaped Bose condensates". Phys. Rev. A 65 (4): 043614. arXiv:cond-mat/0201395. Bibcode:2002PhRvA..65d3614S. doi:10.1103/PhysRevA.65.043614.
17. ^ Avdeenkov, A. V.; Zloshchastiev, K. G. (2011). "Quantum Bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent". J. Phys. B: At. Mol. Opt. Phys. 44 (19): 195303. arXiv:1108.0847. Bibcode:2011JPhB...44s5303A. doi:10.1088/0953-4075/44/19/195303.
18. ^ a b "Eric A. Cornell and Carl E. Wieman — Nobel Lecture" (PDF).
Wieman — Nobel Lecture" (PDF). 19. Bradley, C. C.; Sackett, C. A.; Tollett, J. J.; Hulet, R. G. (1995). "Evidence of Bose–Einstein Condensation in an Atomic Gas with Attractive Interactions" (PDF). Physical Review Letters 75 (9): 1687–1690. doi:10.1103/PhysRevLett.75.1687. PMID 10060366. 20. Baierlein, Ralph (1999). Thermal Physics. Cambridge University Press. ISBN 0-521-65838-1. 21. Nikuni, T.; Oshikawa, M.; Oosawa, A.; Tanaka, H. (1999). "Bose–Einstein Condensation of Dilute Magnons in TlCuCl3". Physical Review Letters 84 (25): 5868–5871. arXiv:cond-mat/9908118. Bibcode:2000PhRvL..84.5868N. doi:10.1103/PhysRevLett.84.5868. PMID 10991075. 22. Demokritov, S. O.; Demidov, V. E.; Dzyapko, O.; Melkov, G. A.; Serga, A. A.; Hillebrands, B.; Slavin, A. N. (2006). "Bose–Einstein condensation of quasi-equilibrium magnons at room temperature under pumping". Nature 443 (7110): 430–433. Bibcode:2006Natur.443..430D. doi:10.1038/nature05117. PMID 17006509. 23. "Magnon Bose Einstein Condensation made simple". Website of the Westfählische Wilhelms Universität Münster, Prof. Demokritov. Retrieved 25 June 2012. 24. Becker, Christoph; Stellmer, Simon; Soltan-Panahi, Parvis; Dörscher, Sören; Baumert, Mathis; Richter, Eva-Maria; Kronjäger, Jochen; Bongs, Kai; Sengstock, Klaus (2008). "Oscillations and interactions of dark and dark–bright solitons in Bose–Einstein condensates". Nature Physics 4 (6): 496–501. arXiv:0804.0544. Bibcode:2008NatPh...4..496B. doi:10.1038/nphys962. 25. van Putten, M. H. P. M. (2010). "Pair condensates produced in bosenovae". Physics Letters A 374 (33): 3346. Bibcode:2010PhLA..374.3346V. doi:10.1016/j.physleta.2010.06.020. 26. Gorlitz, Axel. "Interference of Condensates (BEC@MIT)". Retrieved 13 October 2009. 27. Dutton, Zachary; Ginsberg, Naomi S.; Slowe, Christopher; Hau, Lene Vestergaard (2004). "The art of taming light: ultra-slow and stopped light" (PDF). Europhysics News 35 (2): 33. Bibcode:2004ENews..35...33D. doi:10.1051/epn:2004201. 28. "From Superfluid to Insulator: Bose–Einstein Condensate Undergoes a Quantum Phase Transition". Retrieved 13 October 2009. 29. "Ten of the best for BEC". 1 June 2005. 30. "Fermionic condensate makes its debut". 28 January 2004. 31. Cromie, William J. (18 February 1999). "Physicists Slow Speed of Light". The Harvard University Gazette. Retrieved 26 January 2008. 32. Ginsberg, N. S.; Garner, S. R.; Hau, L. V. (2007). "Coherent control of optical information with matter wave dynamics". Nature 445 (7128): 623–626. doi:10.1038/nature05493. PMID 17287804. 33. Weiss, P. (12 February 2000). "Atomtronics may be the new electronics". Science News Online 157 (7): 104. doi:10.2307/4012185. Retrieved 12 February 2011. 34. Tannenbaum, Emmanuel David (2012). "Gravimetric Radar: Gravity-based detection of a point-mass moving in a static background". arXiv:1208.2377 [physics.ins-det].
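To illustrate the spin counting mentioned before the reference list, the snippet below combines a half-integer nuclear spin I with the half-integer electronic angular momentum J = 1/2 of an alkali atom in its ground state; the hyperfine-coupled total spin F then takes only integer values, which is why these atoms behave as bosons. The nuclear spins are standard tabulated values; the code is only a sketch of the counting argument, not of the coupling dynamics.

from fractions import Fraction

def total_spins(I, J):
    """Allowed total spins F = |I-J|, |I-J|+1, ..., I+J."""
    lo, hi = abs(I - J), I + J
    return [lo + k for k in range(int(hi - lo) + 1)]

J = Fraction(1, 2)  # electronic angular momentum of an alkali ground state
for name, I in [("7Li", Fraction(3, 2)), ("23Na", Fraction(3, 2)),
                ("87Rb", Fraction(3, 2)), ("133Cs", Fraction(7, 2))]:
    F = total_spins(I, J)
    kind = "boson" if all(f.denominator == 1 for f in F) else "fermion"
    print(name, "F =", [str(f) for f in F], "->", kind)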
Mae's Revenge Ch. 01 Karel, a 6'7" tall, blond, blue-eyed and muscularly built prototypical Dutch physics major in his early twenties, was looking for an affordable place to live in Detroit, a must after he was chosen to be a research student at the Department of Physics of the Jesuit College of Detroit. Out of a host of reactions to his newspaper ad, one stood out: a small room plus shared meals for just $300 per month, less than half an hour's cycling from his faculty building. A woman with a mellow, smoky voice picked up the phone. They agreed on a visit. He took an instant liking to the lady of the house, who introduced herself as Mae Johnson, a petite, middle-aged, pitch-black lady in her late forties with awesome body curves -- if your gaze managed to escape from her sparkling, intelligent eyes with a speck of sadness in them. Her discipline and willpower hid a warm, tender soul. It turned out she lived off a meager daytime job and needed an extra source of income to make ends meet. He liked the spacious room with its huge two-person bed and felt a little at home after Mae guided him around. "This was our bedroom, William's and mine, before he ran away with our teenage neighbor girl," she said softly, with a streak of sorrow in her voice. "I decided to move to our guest room because it feels terrible to sleep in this big bed here alone." "I understand," Karel answered. He saw a framed picture of William and Mae together and realized it would be hard to find a man who looked more different from the small, wiry, and likewise ebony unfaithful William than he did. She would not be in danger of being reminded of her runaway husband too often. She smiled and touched his arm lightly. He moved in. Every day he attended college and did his research at the faculty, while in the evening he shared meals and deep talks with Mae. She was an arts teacher and managed to survive on a small part-time tuition job at a nearby college. Soon after, he discovered another reason why Mae was so welcoming to her new occupant. The neighborhood obviously had a problem. A group of drug users had chosen Mae's block as their daily meeting point and regularly rang Mae's doorbell for money or to use her toilet - if they didn't use her garden for that purpose. They were in for an unpleasant surprise when Karel opened the door instead of the fragile Mae. It took an hour and a half of heated argument and a short fight with the rowdiest of the lot, including catapulting the hoodlum over the fence, before the group decided to move their lair to a less hostile place. "Thank you, Karel," she whispered. "I was so scared. Finally someone dared to fight back." She hugged him, and in an instant reaction he held the delicate lady tightly in his arms. Her feminine scent entered his nostrils, her breath touched his neck. Mae gently tended to his wounds and afterward caressed his hands for a long time with her sleek, fine fingers. Suddenly her eyes met his, absorbing everything. His heart pounded. What was happening to him? "Come, let's make our garden a better place. You Dutch have green fingers, don't you? I want to forget those guys," she said, and they left in her old pickup for the nearby garden center. That evening he got plenty of glimpses of her marvelous booty while they planted the flower and vegetable saplings, and of her delicate B-cup breasts when her sweat glued her T-shirt to her body. He tried to behave like a well-educated young man, looking anywhere but at her body, and failed miserably.
It certainly didn't help that she kissed him briefly on the lips after they finished transforming her desolate garden into the beginnings of what would become a lustrous Kew franchise. He heard her sing when she took a shower afterward. He tried to think about something else, but images of her naked body kept popping up in his brain. He was in love. He avoided her, tried to forget his feelings. Mae pinpointed the problem with ease. "I know why you are so shy with me, hon," she said when she served him the next dinner. Karel decided to tell the truth. Mae was way too intelligent to be fooled by amateurish lies. He mustered all his courage. "You are right, Mae. I am sorry, I... will leave your house." Mae came next to his chair and caressed his hair. Her small breasts with erect nipples were clearly visible through her transparent white blouse, close to his face. Her bittersweet body odor, the product of a weary day of lessons to unruly students at her college, made his mind spin and awoke his penis. "No hon, why, are you crazy? I have never felt so great in my life as I do now. Please, stay with me." "But..." he stuttered. He looked up. Her intense look took his breath away. "I feel safe with you, hon. You are so responsible. You know, and I know, it will not work out between us. You will not break this old lady's heart by filling it with foolish dreams, will you, Karel?" "I am sorry, Mae. I-don't-want-" They looked deep into each other's eyes. He knew his feelings did not go unanswered. "Don't be, dear boy," she whispered eventually with a slight tremble in her voice. "To live is to feel pain. I made the choice to live long ago. I am happy nonetheless, even while feeling the sweet pain you are giving me right now. I know I am too old; I will never be desired by such a handsome man as you again. Look, for instance, at my wrists. So... old. So... ugly." He took her hand, caressing her fingers lightly. He kissed her wrinkled wrist hungrily, again and again, then continued to fondle her hand. "I like your wrists." She breathed heavily. Tears sprang up in her eyes. "Karel, stop it!" "Only if you stop depressing yourself with things that are not true, lovely beauty. Choose to live. Choose to fight back." She broke down. He stood up, cradling her feline, trembling body in his arms. She hugged him, shivered, pressed her face into his shoulder. She cried soundlessly. Her tears trickled through his shirt. Suddenly, she loosened her grip. "Never do this to me again! Never, do you hear!" Her bright eyes flamed. "Out of my sight! I want to be alone." He went to his room and tried to concentrate on his work. The sound of spattering water in the bathroom next to his room broke his concentration. He imagined pearls of water seeping through the helmet of Mae's curled hair, kissing her lips, exploring her neck, caressing her delicate breasts, covering her thighs, teasing her round butt, tickling her cunt while her tiny fingers massaged her belly, her bush. He would have to leave soon, he knew. Mae could forgive everything and everyone, except the man who made her lose control. He started packing his clothes and books. He had underestimated her determination. After an hour or so, someone opened his door. Mae entered his room, dressed in a summer gown. She looked fresh, radiant and energetic like a water nymph from some Micronesian island, fully aware of the devastating blow she dealt to his senses. She saw his opened suitcase and closed it resolutely. "Mister Karel van Doorn, you will stay." Surprised, he looked into her eyes.
"I will help you hon. Find your true soul-mate." Like all of Detroit, the Jesuit College was clearly split between black and white students. Mae's neighborhood was almost exclusively black, while most of his fellow students rented their rooms in affluent white neighborhoods. He wondered how Mae could connect to a world a galaxy away, socially spoken. "What do you have in store, Mae?" He tried to smile. "By changing you a little, hon. Starting with dance. By the end of three months, you will be together with your true love." "It.. will be difficult. To put you out of my head, I mean." "I know, handsome young man." Mae kissed him briefly on his lips, enough to send his heart into a frenzy. "So I have this extra rule. You are not allowed to touch me or say anything nice about me or my body, except by my permission." Mae kept her promise. Every night, after their meal and dish washing, they trained dance for a while. He tried to control his urge to hold her delicate body to his and kiss her full lips. "Put your hand around my middle, hon. Or even better, feel my butt. Now. Don't be shy. Good dancers ain't shy." After a short hesitation, he explored her perfectly curved African buttocks. "You DO like my ass, eh?" His face colored red. "Yes," he breathed. Mae smiled deviously. "I don't mind, hon. Continue this way until you don't feel shy any more." She patted his buttocks. "I like your rear too, hon." After he heard her giggle on the phone with her friend Sally. Mae's shock therapy helped. He wasn't any more practicing dance, they were the dance. Their movements evolved into one organic whole. There was just the music from Mae's old hi-fi set, Mae's body against his, her arousing body scent, her sweet mellow voice when she sang along with the music. After a long exhaustive session, they sat together on the old coach in her living room. "What are you researching, hon?" "Quantum entanglement. If two very small particles are together for a while, they become like one. Even when they separate after, they can influence each other. Even if they are billions of light years apart. Most people consider it boring. I blew already three dates by talking about it." "On the contrary, hon. It sounds like love and magic are true," she whispered softly and looked deep in his blue eyes. "Tell me more. Mind the math, I am an fully-bred alpha mare." She leaned to his shoulder like a small girl. He did, totally absorbed in her warm, suddenly intense look, carefully avoiding things like Hermitian operators and Schrödinger equations. She laid her left hand down on his knee, looked at him with a slight pain in her eyes. Please make me feel better, hon, he understood wordlessly. Still he remembered their agreement. He took her hand gently into his, looked in her eyes. Wordlessly, she agreed. He brought her hand to his lips, kissed her delicious ebony wrist, then the slightly lighter interior of her hand. He explored the fine knuckles of her hand with his lips, kissed them lightly, then hungrily explored her bare arms up towards her elbows. "You are so beautiful and yet so real, hon," he whispered. "Like a supernaturally gifted artist crafted all of you out of precious ebony wood." She smiled like a shy schoolgirl, trying to control her tears. "Thanks, hon. I needed this." She nestled herself in his arms. "Now hold me. I had a terrible day today." He caressed her while he listened intensely to her sobs and her sad adventures with the overpaid school board after she tried to liven up her lessons a little. 
Without thinking he kissed her neck. She took his hand between hers and smiled like an angel. He kissed her again, and again. Suddenly she hugged him tightly and kissed him frenziedly. "Behave, hon," she whispered in a shivering voice. "I cannot control myself any more; you must do it now for both of us." Wordlessly he obliged. Finally even the perfectionist Mae was content with his level of dancing skill. "And now, hon, join the students' dance club. Your first impression will be great. The girls will vie for you." That was where he met Vivian. At five feet nine inches, she was the wild dream of every high school boy. Long blonde hair down to her buttocks, breasts like melons and a tightly cut ass. What's more, she was captain of the cheerleader team. Needless to say, many boys swarmed around her, especially those from the college's football team. To no avail: when Vivian saw him dancing, she didn't want to let him go. Much to the chagrin of the football players. There were just four weeks between now and the college gala, so they practiced with vigor. In short, she was the perfect means to get over his crush on Mae. They dated at her place, a sorority house with a three-Greek-letter name. The other inhabitants giggled when they stumbled upstairs to Vivian's bedroom. Right after he closed the door, she tongue-kissed him. He wanted to undress. She stopped him. "Not on the first date. Just fooling around for now. Come, I have some candy for you." She pulled off her shirt and enjoyed his horny looks. He kissed her nipples, then slowly circled them with the tip of his tongue. It helped. He managed to put Mae out of his head. At least for a few seconds. He rode home on his bike. Mae smiled when he told her he had a date. "I told you so, hon. Bring her over, I would like to meet her." He did. Vivian looked around with disgust when she parked the old sorority Buick in front of Mae's house. "How can you live in a dump like this?" "Look at the flowers in the windows and the garden. Mae makes the best of it," Karel apologized. And indeed, since the departure of the junkies in March, they had transformed Mae's garden into an oasis of color in the desolate neighborhood. "Mae this, Mae that," Vivian snubbed. "Why are you so obsessed with that old African-American?" Yuck, what a racist, Karel thought to himself. Mae opened the door and smiled. Once again, Karel knew why he was in love. "Hello Vivian, welcome." Her voice was warm. Karel could feel Mae's intense jealousy. "Hello, Ms. Johnson," Vivian replied formally. "Thank you so much for teaching my dance partner how to dance." "You're welcome, Vivian. Would you like some coffee?" Mae's tone was far less welcoming. "No thanks, madam. But I would like to see your dancing with Carl." And off they went. It went, as always, like a dream. Nothing else existed but Mae, her sparkling eyes, her gentle touch. Suddenly she put her face in his neck. He felt her soft lips part, kissing his neck, then the tip of her tongue licking his skin. So much for his attempt. "Carl, I am going. Now!" Vivian snapped jealously. He woke up. "Vivian, eh..." "Go fuck that ugly old black broad. Brad is waiting for me. Don't think any decent girl will want to go with you after this, European trash." Vivian slammed the front door and took off in the old Buick, roaring in low gear. With his face filled with disgust, Karel made it to the bathroom. "Karel? Is everything alright, honey?" Mae stood in the bathroom, behind the shower curtain. It wasn't.
He hated Vivian more than he had hated anyone in his life. Even after he rinsed his mouth several times and scrubbed his body, he still felt revulsion at every memory of Vivian's touch. He saw Mae's butt leaning against the shower curtain, which followed her delicious curves. He tried not to look at it, to no avail. Suddenly she opened the curtain and stepped into the shower water. They looked into each other's eyes. She was fully naked. She was even more gorgeous than he had imagined. A trickle of water dripped between her firm breasts with erect nipples and meandered over her belly. Karel lost his breath, overwhelmed by her slim but well-proportioned body. She turned her back to him and sat down, her face to the wall. "Leave me alone, Karel. Vivian was right. I am so ugly. I saw how shocked you were." He sat down behind her on the tiled floor and wrapped his arms around her shivering body, below her arms. The water drizzled over them. "Yes, I was shocked. By your beauty." She sniffed. "You lie, hon. You just want to make this old lonely gal happy again." He kissed her neck and wrapped his muscular legs around her. "I mean it, merry Mae. You are the most beautiful creature I have ever laid my eyes on." She lifted her head. "No one ever said that to me. William called me the ugliest broad on the street. Swear to God you are not fooling with me." "I wish I could," he whispered in her ear. "I did my best to forget you, you know that; I even dated that bitch, but..." She climbed onto his lap, her legs spread, and buried her face in his shoulder. Her ass rested on his swollen dick. "Hold me tight, hon, and say it again and again, until I believe you deep in my old lonely heart." He held her tightly in his arms, one arm around her shoulders, another around her ass, then kissed her incessantly, interrupted only by his song about the beauty of her eyes, her smile, her skin, her calves, her legs, her buttocks. Afterward he kissed her shoulders, her temples, her forehead. She kissed his neck. "Did you ever have a gal before?" "No." "I knew it, because otherwise she would never have let you go." "Now, repeat that again and again, Mae." "Mae will never ever let you go, hon. I have wanted you since you first came here." Then he understood. "You did it on purpose, Mae. You did exactly that which would infuriate her." She smiled. "That dumb bimbo asked for it. Do you mind, dear?" "Thanks." He said it without any sarcasm. "I mean it, Mae. I belong to you. Forever." She brought her lips to his ear and whispered, "I know, honey. God damn, I know." There was fire in her eyes. "You will not stop until I end up in the madhouse, Mae." It was not entirely a joke. Mae smiled in a way that put the Mona Lisa to shame. "Learning to love is difficult, honey. It is painful, it is hard. You have learned almost everything you need to know for now." "I don't want to learn any more, lovely black princess," he whispered, breathing heavily. "No more dates for me. I know you won't accept me, but..." Her eyes glittered like black whirlpools of stars. "All will be right, my dream prince. Only sweet lessons now. You and I. Forever. Your Mae promises you." And her big, soft lips exploded against his. And again. Before he knew what he was doing he kissed her hungrily. She parted her lips and let him in, then explored his mouth with her long, hungry tongue. This felt so right. He held her tiny body against his bare chest, his hands ravishing her buttocks. His swollen dick was squeezed between her round ass cheeks. "Mae will make you hers now," she whispered.
She spread her legs, and before he knew it she had embraced his phallus with her waiting, hot cunt. She was surprisingly tight. She slowly fucked him with wavy, tender movements. He breathed heavily. "Control yourself, hon. I want our first time to last long," she panted, deep and tense. Her tight, tender pelvic movements, the hot, hungry look in her eyes, the pressure of her big round butt on his loins made just that more and more difficult for him. "Sorry, love, I'm coming," he whispered. Gently, she pushed him back. He let himself slide onto his back, ignoring the cold tiles touching his shoulder blades. He moved the curtain aside, wrapping one arm around her thin spine, another around her rhythmically contracting and relaxing booty. He slowly started moving his pelvis in opposition to hers. "Let me do the work for now, dear," she whispered. "I know how to keep a man from coming." She looked deep into his eyes. Time after time, she seemed to sense when he was close to coming and then slowed her pelvic movements, pressing the base of his penis with her thumb. He squeezed her buttocks with both hands, then tickled her inner thighs. She bit his shoulder, squeezed his loins with her thighs, then moaned. He exploded deep inside her. "Sorry, love," she whispered, panting. "So soon..." "Sorry? For the best experience of my life, Venus?" "Ditto, love god." She hugged him and nestled her face in his neck. "Please let your merry old Mae lie here for a while and feel her handsome young lover rustle deep inside her." He did, holding her delicious body tightly to his, feeling his penis rejuvenating in her fountain of life. He gently took her, again massaging her ass. She kissed him, slipped off him, then kissed his dick. "We will meet again soon, little Karel. Just recuperate a little for tonight. Tell your boss to behave with this poor innocent elder lady." They rinsed each other under the still-drizzling shower. by Dutchdream ©
On Lennard-Jones-type potentials on the half-line Federica Gregorio 1, Dep. of Information Engineering, Electrical Engineering and Applied Mathematics, Università degli Studi di Salerno, Fisciano (SA); Joachim Kerner 2, Department of Mathematics and Computer Science, FernUniversität in Hagen, 58084 Hagen. Abstract. In this paper we study a particle under the influence of a Lennard-Jones potential moving in a simple quantum wire modelled by the positive half-line. Despite its physical significance, this potential is only rarely studied in the literature, and due to its singularity at the origin it cannot be treated as a standard perturbation of the one-dimensional Laplacian. It is therefore our aim to provide a thorough description of the full Hamiltonian in one dimension via the construction of a suitable quadratic form. Our results include a discussion of spectral and scattering properties, which finally allows us to generalise some results from [Rob74] as well as [RS78]. 1 Introduction In this note we are concerned with the Schrödinger operator H = -d²/dx² + γ₁/x¹² - γ₂/x⁶ (1.1) defined on the Hilbert space L²(ℝ₊), with constants γ₁, γ₂ > 0. The potential term in (1.1) is the Lennard-Jones potential, arguably one of the most important potentials in solid-state physics, used to describe the interaction between two neutral atoms [Jon37, DJ57]. Its main application lies in the description of the crystallisation of noble gases such as, e.g., argon. Note that the second term in the potential is the attractive van der Waals interaction (dipole-dipole interaction), while the first term accounts for the repulsion at small distances, itself a consequence of the Pauli exclusion principle. Of course, using a separation of variables into relative and centre-of-mass coordinates, it is natural to consider the operator (1.1) if one wanted to describe two neutral atoms on the full line ℝ. However, another interpretation is obtained by regarding ℝ₊ as a quantum wire (or quantum graph) with a vertex at the origin, at which one imagines an additional complex internal structure such as a quantum dot [HV16]. Here a quantum dot has to be thought of as a relatively small box with hard walls in which another particle is placed ("particle in a box"). If one then assumes, to first approximation, that the particle in the box is not excited through the interaction with the other particle moving in the wire, the potential experienced by the particle in the wire is exactly of the Lennard-Jones type. Also, from a mathematical point of view the Lennard-Jones potential is interesting due to its high degree of singularity at the origin, which implies that it is not relatively bounded with respect to the Laplacian, i.e., it is not of Kato class. This also implies that known results in the scattering theory for integrable and relatively bounded potentials do not apply, which motivated Robinson to study highly singular potentials in more detail [Rob74]. In particular, he proved the existence and completeness of the wave operators for the Lennard-Jones potential in three dimensions [Theorem 5.6, [Rob74]]. It is one of the aims of this paper to generalise this result to one dimension, and even to strengthen it by proving asymptotic completeness [BHE08]; see Section 4.
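Although the analysis below is purely analytic, the qualitative picture (a potential that is strongly repulsive at the origin, attractive at intermediate distances, and supporting at most finitely many bound states) can be checked numerically. The following sketch is not part of the paper: it discretises H = -d^2/dx^2 + g1/x^12 - g2/x^6 on a finite interval with Dirichlet conditions using a standard finite-difference scheme, for arbitrarily chosen couplings g1, g2, and lists the negative eigenvalues; the finite box only approximates the half-line problem.

import numpy as np

g1, g2 = 1.0, 10.0            # assumed couplings (gamma_1, gamma_2)
L, N = 30.0, 2000             # box size and number of interior grid points
x = np.linspace(0.0, L, N + 2)[1:-1]   # interior points; Dirichlet at 0 and L
h = x[1] - x[0]

V = g1 / x**12 - g2 / x**6    # Lennard-Jones potential (singular at x = 0)
main = 2.0 / h**2 + V         # diagonal of the discretised -d^2/dx^2 + V
off  = -np.ones(N - 1) / h**2 # off-diagonals of the discrete Laplacian

Hmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(Hmat)
print("negative eigenvalues:", E[E < 0])   # a finite list, cf. Theorem 3.2

For these couplings the well depth is g2^2/(4 g1) = 25 at its minimum, and the discretisation returns a handful of negative eigenvalues, consistent with the finiteness statement proved in Section 3.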
In addition, motivated by the classical N-body problem, Radin and Simon characterised, for Schrödinger operators on ℝⁿ with potentials in the Kato class, subspaces which are invariant under the time-evolution operator; this, in particular, implies explicit upper bounds on ⟨ψ_t, x²ψ_t⟩, i.e., the expectation value of the squared distance from the origin, showing that this quantity remains finite in finite time. In Section 5 we will generalise this result to our setting, i.e., to a potential which is not of Kato class. In Section 2 we start by establishing a rigorous realisation of the operator (1.1) via the construction of a suitable quadratic form. We characterise the form as well as the operator domain explicitly, also proving H²-regularity and essential self-adjointness of the minimal operator. In Section 3 we then study spectral properties, characterising the discrete as well as the essential part of the spectrum. In addition, we prove that the essential part is purely absolutely continuous. We note that the treatment of singular potentials with quadratic-form methods goes back to Simon's thesis (1971); see also [NZ92] and references therein. 2 Formulation of the model We consider a (spinless) particle moving on the half-line ℝ₊ = (0, ∞) under the influence of an external potential of Lennard-Jones type. More explicitly, the Hamiltonian of the particle shall be given by (2.1), i.e., H = -d²/dx² + v(x) with v(x) = γ₁/x¹² - γ₂/x⁶ and γ₁, γ₂ > 0. The quadratic form formally associated with this Hamiltonian is q[φ] = ∫₀^∞ ( |φ′(x)|² + v(x)|φ(x)|² ) dx on its maximal form domain D(q). (2.2) Theorem 2.1. The form q defined on (2.2) is densely defined, closed and bounded from below on L²(ℝ₊). Proof. Since the smooth functions of compact support in (0, ∞) belong to D(q) and are dense in L²(ℝ₊), density follows readily. In a next step we realise that, completing the square in x⁻⁶, v(x) = γ₁/x¹² - γ₂/x⁶ ≥ -γ₂²/(4γ₁) for all x > 0, (2.3) and hence the form is bounded from below. Now, let (φ_n) ⊂ D(q) be a Cauchy sequence with respect to the form norm. Due to completeness of L²(ℝ₊) there exists a function φ ∈ L²(ℝ₊) such that φ_n → φ in L²-norm. Furthermore, employing the lemma of Fatou we control the gradient and potential terms of φ_n - φ, since (φ_n) is Cauchy with respect to the form norm. Using (2.3) we readily conclude that φ ∈ D(q) and that q[φ_n - φ] → 0 as n → ∞. ∎ Since the form domain (2.2) is not very explicit, we aim to characterise it further. In a first result we show that all functions in this domain fulfil Dirichlet boundary conditions at zero. Proposition 2.2. For φ ∈ D(q) one has φ(0) = 0. Proof. We first note that the boundary value φ(0) is well-defined due to the trace theorem [Dob05] for Sobolev functions. Now assume that φ(0) ≠ 0: since H¹-functions are continuous, we conclude the existence of a small interval (0, δ), δ > 0, such that |φ(x)| ≥ c for all x ∈ (0, δ) and some c > 0. This however immediately implies that ∫₀^δ |φ(x)|²/x¹² dx = ∞, and hence φ ∉ D(q). ∎ By the representation theorem of quadratic forms [BHE08] there exists a unique self-adjoint operator associated with q. As shown in the next statement, this operator is indeed given by (2.1). Lemma 2.3. The operator associated with q coincides with the operator H. Proof. According to the representation theorem of forms [BHE08] we have an abstract characterisation of the domain of the associated operator. Now, let φ be in this domain: then φ is locally H² (this follows, e.g., using the difference-quotient technique [GT83, Dob05], which is the standard technique to show local H²-regularity), and an integration by parts yields the identity (2.4) between the form and the candidate operator action. By (2.4) one then obtains that the associated operator acts as -φ″ + vφ, by choosing compactly supported test functions and taking their density in L²(ℝ₊) into account. ∎ Remark 2.4. Based on Lemma 2.3 we identify the two operators and simply write H with domain D(H) in the sequel. Until now we have identified one self-adjoint realisation of (2.1) through the construction of a suitable quadratic form. The following result shows that this is indeed the only one existing. Proposition 2.5. The operator H is the unique self-adjoint extension of the minimal operator, i.e., of (2.1) defined on the smooth functions of compact support in (0, ∞). Proof. This follows from the theory of Sturm-Liouville operators as presented in [Sch12].
In particular, [Propositions 15.11, 15.12, [Sch12]] show that H is in the so-called limit-point case both at zero and at infinity. The statement then follows with [Theorem 15.10, [Sch12]], which shows that the minimal operator has deficiency indices (0, 0). ∎ In a next step we characterise the domain of H in more detail by proving that D(H) ⊂ H²(ℝ₊). In other words, we establish (global) H²-regularity [Gri85, GT83, Dob05]. We also prove that the derivative at zero vanishes, i.e., functions in the operator domain fulfil Neumann boundary conditions as well. Theorem 2.6. One has D(H) ⊂ H²(ℝ₊). Furthermore, if φ ∈ D(H) then φ′(0) = 0. Proof. We first prove that D(H) ⊂ H²(ℝ₊): we note that every φ ∈ D(H) is locally in H² (see the proof of Lemma 2.3) and hence φ″ exists as a weak derivative. Proposition 2.5 then shows that the smooth functions of compact support form a core, i.e., for any φ ∈ D(H) there exists a sequence (φ_n) of such functions converging to φ with respect to the graph norm of H. Hence, by the triangle inequality, (φ_n″) is Cauchy in L²(ℝ₊), and therefore φ ∈ H²(ℝ₊). Note that the required a-priori bound can be shown using the methods of the proof of [Proposition 3.2, [MPSR05]]: for this, one considers the operator with suitably shifted potential (note that this operator is self-adjoint on the same domain as H) and estimates the potential term relative to the operator with an arbitrarily small relative constant. We now turn to the second part of the statement: we first observe that φ ∈ H²(ℝ₊) implies that φ′ is continuous up to the boundary, due to standard Sobolev embeddings. Furthermore, one has φ(0) = 0 by Proposition 2.2. Now, assume that φ′(0) ≠ 0. Consequently, there exists a δ > 0 such that |φ(x)| ≥ c·x for all x ∈ (0, δ) and some c > 0, which implies ∫₀^δ |φ(x)|²/x¹² dx ≥ c² ∫₀^δ x⁻¹⁰ dx = ∞. This is a contradiction to φ ∈ D(q) and hence proves the statement. ∎ 3 On the spectrum of H In this section we characterise the spectrum of H, and in a first step we look at the essential part of the spectrum and prove that σ_ess(H) = [0, ∞). Although this result is rather standard in the theory of Schrödinger operators [Sim00], we shall add a proof for the sake of completeness. Theorem 3.1 (Essential spectrum). We have σ_ess(H) = [0, ∞). Proof. We first show that [0, ∞) ⊂ σ_ess(H): let an arbitrary λ ≥ 0 be given. We use the Weyl characterisation in the version of quadratic forms [Sto01]; in this context a suitable Weyl sequence (φ_n) is obtained from the (normalised) ground-state eigenfunctions of the Dirichlet Laplacian on intervals moving off to infinity, suitably modulated to produce the energy λ. Given that the supports escape to infinity as n → ∞, the sequence converges weakly to zero in L²(ℝ₊); furthermore, the potential contribution vanishes in the limit since v(x) → 0 for large x. To prove that no negative value is contained in the essential spectrum we use an operator-bracketing argument [BHE08]. More explicitly, we construct a comparison operator by imposing an additional (decoupling) Neumann condition at some point R > 0. In the sense of forms, this operator is smaller than H, which implies a corresponding inclusion of the essential spectra. Since the part of the comparison operator on the bounded interval (0, R) has discrete spectrum only, we conclude that the essential spectrum is bounded from below by the infimum of the potential on (R, ∞). Finally, since this infimum tends to zero as R → ∞, we conclude the statement since R can be chosen arbitrarily large. ∎ We now turn attention towards the discrete part of the spectrum. Theorem 3.2 (Discrete spectrum). The number of negative eigenvalues of H is finite. Proof. The statement readily follows by [Theorem 5.1, [BS91]], taking Proposition 2.2 into account. ∎ In a next result we show that the discrete spectrum may indeed be empty for some choices of γ₁, γ₂, even though the potential has a negative part. Theorem 3.3 (Absence of discrete spectrum). Whenever ∫₀^∞ x·v₋(x) dx = (3/20)·γ₂^{5/3}/γ₁^{2/3} < 1, with v₋ the negative part of the potential, then σ_d(H) = ∅. Proof. Proposition 2.2 allows us to apply [Theorem 5.1, [BS91]], which states that the number of eigenvalues is bounded from above by the integral ∫₀^∞ x·v₋(x) dx. Evaluating this integral then yields the statement. ∎ In a final result we show that the essential spectrum is indeed purely absolutely continuous. Theorem 3.4 (Absolutely continuous spectrum). We have σ_ac(H) = [0, ∞) and, furthermore, the spectrum is purely absolutely continuous on (0, ∞). Proof. The fact that the spectrum on (0, ∞) is purely absolutely continuous readily follows from [Theorem 1.3, [Rem98]], taking Proposition 2.2 into account.
Furthermore, since the singular continuous spectrum cannot be supported on the single point {0}, the statement will follow if we prove that zero is not an eigenvalue: hence assume that φ is a normalised (real-valued) eigenfunction to the eigenvalue zero. Since φ is non-trivial and decays at infinity, there exists a point x₀ > 0 at which φ has a local maximum. Reflecting the problem across the point x₀ then yields an eigenfunction to eigenvalue zero of a self-adjoint Schrödinger operator on the whole line, with potential obtained by reflection. However, such an operator is known to have no eigenvalue zero [Ram87]. ∎ 4 On the scattering theory: existence and completeness of the wave operators In this section we discuss the scattering properties of the pair of self-adjoint Hamiltonians (H, H₀), H₀ being the self-adjoint one-dimensional Laplacian on L²(ℝ₊) with Dirichlet boundary conditions at zero. In particular, we want to establish the existence and completeness of the corresponding wave operators Ω±(H, H₀) and Ω±(H₀, H); see [Kat66, Rob74, BHE08] for more details. As a matter of fact, since there is no singular continuous spectrum due to Theorem 3.4, we will actually prove that the wave operators are asymptotically complete [BHE08]. Note that the existence and completeness of the wave operators for the Lennard-Jones potential in three dimensions has been established in [Theorem 5.6, [Rob74]]. Hence the following result generalises this statement to the one-dimensional setting. Theorem 4.1. The wave operators Ω±(H, H₀) and Ω±(H₀, H) exist and are complete. Proof. We will prove the statement using the Birman-Kuroda theorem [BHE08]. Also, we restrict ourselves to Ω±(H, H₀), the other case being analogous. We introduce some comparison operators: for R > 0, let H^R denote the self-adjoint realisation of (2.1), as in the proof of Theorem 2.1, with an additional Dirichlet boundary condition at R. It decomposes as the direct sum of the self-adjoint realisation of (2.1) on (0, R) with Dirichlet boundary conditions and the self-adjoint realisation of (2.1) on (R, ∞), again with Dirichlet boundary condition at R. Now, taking into account Krein's resolvent formula (see, e.g., [Theorem 14.18, [Sch12]]), we directly conclude that the difference of the resolvents of H and H^R is of finite rank and hence of trace class. Furthermore, only the part on (R, ∞) contributes to the scattering; in the last step we used that the operator on (0, R) has purely discrete spectrum. Now, due to the integrability of the potential on (R, ∞), standard results imply that the corresponding wave operators exist and are complete [Yaf92, Yaf10]. Finally, Krein's resolvent formula implies that the wave operators Ω±(H, H₀) are complete, since the difference of the resolvents is of finite rank and hence of trace class. The statement then follows by the well-known chain rule for wave operators [Proposition 15.2.2, [BHE08]]. 5 On an invariant domain of the time-evolution operator and an estimate on the spreading of wave packets As in [RS78] we are interested in establishing an estimate on ⟨ψ_t, x²ψ_t⟩, i.e., the expectation value of the square of the position operator at time t, for arbitrary initial datum ψ₀ in a suitable subspace; here ψ_t = e^{-iHt}ψ₀. We again stress that the results of [RS78] are not directly applicable since the potential is not contained in the Kato class. Note that we write ψ̂ for the Fourier transform of ψ, restricted to ℝ₊ where appropriate. As a first result we establish the following. Lemma 5.1. For any ψ₀ in the subspace D defined below there exist constants a, b > 0 such that ‖xψ_t‖ ≤ a + b|t| for all t ∈ ℝ. Proof. Due to positivity of the repulsive part of the potential one can compare with the free dynamics; the statement then readily follows from [eq. (4), [RS78]], which provides exactly such a linear-in-time bound. ∎ In fact, one can directly show the following. Theorem 5.2. Define the subspace D := { ψ ∈ D(H) : ‖xψ‖ < ∞ }, equipped with the norm ‖ψ‖_D := ‖ψ‖ + ‖Hψ‖ + ‖xψ‖. Then e^{-iHt} maps D onto D as a bounded operator, i.e., ‖ψ_t‖_D ≤ C(1 + |t|)‖ψ₀‖_D for some constant C > 0. (5.1) Proof. Due to Lemma 5.1 and its proof, estimate (5.1) follows, and hence e^{-iHt} is a bounded operator from D into D. Now suppose that e^{-iHt} is not onto D. Then there exists an element φ ∈ D which is not in the range of e^{-iHt}.
However, choosing as initial datum ψ₀ = e^{iHt}φ, which belongs to D by the same argument applied to the reversed time direction, one arrives at a contradiction, and the statement is proved. ∎ Acknowledgements. The authors are very happy to thank R. Weder (Universidad Nacional Autónoma de México) for many helpful comments on the manuscript. JK also wants to thank S. Egger (Technion, Israel) for helpful discussions. We also want to thank the referee for pointing out interesting references. 1. E-mail address: fgregorio@unisa.it 2. E-mail address: joachim.kerner@fernuni-hagen.de References [BHE08] J. Blank, M. Havliček, and P. Exner, Hilbert space operators in quantum physics, Springer, 2008. [BS91] F. A. Berezin and M. A. Shubin, The Schrödinger equation, Kluwer Academic, 1991. [DJ57] E. R. Dobbs and G. O. Jones, Theory and properties of solid argon, Reports on Progress in Physics 20 (1957), no. 1, 516. [Dob05] M. Dobrowolski, Angewandte Funktionalanalysis: Funktionalanalysis, Sobolev-Räume und Elliptische Differentialgleichungen, Springer, 2005. [Gri85] P. Grisvard, Elliptic problems in nonsmooth domains, Monographs and Studies in Mathematics, vol. 24, Pitman, 1985. [GT83] D. Gilbarg and N. S. Trudinger, Elliptic partial differential equations of second order, Springer, 1983. [HV16] P. Harrison and A. Valavanis, Quantum wells, wires and dots, Wiley, 2016. [Jon37] J. E. Lennard-Jones, The equation of state of gases and critical phenomena, Physica IV 10 (1937). [Kat66] T. Kato, Perturbation theory for linear operators, Springer, 1966. [MPSR05] G. Metafune, J. Prüss, R. Schnaubelt, and A. Rhandi, L^p-regularity for elliptic operators with unbounded coefficients, Adv. Differential Equations 10 (2005), no. 10, 1131–1164. [NZ92] H. Neidhardt and V. A. Zagrebnov, Regularization and convergence for singular perturbations, Comm. Math. Phys. 149 (1992), no. 3, 573–586. [Ram87] A. G. Ramm, Sufficient conditions for zero not to be an eigenvalue of the Schrödinger operator, J. Math. Phys. 28 (1987), no. 6, 1341–1343. [Rem98] C. Remling, The absolutely continuous spectrum of one-dimensional Schrödinger operators with decaying potentials, Comm. Math. Phys. 193 (1998), no. 1, 151–170. [Rob74] D. W. Robinson, Scattering theory with singular potentials. I. The two-body problem, Ann. Inst. H. Poincaré Sect. A (N.S.) 21 (1974), no. 3, 185–215. [RS78] C. Radin and B. Simon, Invariant domains for the time-dependent Schrödinger equation, J. Differential Equations 29 (1978), no. 2, 289–296. [Sch12] K. Schmüdgen, Unbounded self-adjoint operators on Hilbert space, vol. 265, Springer, 2012. [Sim00] B. Simon, Schrödinger operators in the twentieth century, J. Math. Phys. 41 (2000), no. 6, 3523–3555. [Sto01] P. Stollmann, Caught by disorder: bound states in random media, vol. 20, Springer, 2001. [Yaf92] D. R. Yafaev, Mathematical scattering theory: general theory, American Mathematical Society, Providence, RI, 1992. [Yaf10] D. R. Yafaev, Mathematical scattering theory: analytic theory, American Mathematical Society, Providence, RI, 2010.
Instability of hairy black holes in spontaneously-broken Einstein-Yang-Mills-Higgs systems E. Winstanley, Dept. of Physics (Theoretical Physics), University of Oxford, 1 Keble Road, Oxford OX1 3NP, U.K. N. E. Mavromatos, Laboratoire de Physique Théorique ENSLAPP (URA 14-36 du CNRS, associée à l'E.N.S de Lyon, et au LAPP (IN2P3-CNRS) d'Annecy-le-Vieux), Chemin de Bellevue, BP 110, F-74941 Annecy-le-Vieux Cedex, France. The stability of a new class of hairy black hole solutions of the coupled Einstein-Yang-Mills-Higgs system is examined, generalising a method suggested by Brodbeck, Straumann and collaborators, and by Volkov and Gal'tsov. The method maps the algebraic system of linearised radial perturbations of the various field modes around the black hole solution into a coupled system of radial equations of Schrödinger type. No detailed knowledge of the black hole solution is required, apart from the fact that the boundary conditions at the physical space-time boundaries (horizons) must be such as to guarantee the finiteness of the various expressions involved. In this way, it is demonstrated that the above Schrödinger equations have bound states, which implies the instability of the associated black hole solution. March 1995. On leave from P.P.A.R.C. Advanced Fellowship, Dept. of Physics (Theoretical Physics), University of Oxford, 1 Keble Road, Oxford OX1 3NP, U.K. Coupling gravity to non-linear systems, such as non-Abelian Yang-Mills theory or the non-linear σ-models, has led to interesting (classical) solutions with particle-like [1] or black-hole interpretation [2]. The interest in the latter type of solutions arises mainly from the fact that new types of classical hair have been shown to exist, contrary to the no-hair conjecture characterising purely gravitational or Abelian black holes [3]. This is so because the no-hair theorems do not involve the issue of stability of the solutions in their proof, and therefore in this respect the above classical solutions may be considered as explicit counter-examples to these theorems. In view of this, it is natural to enquire into the stability of the above solutions, which would establish their physical significance. It has been shown that most of these systems, especially the ones admitting a particle-like interpretation, are unstable under perturbations of the various field modes [4]. For the black hole solutions, a corresponding general proof was lacking so far, mainly due to the peculiar behaviour of the stability equations on the horizons. In some cases, however, like the Einstein-Yang-Mills-Higgs (EYMH) system with a Higgs triplet, the Einstein-Skyrme (non-linear σ-model) system, and the Einstein-Yang-Mills-Dilaton theory (inspired from strings), linear stability of the hairy solutions is established [5], although non-linear stability remains an unsettled issue. An interesting class of classical hairy black holes has been found recently in connection with the Einstein-Yang-Mills-Higgs system with a Higgs doublet, as in the standard model [6]. These black hole solutions resemble the sphaleron solutions of gauge theory, and one would expect them to be unstable for topological reasons. Recently, an instability proof of sphaleron solutions for arbitrary gauge groups in the EYM system has been given [7, 8]. The method consists of studying linearised radial perturbations around an equilibrium solution, whose detailed knowledge is not necessary for settling the question of stability.
The stability is examined by mapping the algebraic system of linearised radial perturbations into a coupled system of differential equations of Schrödinger type [7, 8]. As in the particle case of ref. [1], the instability of the solution is established once a bound state of the respective Schrödinger equations is found. The latter shows up as an imaginary frequency in the spectrum, leading to an exponentially growing mode. There is an elegant physical interpretation behind this analysis, which is similar to the Cooper-pair instability of superconductivity. The gravitational attraction balances the non-Abelian gauge field repulsion in the classical solution [1], but the existence of bound states implies imaginary parts in the quantum ground state which lead to instabilities of the solution, in much the same way as the classical ground state of superconductivity is not the absolute minimum of the free energy. However, this method cannot be applied directly to the black hole case, due to divergences occurring in some of the expressions involved. This is a result of the singular behaviour of the metric function at the physical space-time boundaries (horizon) of the black hole. It is the purpose of this note to generalise the method of ref. [7] to incorporate the black hole solution of the EYMH system of ref. [6]. By constructing appropriate trial linear radial perturbations, following refs. [8, 9], we show the existence of bound states in the spectrum of the coupled Schrödinger equations, and thus the instability of the black hole. Detailed knowledge of the black hole solution is not actually required, apart from the fact that the existence of a horizon leads to modifications of the trial perturbations as compared to those of refs. [7, 8], in order to avoid divergences in the respective expressions [9]. We start by sketching the basic steps [7, 9] that lead to a study of the stability of a classical solution with finite energy in a (generic) classical field theory. One considers small perturbations δΦ around the solution Φ₀, and specifies [7] the time-dependence as δΦ(t, r) = e^{iωt} δΦ(r). (1) The linearised system (with respect to such perturbations), obtained from the equations of motion, can be cast into a Schrödinger eigenvalue problem A δΦ = ω² B δΦ, (2) where the operators A, B are assumed independent of the 'frequency' ω. As we shall show later on, this is indeed the case for our black hole solution of the EYMH system. In that case it will also be shown that A is a self-adjoint operator with respect to a properly defined inner (scalar) product in the space of perturbations [7], and the operator B is positive definite, B > 0. A criterion for instability is the existence of an imaginary frequency mode in (2), i.e. a normalisable solution with ω² < 0. (3) This is usually difficult to establish analytically in realistic models, and numerical calculations are usually required [4]. A less informative method which admits analytic treatment has been proposed recently in refs. [7, 9], and we shall follow this for the purposes of the present work. The method consists of a variational approach which makes use of the following functional defined through (2): E[χ] = ⟨χ, Aχ⟩ / ⟨χ, Bχ⟩, (4) with χ a trial function. The lowest eigenvalue ω₀² is known to provide a lower bound for this functional. Thus, the criterion of instability, which is equivalent to (3), in this approach reads ⟨χ, Aχ⟩ < 0 and ⟨χ, Bχ⟩ < ∞. (5) The first of the above conditions implies that the operator A is not positive definite, and therefore negative eigenvalues do exist.
The second condition, on the finiteness of the expectation value of the operator B, is required to ensure that χ lies in the Hilbert space containing the domain of A. In certain cases, especially in the black hole case, there are divergences due to the singular behaviour of modes at, say, the horizons, which could spoil these conditions (5). The advantage of the above variational method lies in the fact that it is an easier task to choose appropriate trial functions satisfying (5) than to solve the original eigenvalue problem (2). In what follows we shall apply this second method to the black hole solution of ref. [6]. We start by reviewing the basic formulas for a study of stability issues of spherically symmetric black hole solutions of the EYMH system [6]. The space-time metric takes the standard static, spherically symmetric form [6], and we assume the usual spherically symmetric ansatz for the non-abelian gauge potential [6, 7], parametrised by profile functions of r, where the generators involved are appropriately normalised spherical generators of the SU(2) group in the notation of ref. [7]. The Higgs doublet assumes a corresponding spherically symmetric form, with the Higgs potential of the usual double-well type, where v denotes the v.e.v. of the Higgs field in the non-trivial vacuum. The profile functions satisfy the static field equations, where the prime denotes differentiation with respect to r. For later use, we also mention that a dot will denote differentiation with respect to t. If we choose a gauge in which the temporal component of the gauge potential vanishes, the linearised perturbation equations decouple into two sectors [7]: the first consists of the gravitational modes, and the second of the matter perturbations. In our analysis it will be sufficient to concentrate on the matter perturbations, setting the gravitational perturbations to zero, because an instability will already show up in this sector of the theory. The equations for the linearised matter perturbations take the form (10) of a coupled Schrödinger-type system [7]. Upon specifying the time-dependence (1) one arrives easily at an eigenvalue problem of the form (2), which can then be extended to the variational approach (5). To this end, we choose as trial perturbations expressions built from a single radial profile function Z(r) to be determined (c.f. [7]). One may define the inner product as a suitable integral over the exterior region r > r_h, where r_h is the position of the horizon of the black hole. The operator A is then symmetric with respect to this scalar product. Following ref. [7], consider the expectation value ⟨χ, Bχ⟩, which is clearly positive definite for real Z. Its finiteness will be examined later, and depends on the choice of the function Z. Next, we proceed to the evaluation of the expectation value ⟨χ, Aχ⟩ of the Hamiltonian; after a tedious calculation one obtains a bulk expression (17) plus boundary terms. The boundary terms will be shown to vanish, so we omit them in the expression (17). The final result consists of three terms, the first of which is manifestly negative. To examine the remaining two, we introduce the 'tortoise' coordinate r*, defined in terms of the metric function in the usual way [9], and define a sequence of trial functions Z_k [9], smooth profiles whose width is controlled by arbitrary positive constants k. Then, for each value of k, the expectation values of B and of A are finite, and all boundary terms vanish. This justifies a posteriori their being dropped in eq. (17). The integrands in the second and third terms of eq. (17) are uniformly convergent and tend to zero as k → ∞. Hence, choosing k sufficiently large, the dominant contribution in (17) comes from the first term, which is negative. This confirms the existence of bound states of the Schrödinger equation (10), (2), and thereby the instability (5) of the associated black hole solution of ref. [6] in the coupled EYMH system.
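The mechanism of the argument just given can be illustrated on a toy model. The sketch below is not the EYMH system: it takes a one-dimensional operator A = -d^2/dx^2 + V with a model attractive well (all parameters arbitrary) and evaluates the quadratic form <chi, A chi> for trial functions of increasing width, showing that the form eventually turns negative, exactly as in the "choose k sufficiently large" step above.

import numpy as np

x = np.linspace(1e-3, 120.0, 24001)
V = -0.5 * np.exp(-(x - 20.0)**2)      # model attractive potential well

def quad_form(chi):
    """<chi, A chi> = integral of (chi'^2 + V chi^2), after integrating by parts."""
    dchi = np.gradient(chi, x)
    return np.trapz(dchi**2 + V * chi**2, x)

for k in [1.0, 2.0, 4.0, 8.0]:          # widen the trial function step by step
    chi = np.exp(-((x - 20.0) / k)**2)  # trial function centred on the well
    print(f"width {k}: <chi, A chi> = {quad_form(chi):+.3f}")

The kinetic (positive) term decays like 1/k while the potential (negative) term tends to a constant, so the form becomes negative for large widths: the operator is then not positive definite and a bound state, i.e. an unstable mode, exists.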
The above analysis reveals the existence of at least one negative odd-parity eigenmode in the spectrum of the EYMH black hole, which implies its instability. The exact number of such negative modes is an interesting question, and we plan to investigate it in the near future. Recently, a method for determining the number of sphaleron-like unstable modes has been applied by Volkov et al. [10] to the gravitating sphaleron case, and one might be able to extend it to the present EYMH black hole. According to the analysis of ref. [10], for EYM black holes there are n unstable sphaleron-like modes under radial perturbations, where n is the number of nodes of the equilibrium solution [1]. This number does not depend on the details of the equilibrium solution, such as the horizon geometry, size etc. This is due to the topological nature of the instabilities. In this respect, we mention that an interesting connection could be made with the global analysis of ref. [11], where catastrophe theory was invoked to provide a way of evaluating the number of unstable modes of certain (non-sphaleron) black hole solutions. From the global analysis of ref. [11] there are other, non-sphaleronic types of non-Abelian black holes, whose 'high entropy' phase is stable. In our analysis, this would imply an extension of the variational approach to incorporate finite-temperature effects for the matter perturbations in non-sphaleron black holes (for sphaleron-like black holes, the topological nature of the instability might complicate the connection with catastrophe theory if the number of unstable modes is independent of the details of the equilibrium solution, as appears to be the case for the EYM system [10]). The finite temperature would be a result of the horizon entropy associated with the black hole in a semi-classical analysis. It might well be that the number of unstable modes of these (non-sphaleron) black holes is somehow affected by the temperature, in the sense that above a 'critical' temperature (corresponding to a certain horizon size) the bound states of the Schrödinger equation (2) disappear, or their number is reduced. This would correspond to the high-entropy 'stable' black holes of ref. [11], in the sense of catastrophe theory. At present, such issues remain open. We hope to come back to these in the near future. We thank K. Tamvakis and P. Kanti for discussions. One of us (E.W.) would like to thank CERN, Theory Division, for the hospitality during the initial stages of this work. She also thanks E.P.S.R.C. (U.K.) for a research studentship. The work of N.E.M. is supported by a EC Research Fellowship, Proposal Nr. ERB4001GT922259. [1] R. Bartnik and J. McKinnon, Phys. Rev. Lett. 61 (1988), 141. [2] P. Bizon, Phys. Rev. Lett. 64 (1990), 2644. [3] C. Misner, K. Thorne and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973); J. Bekenstein, Phys. Rev. D5 (1972), 1239; S. Adler and R. Pearson, Phys. Rev. D18 (1978), 2798. [4] N. Straumann and Z. H. Zhou, Phys. Lett. B237 (1990), 353; ibid. B243 (1991), 53; Nucl. Phys. B369 (1991), 180. [5] See for instance: M. Heusler, N. Straumann and Z. H. Zhou, Helv. Phys. Acta 66 (1993), 614; K.-Y. Lee, V. P. Nair and E. Weinberg, Phys. Rev. Lett. 68 (1992), 1100; M. E. Ortiz, Phys. Rev. D45 (1992), R2586; P. Breitenholder, P. Forgács and D. Maison, Nucl. Phys. B383 (1992), 357; E. E. Donets and D. Gal'tsov, Phys. Lett. B302 (1993), 411; P. Bizon, Act. Phys. Pol. B24 (1993), 1209. [6] B. R. Greene, S. D. Mathur and C. M. O'Neill, Phys.
Rev. D47 (1993), 2242. [7] P. Boschung, O. Brodbeck, F. Moser, N. Straumann and M. Volkov, Phys. Rev. D50 (1994), 3842. [8] O. Brodbeck and N. Straumann, Zürich ETH preprint ZU-TH 38/94 (1994); gr-qc/9411058. [9] M. Volkov and D. Gal'tsov, Phys. Lett. B341 (1995), 279. [10] M. S. Volkov, O. Brodbeck, G. Lavrelashvili and N. Straumann, Zürich ETH preprint ZU-TH 3/95 (1995); hep-th/9502045. [11] K. Maeda, T. Tachizawa, T. Torii and T. Maki, Phys. Rev. Lett. 72 (1994), 450.
2020 Semiconductor Physics Academic unit or major: Undergraduate major in Electrical and Electronic Engineering. Instructors: Miyajima Shinsuke, Nakagawa Shigeki. Day/Period (Room No.): Tue 3-4 (S011), Fri 3-4 (S011). Course description and aims Modern electronics is supported by semiconductor devices, its basic building blocks. A basic understanding of semiconductor physics is required to gain advanced specialized skills such as correctly understanding the operation of semiconductor devices, improving their performance, and designing devices with new electronic functions. This course consists of lectures on the basic concepts of semiconductor physics as an introduction to semiconductor engineering. The course, set as an elementary course among the electrical major courses, builds on such topics as mathematics, electromagnetism, and quantum physics. The learning process will be to visualize and carefully build a basic understanding of electrical conduction, photoreaction, and other topics related to semiconductor physics. The course will include exercises as appropriate in addition to lectures. Through problem exercises, students will model the essence of the complex phenomena occurring inside semiconductors. Through forming and solving basic equations, they will gain a deep understanding of the logical sequence of the approach and master analysis techniques. Further, they will visualize the obtained numerical values to understand them, resulting in an intuitive understanding grounded in basic theory. This course covers the information described below. First, students will learn, based on quantum mechanics, that energy bands form in solids of periodically joined atoms. Then, from the fact that there is a limit to the density of electrons which can exist in them, they will learn about the concept of density of states. Further, they will apply the approach of thermal statistical distribution to obtain the carrier density. From analyzing the motion of electrons in energy bands, students will learn about the concepts of electrons and electron holes, followed by the concepts of impurity doping and of n-type and p-type semiconductors. After understanding the potential distribution of p-n junctions, students will learn the concepts of drift, diffusion, and recombination, which will form a foundation for understanding electrical conduction in solids. Next, after having learned the carrier continuity equations, which are fundamental to analyzing electrical conduction in semiconductors, students will further their learning through the viewpoints of both analytic and numerical solutions, using specific examples of application. Thus, they will gain a basic understanding of the current-voltage characteristics of p-n junctions, essential to understanding semiconductor devices. Student learning outcomes The goal of this course is for students to master the basic physical properties of semiconductors, the basis of semiconductor device engineering, by meeting the learning outcomes below steadily one by one. - Able to list and explain vital crystal structures used often in semiconductor engineering. - Able to express periodic structure in terms of unit cells and unit structures. - Able to specify lattice planes using Miller indices.
- Able to explain the differences between metals and semiconductors (insulators) qualitatively, based on atomic orbitals and the Pauli exclusion principle.
- Able to analyze the motion of electrons in the step potential and the square-well potential using the Schrödinger equation, and to calculate energy levels and existence probabilities.
- Able to calculate the transmission and reflection of electrons and explain their relationship with electric currents.
- Able to explain the formation of an energy band from the existence conditions of the solution of the Schrödinger equation in a periodic potential.
- Able to explain the terms allowed band, forbidden band, valence band, conduction band, and Brillouin zone.
- Able to explain conductive carriers in conduction bands and valence bands.
- Able to approximate the effective mass of electrons and electron holes based on the energy band.
- Able to calculate the density of states in an isotropic three-dimensional solid.
- Able to calculate the concentrations of electrons and electron holes from the distribution law and the density of states, and to explain the temperature dependence of each.
- Able to explain n-type and p-type conduction due to impurity doping and the relationship to majority and minority carriers.
- Able to explain drift current, mobility, diffusion current, and recombination.
- Able to derive the carrier continuity equations and solve them under several simple conditions, including photoreaction.
- Able to express mathematically the form of the potential distribution near the p-n junction boundary under the depletion approximation, to explain it in a diagram, and to find the junction capacitance based on the result.
- Able to derive, by solving the carrier continuity equations based on the conduction model of diffusion and recombination, that the current-voltage characteristics of p-n junctions are exponential.
- Able to show the relationship between electron current and hole current at a p-n junction on a band diagram, and to explain its temperature characteristics.
- Able to explain examples of applications of the pn junction to optoelectronic devices.
- Able to show a band diagram of a metal-semiconductor contact and explain its current-voltage characteristics.

Corresponding educational goals are:
(1) Specialist skills: fundamental specialist skills
(4) Applied skills (inquisitive thinking and/or problem-finding skills): organization and analysis
(7) Skills acquiring a wide range of expertise, and expanding it into more advanced and other specialized areas

Keywords: band structure of solids, square-well potential, electronic states in periodic potential structures, effective mass, density of states of carriers, distribution function, intrinsic carrier concentration, doping, mobility, drift current, diffusion current, carrier recombination, band profile, carrier continuity equation, pn junction, metal-semiconductor junction

Competencies that will be developed: applied specialist skills on EEE

Class flow
Lectures are based on PowerPoint presentation slides. Quizzes or exercise problems are assigned in class.

Course schedule/Required learning
- Class 1: Crystal structure of solids, unit cell, Miller index. Required learning: draw the lattice plane for a given Miller index; find the Miller index of an illustrated lattice plane.
- Class 2: Origin of the energy band structure: from atoms to solids; metals, semiconductors, and insulators. Required learning: explain the origin of band structure from the energy states of the atoms.
- Class 3: Fundamentals of quantum mechanics: the Schrödinger equation; states of an electron confined in a one-dimensional square well with infinite potential. Required learning: show the energy levels and wave functions of an electron confined in an infinite quantum well.
- Class 4: Formation of band structure: Bloch's theorem, allowed band, forbidden band, effective mass, the concepts of electron and hole. Required learning: explain the origin of band structure from a periodic potential using Bloch's theorem.
- Class 5: Fundamentals of statistical mechanics: density of states, Fermi-Dirac distribution function, Fermi level, intrinsic carrier concentration. Required learning: derive the intrinsic carrier concentration of a semiconductor.
- Class 6: Control of carrier concentration: impurity doping, p-type and n-type, electron and hole concentrations, temperature dependence. Required learning: show the carrier concentrations of n- and p-type semiconductors with impurity doping.
- Class 7: Summary of the first half of the course and exercise. Required learning: explain the outline of the course.
- Class 8: Fundamentals of electron transport and the carrier continuity equation. Required learning: explain the drift-diffusion model of electron transport; derive the carrier continuity equation.
- Class 9: Carrier mobility, diffusion coefficient, diffusion length, and surface recombination. Required learning: explain the concepts of carrier mobility, diffusion coefficient, diffusion length, and surface recombination; the behavior of minority carriers in a semiconductor is also explained based on the carrier continuity equation.
- Class 10: Formation of the pn junction: band structure, junction capacitance. Required learning: derive and draw the band profile of a pn junction.
- Class 11: Current-voltage characteristics of the pn junction: understanding from the carrier continuity equation. Required learning: explain the current-voltage characteristics of a pn junction by solving the carrier continuity equation.
- Class 12: Current-voltage characteristics of the pn junction: temperature dependence; applications of pn junctions. Required learning: explain the temperature dependence of the pn junction; show some examples of applications of pn junctions.
- Class 13: Metal-semiconductor contact: band profile, principle of electron transport, current-voltage characteristics. Required learning: explain the metal-semiconductor contact in comparison with the pn junction.
- Class 14: Summary of the course and exercise. Required learning: explain the outline of the course.

Textbook: Kiyoshi Takahasi, Youichi Yamada, "Semiconductor Engineering (3rd edition)", Morikita Publishing
Reference books, course materials, etc.: Makoto Konagai, "Semiconductor Physics", Baifukan
Assessment criteria and methods: quizzes in class (20%); reports after the 3rd, 7th, 10th, and 14th lectures (20% each)
Related courses:
• LAS.C105: Basic Quantum Chemistry
• LAS.C107: Basic Chemical Thermodynamics
• LAS.P103: Fundamentals of Electromagnetism 1
• LAS.P104: Fundamentals of Electromagnetism 2
• EEE.D201: Quantum Mechanics
Prerequisites: Fundamentals of Electromagnetism 1 (EEE.E201) is desirable (not mandatory).
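Two of the derivations this syllabus asks for reduce to closed-form expressions that are easy to check numerically: the energy levels E_n = n²π²ħ²/(2mL²) of the infinite square well (Class 3) and the exponential diode law I = I_s(exp(qV/kT) − 1) that follows from the carrier continuity equations (Classes 11 and 12). The following Python sketch is an illustration of mine, not part of the course materials; the 5 nm well width, the saturation current I_s, and the temperature are arbitrary example values.

```python
import numpy as np

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
Q    = 1.602176634e-19   # elementary charge, C
K_B  = 1.380649e-23      # Boltzmann constant, J/K

def square_well_levels(width_m, n_max=3, mass=M_E):
    """E_n = n^2 pi^2 hbar^2 / (2 m L^2) for a 1-D infinite square well,
    returned in eV (the Class 3 result)."""
    n = np.arange(1, n_max + 1)
    e_joule = (n * np.pi * HBAR) ** 2 / (2.0 * mass * width_m ** 2)
    return e_joule / Q

def ideal_diode_current(v_volt, i_s=1e-12, temp_k=300.0):
    """Ideal-diode law I = I_s * (exp(qV / kT) - 1) obtained from the
    diffusion-recombination conduction model (Classes 11 and 12)."""
    return i_s * np.expm1(Q * v_volt / (K_B * temp_k))

if __name__ == "__main__":
    for n, e in enumerate(square_well_levels(5e-9), start=1):
        print(f"E_{n} = {e * 1e3:.1f} meV (5 nm well)")
    for v in (0.2, 0.4, 0.6):
        print(f"I({v:.1f} V) = {ideal_diode_current(v):.3e} A")
```

For a 5 nm well the ground level comes out near 15 meV, and the diode current grows by roughly one decade per 60 mV of forward bias at room temperature, which is the qualitative behavior the course asks students to explain.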
Where Quantum Probability Comes From
There are many different ways to think about probability. Quantum mechanics embodies them all.
[Illustration: a pink hand reaching for quantum dice. James O’Brien for Quanta Magazine]

In A Philosophical Essay on Probabilities, published in 1814, Pierre-Simon Laplace introduced a notorious hypothetical creature: a “vast intelligence” that knew the complete physical state of the present universe. For such an entity, dubbed “Laplace’s demon” by subsequent commentators, there would be no mystery about what had happened in the past or what would happen at any time in the future. According to the clockwork universe described by Isaac Newton, the past and future are exactly determined by the present.

Laplace’s demon was never supposed to be a practical thought experiment; the imagined intelligence would have to be essentially as vast as the universe itself. And in practice, chaotic dynamics can amplify tiny imperfections in the initial knowledge of the system into complete uncertainty later on. But in principle, Newtonian mechanics is deterministic.

A century later, quantum mechanics changed everything. Ordinary physical theories tell you what a system is and how it evolves over time. Quantum mechanics does this as well, but it also comes with an entirely new set of rules, governing what happens when systems are observed or measured. Most notably, measurement outcomes cannot be predicted with perfect confidence, even in principle. The best we can do is to calculate the probability of obtaining each possible outcome, according to what’s called the Born rule: The wave function assigns an “amplitude” to each measurement outcome, and the probability of getting that result is equal to the amplitude squared. This feature is what led Albert Einstein to complain about God playing dice with the universe.

Researchers continue to argue over the best way to think about quantum mechanics. There are competing schools of thought, which are sometimes referred to as “interpretations” of quantum theory but are better thought of as distinct physical theories that give the same predictions in the regimes we have tested so far. All of them share the feature that they lean on the idea of probability in a fundamental way. Which raises the question: What is “probability,” really?

Like many subtle concepts, probability starts out with a seemingly straightforward, commonsensical meaning, which becomes trickier the closer we look at it. You flip a fair coin many times; whether it comes up heads or tails on any particular trial is completely unknown, but if we perform many trials we expect to get heads 50% of the time and tails 50% of the time. We therefore say that the probability of obtaining heads is 50%, and likewise for tails.

We know how to handle the mathematics of probability, thanks to the work of the Russian mathematician Andrey Kolmogorov and others. Probabilities are real numbers between zero and one, inclusive; the probabilities of an exhaustive set of mutually exclusive events add up to one; and so on. But that’s not the same as deciding what probability actually is.

There are numerous approaches to defining probability, but we can distinguish between two broad classes. The “objective” or “physical” view treats probability as a fundamental feature of a system, the best way we have to characterize physical behavior.
An example of an objective approach to probability is frequentism, which defines probability as the frequency with which things happen over many trials, as in our coin-tossing example.

Alternatively, there are “subjective” or “evidential” views, which treat probability as personal, a reflection of an individual’s credence, or degree of belief, about what is true or what will happen. An example is Bayesian probability, which emphasizes Bayes’ law, a mathematical theorem that tells us how to update our credences as we obtain new information. Bayesians imagine that rational creatures in states of incomplete information walk around with credences for every proposition you can imagine, updating them continually as new data comes in. In contrast with frequentism, in Bayesianism it makes perfect sense to attach probabilities to one-shot events, such as who will win the next election, or even past events that we’re unsure about.

Interestingly, different approaches to quantum mechanics invoke different meanings of probability in central ways. Thinking about quantum mechanics helps illuminate probability, and vice versa. Or, to put it more pessimistically: Quantum mechanics as it is currently understood doesn’t really help us choose between competing conceptions of probability, as every conception has a home in some quantum formulation or other.

Let’s consider three of the leading approaches to quantum theory. There are “dynamical collapse” theories, such as the GRW model proposed in 1985 by Giancarlo Ghirardi, Alberto Rimini and Tullio Weber. There are “pilot wave” or “hidden variable” approaches, most notably the de Broglie-Bohm theory, invented by David Bohm in 1952 based on earlier ideas from Louis de Broglie. And there is the “many worlds” formulation suggested by Hugh Everett in 1957.

Each of these represents a way of solving the measurement problem of quantum mechanics. The problem is that conventional quantum theory describes the state of a system in terms of a wave function, which evolves smoothly and deterministically according to the Schrödinger equation. At least, it does unless the system is being observed; in that case, according to the textbook presentation, the wave function suddenly “collapses” into some particular observational outcome. The collapse itself is unpredictable; the wave function assigns a number to each possible outcome, and the probability of observing that outcome is equal to the value of the wave function squared. The measurement problem is simply: What constitutes a “measurement”? When exactly does it occur? Why are measurements seemingly different from ordinary evolution?

Dynamical-collapse theories offer perhaps the most straightforward resolution to the measurement problem. They posit that there is a truly random component to quantum evolution, according to which every particle usually obeys the Schrödinger equation, but occasionally its wave function will spontaneously localize at some position in space. Such collapses are so rare that we would never observe one for a single particle, but in a macroscopic object made of many particles, collapses happen all the time. This prevents macroscopic objects — like the cat in Schrödinger’s infamous thought experiment — from evolving into an observable superposition. All the particles in a large system will be entangled with each other, so that when just one of them localizes in space, the rest are brought along for the ride. Probability in such models is fundamental and objective.
There is absolutely nothing about the present that precisely determines the future. Dynamical-collapse theories fit perfectly into an old-fashioned frequentist view of probability. What happens next is unknowable, and all we can say is what the long-term frequency of different outcomes will be. Laplace’s demon wouldn’t be able to exactly predict the future, even if it knew the present state of the universe exactly.

Pilot-wave theories tell a very different story. Here, nothing is truly random; the quantum state evolves deterministically, just as the classical state did for Newton. The new element is the concept of hidden variables, such as the actual positions of particles, in addition to the traditional wave function. The particles are what we actually observe, while the wave function serves merely to guide them.

In a sense, pilot-wave theories bring us back to the clockwork universe of classical mechanics, but with an important twist: When we’re not making an observation, we don’t, and can’t, know the actual values of the hidden variables. We can prepare a wave function so that we know it exactly, but we only learn about the hidden variables by observing them. The best we can do is to admit our ignorance and introduce a probability distribution over their possible values.

Probability in pilot-wave theories, in other words, is entirely subjective. It characterizes our knowledge, not an objective frequency of occurrences over time. A full-powered Laplace demon that knew both the wave function and all the hidden variables could predict the future exactly, but a hobbled version that only knew the wave function would still have to make probabilistic predictions.

Then we have many-worlds. This is my personal favorite approach to quantum mechanics, but it’s also the one for which it is most challenging to pinpoint how and why probability enters the game.

Many-worlds quantum mechanics has the simplest formulation of all the alternatives. There is a wave function, and it obeys Schrödinger’s equation, and that’s all. There are no collapses and no additional variables. Instead, we use Schrödinger’s equation to predict what will happen when an observer measures a quantum object in a superposition of multiple possible states. The answer is that the combined system of observer and object evolves into an entangled superposition. In each part of the superposition, the object has a definite measurement outcome and the observer has measured that outcome.

Everett’s brilliant move was simply to say, “And that’s okay” — all we need to do is recognize that each part of the system subsequently evolves separately from all of the others, and therefore qualifies as a separate branch of the wave function, or “world.” The worlds aren’t put in by hand; they were lurking in the quantum formalism all along.

The idea of all those worlds might seem extravagant or distasteful, but those aren’t respectable scientific objections. A more legitimate question is the nature of probability within this approach. In many-worlds, we can know the wave function exactly, and it evolves deterministically. There is nothing unknown or unpredictable. Laplace’s demon could predict the entire future of the universe with perfect confidence. How is probability involved at all?

An answer is provided by the idea of “self-locating,” or “indexical,” uncertainty. Imagine that you are about to measure a quantum system, thus branching the wave function into different worlds (for simplicity, let’s just say there will be two worlds).
It doesn’t make sense to ask, “After the measurement, which world will I be on?” There will be two people, one on each branch, both descended from you; neither has a better claim to being “really you” than the other. But even if both people know the wave function of the universe, there is now something they don’t know: which branch of the wave function they are on. There will inevitably be a period of time after branching occurs but before the observers find out what outcome was obtained on their branch. They don’t know where they are in the wave function. That’s self-locating uncertainty, as first emphasized in the quantum context by the physicist Lev Vaidman.

You might think you could just look at the experimental outcome really quickly, so that there was no noticeable period of uncertainty. But in the real world, the wave function branches incredibly fast, on timescales of 10⁻²¹ seconds or less. That’s far quicker than a signal can even reach your brain. There will always be some period of time when you’re on a certain branch of the wave function, but you don’t know which one.

Can we resolve this uncertainty in a sensible way? Yes, we can, as Charles Sebens and I have argued, and doing so leads precisely to the Born rule: The credence you should attach to being on any particular branch of the wave function is just the amplitude squared for that branch, just as in ordinary quantum mechanics. Sebens and I needed to make a new assumption, which we called the “epistemic separability principle”: Whatever predictions you make for experimental outcomes, they should be unaltered if we only change the wave function for completely separate parts of the system.

Self-locating uncertainty is a different kind of epistemic uncertainty from that featured in pilot-wave models. You can know everything there is to know about the universe, and there’s still something you’re uncertain about, namely where you personally are within it. Your uncertainty obeys the rules of ordinary probability, but it requires a bit of work to convince yourself that there’s a reasonable way to assign numbers to your belief.

You might object that you want to make predictions now, even before branching happens. Then there’s nothing uncertain; you know exactly how the universe will evolve. But included in that knowledge is the conviction that all the future versions of yourself will be uncertain, and they should use the Born rule to assign credences to the various branches they could be on. In that case, it makes sense to act precisely as if you live in a truly stochastic universe, with the frequency of various outcomes given by the Born rule. (David Deutsch and David Wallace have made this argument rigorous using decision theory.)

In one sense, all of these notions of probability can be thought of as versions of self-locating uncertainty. All we have to do is consider the set of all possible worlds — all the different versions of reality one could possibly conceive. Some such worlds obey the rules of dynamical-collapse theories, and each of these is distinguished by the actual sequence of outcomes for all the quantum measurements ever performed. Other worlds are described by pilot-wave theories, and in each one the hidden variables have different values. Still others are many-worlds realities, where agents are uncertain about which branch of the wave function they are on. We might think of the role of probability as expressing our personal credences about which of these possible worlds is the actual one.
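Written compactly (standard textbook notation, not something specific to this essay), the credence assignment that every formulation above converges on is the Born rule: for a normalized state whose components carry amplitudes a_i,

```latex
|\Psi\rangle = \sum_i a_i\,|\text{world } i\rangle,
\qquad \sum_i |a_i|^2 = 1,
\qquad P(\text{world } i) = |a_i|^2 .
```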
The study of probability takes us from coin flipping to branching universes. Hopefully our understanding of this tricky concept will progress hand in hand with our understanding of quantum mechanics itself.
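As a toy illustration of how the frequentist and credence readings meet, here is a short simulation of my own (not from the essay; the qubit amplitudes and trial count are arbitrary choices). Sampling measurement outcomes with probability equal to amplitude squared reproduces the long-run frequencies a frequentist would quote:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy qubit state a0|0> + a1|1>; the amplitudes are arbitrary choices.
a = np.array([0.6, 0.8j])       # complex amplitudes, |a0|^2 + |a1|^2 = 1
born_p = np.abs(a) ** 2         # Born rule: probability = amplitude squared

n_trials = 100_000
outcomes = rng.choice([0, 1], size=n_trials, p=born_p)
freq = np.bincount(outcomes) / n_trials

print("Born-rule probabilities:", born_p)   # [0.36 0.64]
print("Observed frequencies:   ", freq)     # close to [0.36 0.64]
```

However one interprets the numbers, the observed frequencies settle on |a_i|², which is why the competing formulations agree in every regime tested so far.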
Chin. Phys. B (ISSN 1674-1056), 2018, Vol. 27, No. 2

Highlights
• Arbitrated quantum signature scheme with continuous-variable squeezed vacuum states. Yan-Yan Feng(冯艳艳), Rong-Hua Shi(施荣华), Ying Guo(郭迎). Chin. Phys. B, 2018, 27 (2): 020302. (Abstract below.)
• Strontium optical lattice clock at the National Time Service Center. Ye-Bing Wang(王叶兵), Mo-Juan Yin(尹默娟), Jie Ren(任洁), Qin-Fang Xu(徐琴芳), Ben-Quan Lu(卢本全), Jian-Xin Han(韩建新), Yang Guo(郭阳), Hong Chang(常宏). Chin. Phys. B, 2018, 27 (2): 023701. (Abstract below.)
• Magnetocaloric effect in the layered organic-inorganic hybrid (CH₃NH₃)₂CuCl₄. Yinina Ma(马怡妮娜), Kun Zhai(翟昆), Liqin Yan(闫丽琴), Yisheng Chai(柴一晟), Dashan Shang(尚大山), Young Sun(孙阳). Chin. Phys. B, 2018, 27 (2): 027501. We present a study of the magnetocaloric effect of the quasi-two-dimensional (2D) ferromagnet (CH₃NH₃)₂CuCl₄ in the ab plane (easy plane). From measurements of the magnetic-field dependence of the magnetization at various temperatures, we have discovered a large magnetic entropy change associated with the ferro…
• Magnetic field aligned orderly arrangement of Fe₃O₄ nanoparticles in CS/PVA/Fe₃O₄ membranes. Meng Du(杜萌), Xing-Zhong Cao(曹兴忠), Rui Xia(夏锐), Zhong-Po Zhou(周忠坡), Shuo-Xue Jin(靳硕学), Bao-Yi Wang(王宝义). Chin. Phys. B, 2018, 27 (2): 027805. The CS/PVA/Fe₃O₄ nanocomposite membranes with a chainlike arrangement of Fe₃O₄ nanoparticles are prepared by a magnetic-field-assisted solution casting method. The aim of this work is to investigate the relationship between the microstructure of the magnetically anisotropic CS/PVA/Fe₃O₄ membrane and the e…

TOPIC REVIEW—Soft matter and biological physics

Theoretical studies and molecular dynamics simulations on ion transport properties in nanochannels and nanopores
Ke Xiao(肖克), Dian-Jie Li(李典杰), Chen-Xu Wu(吴晨旭)
Chin. Phys. B, 2018, 27 (2): 024702; doi: 10.1088/1674-1056/27/2/024702
Control of ion transport and fluid flow through nanofluidic devices is of primary importance for energy storage and conversion, drug delivery, and a wide range of biological processes. Recent developments in nanotechnology, synthesis techniques, purification technologies, and experiment have led to rapid advances in simulation and modeling studies of ion transport properties. In this review, applications of the Poisson-Nernst-Planck (PNP) equations in analyzing transport properties are presented. Molecular dynamics (MD) studies of the transport properties of ions and fluid flow through nanofluidic devices are reported as well.

Application of microdosimetry on biological physics for ionizing radiation
Dandan Chen(陈丹丹), Liang Sun(孙亮)
Chin. Phys. B, 2018, 27 (2): 028701; doi: 10.1088/1674-1056/27/2/028701
Stochastic characterization of radiation interactions is of importance to cell damage. Microdosimetry investigates the random structure of particle tracks in order to understand dose effects at cellular scales. In this review, we introduce the basic concepts of microdosimetry as well as the experimental methods (TEPC) and Monte Carlo simulations. Three basic biophysical models are interpreted and compared: the target model, the linear-quadratic model, and the microdosimetric-kinetic model. The bottlenecks in current microdosimetry research are also discussed; these call for interdisciplinary contributions from biology, physics, mathematics, computer science, and electrical engineering.

Lipoprotein in cholesterol transport: Highlights and recent insights into its structural basis and functional mechanism
Shu-Yu Chen(陈淑玉), Na Li(李娜), Tao-Li Jin(金桃丽), Lu Gou(缑璐), Dong-Xiao Hao(郝东晓), Zhi-Qi Tian(田芷淇), Sheng-Li Zhang(张胜利), Lei Zhang(张磊)
Chin. Phys. B, 2018, 27 (2): 028702; doi: 10.1088/1674-1056/27/2/028702
Lipoproteins are protein-lipid macromolecular assemblies used to transport lipids in circulation and are key targets in cardiovascular disease (CVD). The highly dynamic lipoprotein molecules can adopt an array of conformations that is crucial to lipid transport along the cholesterol transport pathway, among which high-density lipoprotein (HDL) and low-density lipoprotein (LDL) are major players in plasma cholesterol metabolism. For a more detailed illustration of the cholesterol transport process, as well as for the development of therapies to prevent CVD, we review here the functional mechanism and structural basis of lipoproteins in cholesterol transport, as well as their structural dynamics in plasma lipoprotein (HDL and LDL) elevations, in order to obtain a better quantitative understanding of the structure-function relationship of lipoproteins. Finally, we provide an approach for further research on lipoproteins in cholesterol transport.

Bio-macromolecular dynamic structures and functions, illustrated with DNA, antibody, and lipoprotein
Lu Gou(缑璐), Taoli Jin(金桃丽), Shuyu Chen(陈淑玉), Na Li(李娜), Dongxiao Hao(郝东晓), Shengli Zhang(张胜利), Lei Zhang(张磊)
Chin. Phys. B, 2018, 27 (2): 028708; doi: 10.1088/1674-1056/27/2/028708
Bio-macromolecules, such as proteins and nucleic acids, are the basic materials that perform the fundamental activities required for life. Their structural heterogeneity and dynamic personalities are vital for understanding the underlying functional mechanisms of bio-macromolecules. With the rapid development of advanced technologies such as single-molecule techniques and cryo-electron microscopy (cryo-EM), a growing body of structural details and mechanical properties at the molecular level has significantly raised awareness of basic life processes. In this review, the basic principles of single-molecule methods and cryo-EM are first summarized, to shine a light on developments in these fields. Second, recent progress driven by these two methods in exploring the dynamic structures and functions of DNA, antibodies, and lipoproteins is reviewed. Finally, an outlook is provided for further research on both the dynamic structures and the functions of bio-macromolecules, through single-molecule methods and cryo-EM combined with molecular dynamics simulations.

Surface-tension-confined droplet microfluidics
Xinlian Chen(陈新莲), Han Wu(伍罕), Jinbo Wu(巫金波)
Chin. Phys. B, 2018, 27 (2): 029202; doi: 10.1088/1674-1056/27/2/029202
This article is a concise overview of the developing microfluidic systems named surface-tension-confined droplet microfluidics (STORMs). Different from traditional droplet microfluidics, which generates and confines droplets in three-dimensional (3D) poly(dimethylsiloxane)-based microchannels, STORM systems provide two-dimensional (2D) platforms for the control of droplets. STORM devices utilize surface energy, with methods such as surface chemical modification and mechanical processing, to control the movement of fluid droplets. Various STORM devices have been readily prepared, with distinct advantages over conventional droplet microfluidics, such as a significant reduction of the energy consumption necessary for device operation and facile or even direct introduction of droplets onto a patterned surface without an external driving force such as a micropump, thus increasing the frequency and efficiency of droplet generation. STORM devices can therefore be excellent alternatives in most areas of droplet microfluidics and irreplaceable choices in certain fields. In this review, fabrication methods and strategies, manipulation methods and mechanisms, and the main applications of STORM devices are introduced.

SPECIAL TOPIC—Soft matter and biological physics

Optimizing the atom types of proteins through iterative knowledge-based potentials
Xin-Xiang Wang(汪心享), Sheng-You Huang(黄胜友)
Chin. Phys. B, 2018, 27 (2): 020503; doi: 10.1088/1674-1056/27/2/020503
Knowledge-based scoring functions have been widely used for protein structure prediction and for protein-small molecule and protein-nucleic acid interactions, in which one critical step is to find an appropriate representation of protein structures. A key issue is to determine the minimal protein representation, which is important not only for developing scoring functions but also for understanding the physics of protein folding. Despite significant progress in simplifying residues into alphabets, few studies have addressed the optimal number of atom types for proteins. Here, we have investigated the atom-typing issue by classifying the 167 heavy atoms of proteins through 11 schemes with 1 to 20 atom types based on their physicochemical and functional environments. For each atom-typing scheme, a statistical-mechanics-based iterative method was used to extract atomic distance-dependent potentials from protein structures. The atomic distance-dependent pair potentials for different schemes are illustrated by several typical atom pairs with different physicochemical properties. The derived potentials were also evaluated on a high-resolution test set of 148 diverse proteins for native structure recognition. A crossover was found around the scheme of four atom types in terms of the success rate as a function of the number of atom types, which means that four atom types may be used when investigating the basic folding mechanism of proteins. However, a close examination of typical potentials revealed that 14 atom types are needed to describe protein interactions at the atomic level. The present study will be beneficial for the development of protein-related scoring functions and for the understanding of folding mechanisms.

Thin film dynamics in coating problems using Onsager principle
Yana Di(邸亚娜), Xianmin Xu(许现民), Jiajia Zhou(周嘉嘉), Masao Doi
Chin. Phys. B, 2018, 27 (2): 024501; doi: 10.1088/1674-1056/27/2/024501
A new variational method is proposed to investigate the dynamics of the thin film in a coating flow where a liquid is delivered through a fixed slot gap onto a moving substrate. A simplified ODE system is derived for the evolution of the thin film, whose thickness h_f is asymptotically constant behind the coating front. We calculate the phase diagram as well as the film profiles, approximate the film thickness theoretically, and find agreement with the well-known Ca^(2/3) scaling law.

Capillary filling in closed-end nanotubes
Chen Zhao(赵晨), Jiajia Zhou(周嘉嘉), Masao Doi
Chin. Phys. B, 2018, 27 (2): 024701; doi: 10.1088/1674-1056/27/2/024701
Capillary filling at small length scales is an important process in nanotechnology and microfabrication. When one end of the tube or channel is sealed, it is important to consider the escape of the trapped gas. We develop a dynamic model of capillary filling in closed-end tubes, based on the diffusion-convection equation and Henry's law of gas dissolution. We systematically investigate the filling dynamics for various sets of parameters and compare the results with a previous model which assumes a linear density profile of the dissolved gas and neglects the convective term.

Computational mechanistic investigation of radiation damage of adenine induced by hydroxyl radicals
Rongri Tan(谈荣日), Huixuan Liu(刘慧宣), Damao Xun(寻大毛), Wenjun Zong(宗文军)
Chin. Phys. B, 2018, 27 (2): 027102; doi: 10.1088/1674-1056/27/2/027102
The radiation damage of the adenine base was studied by B3LYP and MP2 methods in the presence of hydroxyl radicals to probe the reactivities of five possible sites of an isolated adenine molecule. Both methods predict that the C8 site is more vulnerable than the other sites. For its covalent bonding with the hydroxyl radicals, B3LYP predicts a barrierless pathway, while MP2 finds a transition state with an energy of 106.1 kJ/mol. For hydroxylation at the C2 site, the barrier was calculated to be 165.3 kJ/mol using the MP2 method. For the dehydrogenation reactions at the five sites of adenine, the B3LYP method predicts that the free energy barrier decreases in the order H8 > H2 > HN62 > HN61 > HN9.

Monitoring the formation of oil-water emulsions with a fast spatially resolved NMR spectroscopy method
Meng-Ting You(游梦婷), Zhi-Liang Wei(韦芝良), Jian Yang(杨健), Xiao-Hong Cui(崔晓红), Zhong Chen(陈忠)
Chin. Phys. B, 2018, 27 (2): 028201; doi: 10.1088/1674-1056/27/2/028201
In the present study, a fast chemical shift imaging (CSI) method is used to dynamically monitor, on the molecular level, the formation of oil-water emulsions and the phase separation of the emulsion phase from the excess water or oil phase. With signals sampled simultaneously from a series of small voxels within a few seconds, high-resolution one-dimensional (1D) ¹H nuclear magnetic resonance (NMR) spectra from different spatial positions can be obtained independently for emulsion systems made inhomogeneous by susceptibility differences among the components. On the basis of the integrals of these ¹H NMR spectra, the obtained profiles explicitly demonstrate the spatial and temporal variations of the oil concentrations. Furthermore, the phase separation time and the length of the oil-water emulsion phase are determined. In addition, the effects of the oil type and the proportion of emulsifier on the emulsification states are inspected. The experimental results indicate that 1D PHASICS (Partial Homogeneity Assisted Inhomogeneity Correction Spectroscopy) provides a helpful and promising alternative for research on dynamic processes and chemical reactions.

Protection-against-water-attack determined difference between strengths of backbone hydrogen bonds in kinesin's neck zipper region
Jing-Yu Qin(覃静宇), Yi-Zhao Geng(耿轶钊), Gang Lü(吕刚), Qing Ji(纪青), Hai-Ping Fang(方海平)
Chin. Phys. B, 2018, 27 (2): 028704; doi: 10.1088/1674-1056/27/2/028704
Docking of the kinesin's neck linker (NL) to the motor domain is the key force-generation process of kinesin. In this process, the β10 portion of the NL forms four backbone hydrogen bonds (HBs) with the motor domain. These backbone HBs show large differences in their effective strengths, whose origins are still unclear. Using the molecular dynamics method, we investigate the stability of the backbone HBs in an explicit water environment. We find that the strength differences of these backbone HBs arise mainly from their relationships with water molecules, which are controlled by the arrangement of the surrounding residue side chains. The arrangement of the residues in the C-terminal part of β10 results in water-attack channels around the backbone HBs in this region. Along these channels, water molecules can directly attack the backbone HBs and make them relatively weak. In contrast, the backbone HB at the N-terminus of β10 is protected by the surrounding hydrophobic and hydrophilic residues, which cooperate positively with the central backbone HB and make it very strong. The intimate relationship revealed here between the effective strength of protein backbone HBs and water should be considered when performing mechanical analyses of protein conformational changes.

Diffusional inhomogeneity in cell cultures
Jia-Zheng Zhang(张佳政), Na Li(李娜), Wei Chen(陈唯)
Chin. Phys. B, 2018, 27 (2): 028705; doi: 10.1088/1674-1056/27/2/028705
Cell migration in cell cultures is found to follow non-Gaussian statistics. We recorded long-term migration patterns of more than six hundred cells located within 28 mm². Our experimental data support the claim that the migration of an individual cell follows Gaussian statistics. Because the cell culture is inhomogeneous, the statistics of the culture as a whole exhibit a non-Gaussian distribution. We find that the normalized histogram of the diffusion velocity has an exponential tail. A simple model based on diffusional inhomogeneity is proposed to explain the exponential distribution of locomotion activity observed in this work. Using numerical calculation, we show that our model is in good agreement with the experimental data.

Noise decomposition algorithm and propagation mechanism in feed-forward gene transcriptional regulatory loop
Rong Gui(桂容), Zhi-Hong Li(李治泓), Li-Jun Hu(胡丽君), Guang-Hui Cheng(程光晖), Quan Liu(刘泉), Juan Xiong(熊娟), Ya Jia(贾亚), Ming Yi(易鸣)
Chin. Phys. B, 2018, 27 (2): 028706; doi: 10.1088/1674-1056/27/2/028706
Feed-forward gene transcriptional regulatory networks, a set of common signal motifs, are widely distributed in biological systems. In this paper, the noise characteristics and propagation mechanisms of various feed-forward gene transcriptional regulatory loops are investigated, including (i) coherent feed-forward loops with AND-gate logic, (ii) coherent feed-forward loops with OR-gate logic, and (iii) incoherent feed-forward loops with AND-gate logic. By introducing a logarithmic gain coefficient and using the linear noise approximation, theoretical formulas for the noise decomposition are derived, and the theoretical results are verified by Gillespie simulation. From the theoretical and numerical results of the noise decomposition algorithm, three general characteristics of noise transmission in these different kinds of feed-forward loops are observed. (i) The two-step noise propagation of the upstream factor is negative in incoherent feed-forward loops with AND-gate logic; that is, the upstream factor can indirectly suppress the noise of downstream factors. (ii) The one-step propagation noise of the upstream factor is non-monotonic in coherent feed-forward loops with OR-gate logic. (iii) When the branch of the feed-forward loop is negatively controlled, the total noise of the downstream factor increases monotonically for all feed-forward loops. These findings are robust to variations of the model parameters. These observations reveal universal rules of noise propagation in feed-forward loops and may contribute to our understanding of the design principles of gene circuits.

TOPICAL REVIEW—Solid-state quantum information processing

Quantum light storage in rare-earth-ion-doped solids
Yi-Lin Hua(华怡林), Zong-Quan Zhou(周宗权), Chuan-Feng Li(李传锋), Guang-Can Guo(郭光灿)
Chin. Phys. B, 2018, 27 (2): 020303; doi: 10.1088/1674-1056/27/2/020303
The reversible transfer of unknown quantum states between light and matter is essential for constructing large-scale quantum networks. Over the last decade, various physical systems have been proposed to realize such quantum memories for light. Solid-state quantum memory based on rare-earth-ion-doped solids has the advantages of reduced setup complexity and high robustness for scalable applications. We describe the methods used to spectrally prepare the quantum memory and to release the photonic excitation on demand. We review the state-of-the-art experiments and discuss prospective applications of this particular system in both quantum information science and fundamental tests of quantum physics.
Quantum information processing with nitrogen-vacancy centers in diamond Gang-Qin Liu(刘刚钦), Xin-Yu Pan(潘新宇) Chin. Phys. B, 2018, 27 (2): 020304 doi: 10.1088/1674-1056/27/2/020304 Full Text: [PDF 3246 KB] (Downloads:1008) RICH HTML Show Abstract Nitrogen-vacancy (NV) center in diamond is one of the most promising candidates to implement room temperature quantum computing. In this review, we briefly discuss the working principles and recent experimental progresses of this spin qubit. These results focus on understanding and prolonging center spin coherence, steering and probing spin states with dedicated quantum control techniques, and exploiting the quantum nature of these multi-spin systems, such as superposition and entanglement, to demonstrate the superiority of quantum information processing. Those techniques also stimulate the fast development of NV-based quantum sensing, which is an interdisciplinary field with great potential applications. Qubits based on semiconductor quantum dots Xin Zhang(张鑫), Hai-Ou Li(李海欧), Ke Wang(王柯), Gang Cao(曹刚), Ming Xiao(肖明), Guo-Ping Guo(郭国平) Chin. Phys. B, 2018, 27 (2): 020305 doi: 10.1088/1674-1056/27/2/020305 Full Text: [PDF 11357 KB] (Downloads:414) RICH HTML Show Abstract Semiconductor quantum dots are promising hosts for qubits to build a quantum processor. In the last twenty years, intensive researches have been carried out and diverse kinds of qubits based on different types of semiconductor quantum dots were developed. Recent advances prove high fidelity single and two qubit gates, and even prototype quantum algorithms. These breakthroughs motivate further research on realizing a fault tolerant quantum computer. In this paper we review the main principles of various semiconductor quantum dot based qubits and the latest associated experimental results. Finally the future trends of those qubits will be discussed. Entangled-photons generation with quantum dots Yuan Li(李远), Fei Ding(丁飞), Oliver G Schmidt Chin. Phys. B, 2018, 27 (2): 020307 doi: 10.1088/1674-1056/27/2/020307 Full Text: [PDF 7456 KB] (Downloads:201) RICH HTML Show Abstract Entanglement between particles is a crucial resource in quantum information processing, an important example of which is the exploitation of entangled photons in quantum communication protocols. Among the different available sources of entangled photons, semiconductor quantum dots (QDs) excel owing to their deterministic emission properties, potential for electrical injections, and direct compatibility with semiconductor manufacturing techniques. Despite the great promises, QD-based sources are far from being ideal. In particular, such sources present several critical issues, which require the overcoming of challenges pertaining to spectral tunability, entanglement fidelity, photon indistinguishability and brightness. In this article, we will discuss the potential solutions to these problems and review the recent progress in the field. Nuclear magnetic resonance for quantum computing: Techniques and recent achievements Tao Xin(辛涛), Bi-Xue Wang(王碧雪), Ke-Ren Li(李可仁), Xiang-Yu Kong(孔祥宇), Shi-Jie Wei(魏世杰), Tao Wang(王涛), Dong Ruan(阮东), Gui-Lu Long(龙桂鲁) Chin. Phys. B, 2018, 27 (2): 020308 doi: 10.1088/1674-1056/27/2/020308 Full Text: [PDF 1442 KB] (Downloads:451) RICH HTML Show Abstract Rapid developments in quantum information processing have been made, and remarkable achievements have been obtained in recent years, both in theory and experiments. 
Coherent control of nuclear spin dynamics is a powerful tool for the experimental implementation of quantum schemes in liquid and solid nuclear magnetic resonance (NMR) system, especially in liquid-state NMR. Compared with other quantum information processing systems, the NMR platform has the advantages such as the long coherence time, the precise manipulation, and well-developed quantum control techniques, which make it possible to accurately control a quantum system with up to 12-qubits. Extensive applications of liquid-state NMR spectroscopy in quantum information processing such as quantum communication, quantum computing, and quantum simulation have been thoroughly studied over half a century. This article introduces the general principles of NMR quantum information processing, and reviews the new-developed techniques. The review will also include the recent achievements of the experimental realization of quantum algorithms for machine learning, quantum simulations for high energy physics, and topological order in NMR. We also discuss the limitation and prospect of liquid-state NMR spectroscopy and the solid-state NMR systems as quantum computing in the article. Cavity optomechanics: Manipulating photons and phonons towards the single-photon strong coupling Yu-long Liu(刘玉龙), Chong Wang(王冲), Jing Zhang(张靖), Yu-xi Liu(刘玉玺) Chin. Phys. B, 2018, 27 (2): 024204 doi: 10.1088/1674-1056/27/2/024204 Full Text: [PDF 600 KB] (Downloads:772) RICH HTML Show Abstract Cavity optomechanical systems provide powerful platforms to manipulate photons and phonons, open potential applications for modern optical communications and precise measurements. With the refrigeration and ground-state cooling technologies, studies of cavity optomechanics are making significant progress towards the quantum regime including nonclassical state preparation, quantum state tomography, quantum information processing, and future quantum internet. With further research, it is found that abundant physical phenomena and important applications in both classical and quantum regimes appeal as they have a strong optomechanical nonlinearity, which essentially depends on the single-photon optomechanical coupling strength. Thus, engineering the optomechanical interactions and improving the single-photon optomechanical coupling strength become very important subjects. In this article, we first review several mechanisms, theoretically proposed for enhancing optomechanical coupling. Then, we review the experimental progresses on enhancing optomechanical coupling by optimizing its structure and fabrication process. Finally, we review how to use novel structures and materials to enhance the optomechanical coupling strength. The manipulations of the photons and phonons at the level of strong optomechanical coupling are also summarized. Superconducting quantum bits Wei-Yang Liu(刘伟洋), Dong-Ning Zheng(郑东宁), Shi-Ping Zhao(赵士平) Chin. Phys. B, 2018, 27 (2): 027401 doi: 10.1088/1674-1056/27/2/027401 Full Text: [PDF 2772 KB] (Downloads:474) RICH HTML Show Abstract Superconducting quantum bits (qubits) and circuits are the leading candidate for the implementation of solid-state quantum computation. They have also been widely used in a variety of studies of quantum physics, atomic physics, quantum optics, and quantum simulation. 
In this article, we will present an overview of the basic principles of the superconducting qubits, including the phase, flux, charge, and transmon (Xmon) qubits, and the progress achieved so far concerning the improvements of the device design and quantum coherence property. Experimental studies in various research fields using the superconducting qubits and circuits will be briefly reviewed. Magneto-optical properties of self-assembled InAs quantum dots for quantum information processing Jing Tang(唐静), Xiu-Lai Xu(许秀来) Chin. Phys. B, 2018, 27 (2): 027804 doi: 10.1088/1674-1056/27/2/027804 Full Text: [PDF 2991 KB] (Downloads:300) RICH HTML Show Abstract Semiconductor quantum dots have been intensively investigated because of their fundamental role in solid-state quantum information processing. The energy levels of quantum dots are quantized and can be tuned by external field such as optical, electric, and magnetic field. In this review, we focus on the development of magneto-optical properties of single InAs quantum dots embedded in GaAs matrix, including charge injection, relaxation, tunneling, wavefunction distribution, and coupling between different dimensional materials. Finally, the perspective of coherent manipulation of quantum state of single self-assembled quantum dots by photocurrent spectroscopy with an applied magnetic field is discussed. Soliton-cnoidal interactional wave solutions for the reduced Maxwell-Bloch equations Li-Li Huang(黄丽丽), Zhi-Jun Qiao(乔志军), Yong Chen(陈勇) Chin. Phys. B, 2018, 27 (2): 020201 doi: 10.1088/1674-1056/27/2/020201 Full Text: [PDF 7754 KB] (Downloads:268) RICH HTML Show Abstract In this paper, we study soliton-cnoidal wave solutions for the reduced Maxwell-Bloch equations. The truncated Painlevé analysis is utilized to generate a consistent Riccati expansion, which leads to solving the reduced Maxwell-Bloch equations with solitary wave, cnoidal periodic wave, and soliton-cnoidal interactional wave solutions in an explicit form. Particularly, the soliton-cnoidal interactional wave solution is obtained for the first time for the reduced Maxwell-Bloch equations. Finally, we present some figures to show properties of the explicit soliton-cnoidal interactional wave solutions as well as some new dynamical phenomena. A local energy-preserving scheme for Zakharov system Qi Hong(洪旗), Jia-ling Wang(汪佳玲), Yu-Shun Wang(王雨顺) Chin. Phys. B, 2018, 27 (2): 020202 doi: 10.1088/1674-1056/27/2/020202 Full Text: [PDF 3258 KB] (Downloads:155) RICH HTML Show Abstract In this paper, we propose a local conservation law for the Zakharov system. The property is held in any local time-space region which is independent of the boundary condition and more essential than the global energy conservation law. Based on the rule that the numerical methods should preserve the intrinsic properties as much as possible, we propose a local energy-preserving (LEP) scheme for the system. The merit of the proposed scheme is that the local energy conservation law can be conserved exactly in any time-space region. With homogeneous Dirchlet boundary conditions, the proposed LEP scheme also possesses the discrete global mass and energy conservation laws. The theoretical properties are verified by numerical results. Energy states of the Hulthen plus Coulomb-like potential with position-dependent mass function in external magnetic fields M Eshghi, R Sever, S M Ikhdair Chin. Phys. 
B, 2018, 27 (2): 020301 doi: 10.1088/1674-1056/27/2/020301 Full Text: [PDF 320 KB] (Downloads:119) RICH HTML Show Abstract We need to solve a suitable exponential form of the position-dependent mass (PDM) Schrödinger equation with a charged particle placed in the Hulthen plus Coulomb-like potential field and under the actions of the external magnetic and Aharonov-Bohm (AB) flux fields. The bound state energies and their corresponding wave functions are calculated for the spatially-dependent mass distribution function of interest in physics. A few plots of some numerical results with respect to the energy are shown. Arbitrated quantum signature scheme with continuous-variable squeezed vacuum states Hot! Chin. Phys. B, 2018, 27 (2): 020302 doi: 10.1088/1674-1056/27/2/020302 Full Text: [PDF 1069 KB] (Downloads:220) RICH HTML Show Abstract We propose an arbitrated quantum signature (AQS) scheme with continuous variable (CV) squeezed vacuum states, which requires three parties, i.e., the signer Alice, the verifier Bob and the arbitrator Charlie trusted by Alice and Bob, and three phases consisting of the initial phase, the signature phase and the verification phase. We evaluate and compare the original state and the teleported state by using the fidelity and the beam splitter (BS) strategy. The security is ensured by the CV-based quantum key distribution (CV-QKD) and quantum teleportation of squeezed states. Security analyses show that the generated signature can be neither disavowed by the signer and the receiver nor counterfeited by anyone with the shared keys. Furthermore, the scheme can also detect other manners of potential attack although they may be successful. Also, the integrality and authenticity of the transmitted messages can be guaranteed. Compared to the signature scheme of CV-based coherent states, our scheme has better encoding efficiency and performance. It is a potential high-speed quantum signature scheme with high repetition rate and detection efficiency which can be achieved by using the standard off-the-shelf components when compared to the discrete-variable (DV) quantum signature scheme. Detecting high-dimensional multipartite entanglement via some classes of measurements Lu Liu(刘璐), Ting Gao(高亭), Fengli Yan(闫凤利) Chin. Phys. B, 2018, 27 (2): 020306 doi: 10.1088/1674-1056/27/2/020306 Full Text: [PDF 242 KB] (Downloads:134) RICH HTML Show Abstract Mutually unbiased bases, mutually unbiased measurements and general symmetric informationally complete measurements are three related concepts in quantum information theory. We investigate multipartite systems using these notions and present some criteria detecting entanglement of arbitrary high dimensional multi-qudit systems and multipartite systems of subsystems with different dimensions. It is proved that these criteria can detect the k-nonseparability (k is even) of multipartite qudit systems and arbitrary high dimensional multipartite systems of m subsystems with different dimensions. We show that they are more efficient and wider of application range than the previous ones. They provide experimental implementation in detecting entanglement without full quantum state tomography. Destroying MTZ black holes with test particles Yu Song(宋宇), Hao Tang(唐浩), De-Cheng Zou(邹德成), Cheng-Yi Sun(孙成一), Rui-Hong Yue(岳瑞宏) Chin. Phys. 
B, 2018, 27 (2): 020401 doi: 10.1088/1674-1056/27/2/020401 Full Text: [PDF 295 KB] (Downloads:122) RICH HTML Show Abstract Neglecting the self-force and radiative effects, we follow the spirit of Wald's gedanken experiment and discuss whether a (2+1)-dimensional Martinez-Teitelboim-Zanelli (MTZ) black hole can turn into a naked singularity by capturing a charged and massive particle. We find that after capturing a charged and massive test particle, an extremal or near-extremal MTZ black hole could turn into naked singularity, leading to a possible violation of the cosmic censorship. There exist ranges of the test particles' energies △E which allow the appearance of naked singularities from both extremal and near extremal MTZ black holes. Current loss of magnetically insulated coaxial diode with cathode negative ion Dan-Ni Zhu(朱丹妮), Jun Zhang(张军), Hui-Huang Zhong(钟辉煌), Jing-Ming Gao(高景明), Zhen Bai(白珍) Chin. Phys. B, 2018, 27 (2): 020501 doi: 10.1088/1674-1056/27/2/020501 Full Text: [PDF 1925 KB] (Downloads:111) RICH HTML Show Abstract Current loss without an obvious impedance collapse in the magnetically insulated coaxial diode (MICD) is studied through experiment and particle-in-cell (PIC) simulation when the guiding magnetic field is strong enough. Cathode negative ions are clarified to be the predominant reason for it. Theoretical analysis and simulation both indicate that the velocity of the negative ion reaches up to 1 cm/ns due to the space potential between the anode and cathode gap (A-C gap). Accordingly, instead of the reverse current loss and the parasitic current loss, the negative ion loss appears during the whole pulse. The negative ion current loss is determined by its ionization production rate. It increases with diode voltage increasing. The smaller space charge effect caused by the beam thickening and the weaker radial restriction both promote the negative ion production under a lower magnetic field. Therefore, as the magnetic field increases, the current loss gradually decreases until the beam thickening nearly stops. Generalized Chaplygin equations for nonholonomic systems on time scales Shi-Xin Jin(金世欣), Yi Zhang(张毅) Chin. Phys. B, 2018, 27 (2): 020502 doi: 10.1088/1674-1056/27/2/020502 Full Text: [PDF 211 KB] (Downloads:110) RICH HTML Show Abstract The generalized Chaplygin equations for nonholonomic systems on time scales are proposed and studied. The Hamilton principle for nonholonomic systems on time scales is established, and the corresponding generalized Chaplygin equations are deduced. The reduced Chaplygin equations are also presented. Two special cases of the generalized Chaplygin equations on time scales, where the time scales are equal to the set of real numbers and the integer set, are discussed. Finally, several examples are given to illustrate the application of the results. Performance study of aluminum shielded room for ultra-low-field magnetic resonance imaging based on SQUID: Simulations and experiments Bo Li(李波), Hui Dong(董慧), Xiao-Lei Huang(黄小磊), Yang Qiu(邱阳), Quan Tao(陶泉), Jian-Ming Zhu(朱建明) Chin. Phys. B, 2018, 27 (2): 020701 doi: 10.1088/1674-1056/27/2/020701 Full Text: [PDF 1015 KB] (Downloads:162) RICH HTML Show Abstract The aluminum shielded room has been an important part of ultra-low-field magnetic resonance imaging (ULF MRI) based on the superconducting quantum interference device (SQUID). The shielded room is effective to attenuate the external radio-frequency field and keep the extremely sensitive detector, SQUID, working properly. 
Generalized Chaplygin equations for nonholonomic systems on time scales
Shi-Xin Jin(金世欣), Yi Zhang(张毅)
Chin. Phys. B, 2018, 27 (2): 020502    doi: 10.1088/1674-1056/27/2/020502
The generalized Chaplygin equations for nonholonomic systems on time scales are proposed and studied. The Hamilton principle for nonholonomic systems on time scales is established, and the corresponding generalized Chaplygin equations are deduced. The reduced Chaplygin equations are also presented. Two special cases of the generalized Chaplygin equations on time scales, where the time scale is the set of real numbers or the set of integers, are discussed. Finally, several examples are given to illustrate the application of the results.

Performance study of aluminum shielded room for ultra-low-field magnetic resonance imaging based on SQUID: Simulations and experiments
Bo Li(李波), Hui Dong(董慧), Xiao-Lei Huang(黄小磊), Yang Qiu(邱阳), Quan Tao(陶泉), Jian-Ming Zhu(朱建明)
Chin. Phys. B, 2018, 27 (2): 020701    doi: 10.1088/1674-1056/27/2/020701
The aluminum shielded room is an important part of ultra-low-field magnetic resonance imaging (ULF MRI) based on the superconducting quantum interference device (SQUID). The shielded room effectively attenuates the external radio-frequency field and keeps the extremely sensitive detector, the SQUID, working properly. A high-performance shielded room can increase the signal-to-noise ratio (SNR) and improve image quality. In this study, a circular coil with a diameter of 50 cm and a square coil with a side length of 2.0 m were used to simulate the magnetic fields from nearby electric apparatuses and from distant environmental noise sources. The shielding effectiveness (SE) of the shielded room with different thicknesses of aluminum sheets was calculated and simulated. A room using 6-mm-thick aluminum plates with dimensions of 1.5 m×1.5 m×2.0 m was then constructed. The SE was measured experimentally by using three-axis SQUID magnetometers, with the transient magnetic field induced in the aluminum plates by the strong pre-polarization pulses. The measured SE agreed with the simulation. In addition, the introduction of a 0.5-mm gap caused an obvious reduction of the SE, indicating the importance of door design. The nuclear magnetic resonance (NMR) signals of water at 5.9 kHz were measured in free space and in the shielded room, and the SNR was improved from 3 to 15. The simulation and experimental results will help us design an aluminum shielded room that satisfies the requirements for future ULF human brain imaging. Finally, a cancellation technique for the transient eddy current was tried; simulation of this technique will help find an appropriate way to suppress the eddy current fields.

Strontium optical lattice clock at the National Time Service Center
Chin. Phys. B, 2018, 27 (2): 023701    doi: 10.1088/1674-1056/27/2/023701
We report the 87Sr optical lattice clock developed at the National Time Service Center. We achieved closed-loop operation of the optical lattice clock based on 87Sr atoms. The linewidth of the spin-polarized clock peak is 3.9 Hz with a clock laser pulse length of 300 ms, which corresponds to a Fourier-limited linewidth of 3 Hz. Fitting of the in-loop error signal data shows that the instability is approximately 5×10^-15 τ^(-1/2), affected primarily by white noise. The fractional frequency difference averages down to 5.7×10^-17 for an averaging time of 3000 s.

Neutron powder diffraction and high-pressure synchrotron x-ray diffraction study of tantalum nitrides
Lei-hao Feng(冯雷豪), Qi-wei Hu(胡启威), Li Lei(雷力), Lei-ming Fang(房雷鸣), Lei Qi(戚磊), Lei-lei Zhang(张雷雷), Mei-fang Pu(蒲梅芳), Zi-li Kou(寇自力), Fang Peng(彭放), Xi-ping Chen(陈喜平), Yuan-hua Xia(夏元华), Yohei Kojima(小岛洋平), Hiroaki Ohfuji(大藤宏明), Duan-wei He(贺端威), Bo Chen(陈波), Tetsuo Irifune(入舩徹男)
Chin. Phys. B, 2018, 27 (2): 026201    doi: 10.1088/1674-1056/27/2/026201
A tantalum nitride (TaN) compact with a Vickers hardness of 26 GPa is prepared by a high-pressure and high-temperature (HPHT) method. The crystal structure and atom occupations of WC-type TaN have been investigated by neutron powder diffraction, and its compressibility has been investigated by in-situ high-pressure synchrotron x-ray diffraction. The third-order Birch-Murnaghan equation of state, fitted to the x-ray diffraction pressure-volume (P-V) data collected up to 41 GPa, yields an ambient-pressure isothermal bulk modulus of B0=369(2) GPa with the pressure derivative fixed at B0'=4 for WC-type TaN. This bulk modulus is not in good agreement with the previous result (B0=351 GPa), but is close to a recent theoretical calculation (B0=378 GPa). An analysis of the experimental results shows that the crystal structure of WC-type TaN can be viewed as alternate stacking of Ta and N layers along the c direction, and the covalent Ta-N bonds between the Ta and N layers along the c axis play an important role in the incompressibility and hardness of WC-type TaN.
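For readers unfamiliar with the fitting form used above, the third-order Birch-Murnaghan equation of state is reproduced below as a small Python sketch; the quoted B0=369 GPa and B0'=4 are taken from the abstract, while V0 and the sampled compressions are illustrative assumptions (with B0'=4 the third-order correction term vanishes).

```python
# A minimal sketch of the 3rd-order Birch-Murnaghan equation of state P(V).
def birch_murnaghan(V, V0, B0, B0p):
    """Pressure in GPa; V0 = zero-pressure volume, B0 in GPa."""
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (x**7 - x**5) * (1.0 + 0.75 * (B0p - 4.0) * (x**2 - 1.0))

B0, B0p = 369.0, 4.0            # from the abstract (B0' fixed at 4)
V0 = 1.0                        # reference volume, arbitrary units (assumed)
for VoverV0 in (1.00, 0.95, 0.90):
    P = birch_murnaghan(VoverV0 * V0, V0, B0, B0p)
    print(f"V/V0 = {VoverV0:.2f} -> P = {P:.1f} GPa")
```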
Chiral p-wave pairing of ultracold fermionic atoms due to a quadratic band touching
Hai-Xiao Wang(王海啸), Zi-Heng Liu(刘子衡), Jian-Hua Jiang(蒋建华)
Chin. Phys. B, 2018, 27 (2): 027402    doi: 10.1088/1674-1056/27/2/027402
We study the superfluid ground state of ultracold fermions in optical lattices with a quadratic band touching. Examples are a checkerboard lattice around half filling and a kagome lattice above one-third filling. Instead of pairing between spin states, here we focus on pairing interactions between different orbital states. We find that our systems have only an odd-parity (orbital) pairing instability, while the singlet (orbital) pairing instability vanishes owing to the quadratic band touching. At the mean-field level, the ground state is found to be a chiral p-wave pairing superfluid (mixed with finite f-wave pairing order parameters) which supports Majorana fermions.

Theoretical analysis of suppressing Dick effect in Ramsey-CPT atomic clock by interleaving lock
Xiao-Lin Sun(孙晓林), Jian-Wei Zhang(张建伟), Peng-Fei Cheng(程鹏飞), Ya-Ni Zuo(左娅妮), Li-Jun Wang(王力军)
Chin. Phys. B, 2018, 27 (2): 023101    doi: 10.1088/1674-1056/27/2/023101
For most pulsed atomic clocks, the Dick effect is one of the main obstacles to reaching the frequency stability limit set by quantum projection noise. In this paper, we measure the phase noise of the local oscillator in the Ramsey-CPT atomic clock and calculate the Dick-effect-induced Allan deviation based on a three-level atomic model, which is quite different from typical atomic clocks. We further present a detailed analysis of optimizing the sensitivity function and minimizing the Dick effect by interleaving lock. By optimizing the duty cycle of the laser pulses, the averaging time during detection, and the optical intensity of the laser beam, the Dick-effect-induced Allan deviation can be reduced to the level of 10^-14.

State-to-state dynamics of the F(^2P)+HO(^2Π)→O(^3P)+HF(^1Σ^+) reaction on the 1^3A″ potential energy surface
Juan Zhao(赵娟), Hui Wu(吴慧), Hai-Bo Sun(孙海波), Li-Fei Wang(王立飞)
Chin. Phys. B, 2018, 27 (2): 023102    doi: 10.1088/1674-1056/27/2/023102
State-to-state time-dependent quantum dynamics calculations are carried out to study the F(^2P)+HO(^2Π)→O(^3P)+HF(^1Σ^+) reaction on the 1^3A″ ground potential energy surface (PES). The vibrationally resolved reaction probabilities and the total integral cross section agree well with previous results. Owing to the heavy-light-heavy (HLH) character of the system and the large exoergicity, an obvious vibrational inversion is found in the state-resolved integral cross sections. The total differential cross section is found to be forward-backward scattering biased with strong oscillations at energies lower than a threshold of 0.10 eV, which indicates an indirect complex-forming mechanism. When the collision energy increases beyond 0.10 eV, the angular distribution of the product becomes strongly forward scattered, and almost all the products are distributed at θ_t=0°.
This forward-peaked distribution can be attributed to the larger J partial waves and the properties of the F atom itself, which make the reaction a direct abstraction process. The state-resolved differential cross sections are basically forward-backward symmetric for v'=0, 1, and 2 at a collision energy of 0.07 eV; at a collision energy of 0.30 eV, they change from backward/sideward scattering to forward peaked as v' increases from 0 to 3. These results indicate that the differential cross sections of the more highly vibrationally excited states dominate the total differential cross sections, which further verifies the vibrational inversion in the products.

Excited state intramolecular proton transfer mechanism of o-hydroxynaphthyl phenanthroimidazole
Shuang Liu(刘爽), Yan-Zhen Ma(马艳珍), Yun-Fan Yang(杨云帆), Song-Song Liu(刘松松), Yong-Qing Li(李永庆), Yu-Zhi Song(宋玉志)
Chin. Phys. B, 2018, 27 (2): 023103    doi: 10.1088/1674-1056/27/2/023103
By utilizing density functional theory (DFT) and time-dependent density functional theory (TDDFT), the excited state intramolecular proton transfer (ESIPT) mechanism of o-hydroxynaphthyl phenanthroimidazole (HNPI) is studied in detail. Upon photoexcitation, the intramolecular hydrogen bond is obviously strengthened in the S1 state, which promotes the ESIPT process. The hydrogen bond is shown to be strengthened by comparing the molecular structures and the infrared vibrational spectra of the S0 and S1 states. Through analyzing the frontier molecular orbitals, we conclude that the excitation is a type of intramolecular charge-transfer excitation, which also indicates the tendency toward proton transfer in the S1 state. The vertical excitation energies based on the TDDFT calculation effectively reproduce the experimental absorption and fluorescence spectra. However, the fluorescence spectrum of the normal structure, which is similar to that of the isomer structure, is not resolved in the experiment; it can be concluded that the fluorescence measured in the experiment is attributed to both structures. In addition, by analyzing the potential energy curves (PECs) calculated with the B3LYP functional, it can be deduced that, since the potential barrier the molecule must cross in the S1 state is smaller than that in the S0 state and the reverse proton transfer in the S1 state is more difficult than in the S0 state, the ESIPT occurs in the S1 state.

Comparisons of electrical and optical properties between graphene and silicene: A review
Wirth-Lima A J, Silva M G, Sombra A S B
Chin. Phys. B, 2018, 27 (2): 023201    doi: 10.1088/1674-1056/27/2/023201
Two-dimensional (2D) metamaterials are considered to be of enormous relevance to the progress of all exact sciences. Since the discovery of graphene, researchers have increasingly investigated in depth the electrical and optical properties of other 2D metamaterials, including silicene. This review includes details and comparisons of the atomic structures, energy band diagrams, substrates, charge densities, charge mobilities, conductivities, absorptions, electrical permittivities, dispersion relations of the wave vectors, and supported electromagnetic modes of graphene and silicene.
Hence, this review can help readers to acquire, recover, or increase the technological basis necessary for the development of more specific studies on graphene and silicene.

Temporal interference driven by an oscillating electric field in photodetachment of H- ion
De-Hua Wang(王德华), Meng-Zhi Li(李梦芝), Hong-Na Song(宋鸿娜), Xiao-Xiao Ren(任笑笑)
Chin. Phys. B, 2018, 27 (2): 023202    doi: 10.1088/1674-1056/27/2/023202
Real-time-domain interferometry for photodetachment dynamics driven by an oscillating electric field is studied for the first time. Both the geometry of the detached electron trajectories and the electron probability density are shown to differ from those of photodetachment dynamics in a static electric field. The influence of the oscillating electric field on the detached electron leads to a surprisingly intricate shape of the electron waves, and multiple interfering trajectories generate complex interference patterns in the electron probability density. Using the semiclassical open-orbit theory, we calculate the interference patterns in the time-dependent electron probability density for different electric field strengths, frequencies, and phases of the oscillating electric field. This method is universal and can be extended to study the photoionization dynamics of atoms in a time-dependent electric field. Our study can guide future experimental research on the photodetachment or photoionization microscopy of negative ions and atoms in oscillating electric fields.

Temperature dependence of line parameters of ^12C^16O2 near 2.004 μm studied by tunable diode laser spectroscopy
Hongliang Ma(马宏亮), Mingguo Sun(孙明国), Shenlong Zha(査申龙), Qiang Liu(刘强), Zhensong Cao(曹振松), Yinbo Huang(黄印博), Zhu Zhu(朱柱), Ruizhong Rao(饶瑞中)
Chin. Phys. B, 2018, 27 (2): 023301    doi: 10.1088/1674-1056/27/2/023301
The absorption spectrum of carbon dioxide at 2.004 μm has been recorded at sample temperatures between 218.0 K and room temperature, using a high-resolution tunable diode laser absorption spectrometer (TDLAS) combined with a temperature-controlled, cryogenically cooled absorption cell. The self-, N2-, and air-broadening coefficients of nine transitions of ^12C^16O2 belonging to the 20012 ← 00001 band in the 4987 cm^-1 to 4998 cm^-1 region have been measured at different temperatures. From these measurements, we have further determined the temperature-dependence exponents of the pressure-broadening coefficients. To the best of our knowledge, the temperature-dependence parameters of the collisional broadening coefficients of these nine transitions are reported experimentally for the first time. The measured halfwidth coefficients and the air temperature-dependence exponents of these transitions are compared with the available values reported in the literature and in the HITRAN 2012 database. Agreements and discrepancies are discussed.
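The temperature-dependence exponents mentioned above enter through the standard power-law model for pressure broadening. Here is a minimal sketch with assumed (not measured) values of the broadening coefficient and exponent:

```python
# A minimal sketch of the standard power-law temperature scaling of
# pressure-broadening coefficients: gamma(T) = gamma(T0) * (T0/T)**n, T0 = 296 K.
def gamma_T(gamma_296, n, T, T0=296.0):
    """Halfwidth coefficient (cm^-1/atm) at temperature T."""
    return gamma_296 * (T0 / T) ** n

gamma_296 = 0.070   # assumed air-broadening coefficient at 296 K, cm^-1/atm
n = 0.70            # assumed temperature-dependence exponent
for T in (218.0, 250.0, 296.0):
    print(f"T = {T:5.1f} K -> gamma = {gamma_T(gamma_296, n, T):.4f} cm^-1/atm")
```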
Responsive mechanism and molecular design of di-2-picolylamine-based two-photon fluorescent probes for zinc ions
Mei-Yu Zhu(朱美玉), Ke Zhao(赵珂), Jun Song(宋军), Chuan-Kui Wang(王传奎)
Chin. Phys. B, 2018, 27 (2): 023302    doi: 10.1088/1674-1056/27/2/023302
The one-photon absorption (OPA), emission, and two-photon absorption (TPA) properties of a di-2-picolylamine-based zinc ion sensor are investigated by employing density functional theory in combination with response functions, and the responsive mechanism is explored. The calculated OPA and TPA properties are found to be quite consistent with experimental data. Because the intramolecular charge transfer (ICT) increases upon zinc ion binding, the TPA intensity is enhanced dramatically. Starting from the model sensor, we design a series of zinc ion probes which differ in conjugation center, acceptor, and donor moieties. The OPA, emission, and TPA properties of the designed molecules are calculated at the same computational level. Our results demonstrate that the OPA and emission wavelengths of the designed probes show large red shifts after zinc ions are bound. Compared with the model sensor, the TPA intensities of the designed probes are enhanced significantly and the absorption positions are red-shifted to a longer wavelength range. Furthermore, the TPA intensity can be improved greatly upon zinc ion binding due to the increased ICT. These compounds are excellent potential candidates for two-photon fluorescent zinc ion probes.

Exploring the methane combustion reaction: A theoretical contribution
Ya Peng(彭亚), Zhong-An Jiang(蒋仲安), Ju-Shi Chen(陈举师)
Chin. Phys. B, 2018, 27 (2): 023401    doi: 10.1088/1674-1056/27/2/023401
This paper attempts to extend the reaction mechanisms and kinetics of the methane combustion reaction. Three saddle points (SPs) are identified in the reaction CH4+O(^3P) → OH + CH3 using the complete active space self-consistent field and multireference configuration interaction methods with a proper active space. Our calculations give a fairly accurate description of the regions around the twin first-order SPs (^3A' and ^3A″) along the direction of O(^3P) attacking a near-collinear H-CH3. One second-order SP (^3E) between the twin SPs results from the C3v-symmetry Jahn-Teller coupling within the branching space. Further kinetic calculations are performed with the canonical unified statistical theory method at temperatures ranging from 298 K to 1000 K, and the rate constants are reported. The present work reveals the mechanism of hydrogen abstraction by O(^3P) from methane and develops a better understanding of the role of the SPs. In addition, a comparison of the reactions of O(^3P) with methane through different channels allows a molecule-level discussion of the reactivity and mechanism of the title reaction.

Higher order harmonics suppression in extreme ultraviolet and soft x-ray
Yong Chen(陈勇), Lai Wei(魏来), Feng Qian(钱凤), Zuhua Yang(杨祖华), Shaoyi Wang(王少义), Yinzhong Wu(巫殷忠), Qiangqiang Zhang(张强强), Quanpin Fan(范全平), Leifeng Cao(曹磊峰)
Chin. Phys. B, 2018, 27 (2): 024101    doi: 10.1088/1674-1056/27/2/024101
Extreme ultraviolet and soft x-ray sources are widely used in various domains, so suppressing higher order harmonics and improving spectral purity are important. This paper describes a novel method of higher-order-harmonic suppression using single-order diffraction gratings in the extreme ultraviolet and soft x-ray regions.
The principle of harmonic suppression with a single-order diffraction grating is described, and an extreme ultraviolet and soft x-ray non-harmonics grating monochromator based on the single-order diffraction grating is designed. Its performance is simulated with optical design software. The emergent beams of the monochromator with different gratings are measured by a transmission grating spectrometer. The results show that the single-order diffraction grating can suppress higher order harmonics effectively, and it is expected to be widely used in synchrotron radiation, diagnostics of laser-induced plasma, and astrophysics.

Sub-external cavity effect and elimination method in laser self-mixing interference wave plate measurement system
Haisha Niu(牛海莎), Yanxiong Niu(牛燕雄), Jianjun Song(宋建军)
Chin. Phys. B, 2018, 27 (2): 024201    doi: 10.1088/1674-1056/27/2/024201
The laser self-mixing interference (SMI) wave plate measurement method is a burgeoning technique owing to its simplicity and efficiency. For a non-coated sample, however, light reflected from the surface can seriously affect the measurement results. To analyze the cause theoretically, a self-consistent model for laser operation with a sub-external and an external cavity is established, and the sub-external cavity formed by the sample and a cavity mirror is shown to be the main error source. A synchronous tuning method is proposed to eliminate the sub-external cavity effect. Experiments are carried out on the synchronously tuned double-external-cavity self-mixing interference system, and the error of the system is in the range of -0.435° to 0.387° compared with an ellipsometer. This research plays an important role in improving the performance and enlarging the application range of laser self-mixing interference systems.

Double-rod metasurface for mid-infrared polarization conversion
Yang Pu(蒲洋), Yi Luo(罗意), Lu Liu(刘路), De He(何德), Hongyan Xu(徐洪艳), Hongwei Jing(景洪伟), Yadong Jiang(蒋亚东), Zhijun Liu(刘志军)
Chin. Phys. B, 2018, 27 (2): 024202    doi: 10.1088/1674-1056/27/2/024202
Resonant responses of metasurfaces enable effective control over the polarization properties of light. In this paper, we demonstrate a double-rod metasurface for broadband polarization conversion in the mid-infrared region. The metasurface consists of a metallic double-rod array separated from a reflecting ground plane by a film of zinc selenide. By superimposing three localized resonances, cross-polarization conversion is achieved over a bandwidth of 16.9 THz around the central frequency of 34.6 THz with a conversion efficiency exceeding 70%. The polarization conversion performance is in qualitative agreement with simulation. The surface current distributions and electric field profiles of the resonant modes are discussed to analyze the underlying physical mechanism. The demonstrated broadband polarization conversion has potential applications in the areas of mid-infrared spectroscopy, communication, and sensing.

Quantum state transfer via a hybrid solid-optomechanical interface
Chin. Phys. B, 2018, 27 (2): 024203    doi: 10.1088/1674-1056/27/2/024203
We propose a scheme to implement quantum state transfer between two distant quantum nodes via a hybrid solid-optomechanical interface.
The quantum state is encoded in a superconducting qubit, transferred successively to a microwave photon and then to an optical photon, which is transmitted to the remote node by cavity leakage; finally the quantum state is transferred to the remote superconducting qubit. High state-transfer efficiency is achieved by a controllable Gaussian pulse sequence and demonstrated numerically with theoretically feasible parameters. Our scheme has the potential to implement unified quantum computing-communication-computing with high fidelity of the microwave-optics-microwave transfer of the quantum state.

Absorption linewidth inversion with wavelength modulation spectroscopy
Yue Yan(颜悦), Zhenhui Du(杜振辉), Jinyi Li(李金义), Ruixue Wang(王瑞雪)
Chin. Phys. B, 2018, 27 (2): 024205    doi: 10.1088/1674-1056/27/2/024205
For absorption linewidth inversion with wavelength modulation spectroscopy (WMS), an optimized WMS spectral line fitting method is demonstrated to infer the absorption linewidth effectively, and analytical expressions relating the Lorentzian linewidth to the first-harmonic peak-to-valley separation and the second-harmonic zero-crossing separation are deduced. The CO2 transition centered at 4991.25 cm^-1 is used to verify the optimized spectral fitting method and the analytical expressions. The results show that the optimized spectral fitting method is able to infer the absorption accurately and computes more than 10 times faster than the commonly used numerical fitting procedure. The second-harmonic zero-crossing separation method calculates even six orders of magnitude faster than the spectral fitting, without losing any accuracy in Lorentzian-dominated cases. Additionally, the linewidth calculated from the second-harmonic zero-crossing separation is preferred because its error is much smaller than that of the first-harmonic peak-to-valley separation method. The presented analytical expressions can also be used in on-line optical sensing applications, electron paramagnetic resonance, and further theoretical characterization of absorption lineshapes.

Spectral redshift of high-order harmonics by adding a weak pulse in the falling part of the trapezoidal laser pulse
Xue-Fei Pan(潘雪飞), Jun Zhang(张军), Shuai Ben(贲帅), Tong-Tong Xu(徐彤彤), Xue-Shen Liu(刘学深)
Chin. Phys. B, 2018, 27 (2): 024206    doi: 10.1088/1674-1056/27/2/024206
We investigate the spectral redshift of high-order harmonics of the H2+ (D2+) molecule by numerically solving the non-Born-Oppenheimer time-dependent Schrödinger equation (TDSE). The results show that the spectral redshift of high-order harmonics can be observed by adding a weak pulse in the falling part of the trapezoidal laser pulse. Compared with the H2+ molecule, the shift of the high-order harmonic generation (HHG) spectrum of the D2+ molecule is more obvious. We employ the spatial distribution of the HHG and time-frequency analysis to illustrate the physical mechanism of the spectral redshift of high-order harmonics.
Influence of intra-cavity loss on transmission characteristics of fiber Bragg grating Fabry-Perot cavity
Di Wang(王迪), Meng Ding(丁孟), Hao-Yang Pi(皮浩洋), Xuan Li(李璇), Fei Yang(杨飞), Qing Ye(叶青), Hai-Wen Cai(蔡海文), Fang Wei(魏芳)
Chin. Phys. B, 2018, 27 (2): 024207    doi: 10.1088/1674-1056/27/2/024207
A theoretical model of the fiber Bragg grating Fabry-Perot (FBG-FP) transmission spectrum that considers the loss of the FBGs and the intra-cavity fiber is presented. Several types of FBG-FPs are inscribed experimentally, and their spectra are measured. The results confirm that weak intra-cavity loss is enhanced at the resonance transmission peaks; that is, the loss at the transmission peaks is observably larger than at other wavelengths. For FBG-FPs with multiple resonance peaks, the closer the resonance peak wavelength is to the Bragg wavelength, the more significant the loss effect of the resonance transmission peak. The measured spectra are fitted with the presented theoretical model, and the fitted coefficients of determination are near 1, which proves the validity of the model. This study can be applied to measure FBG loss more accurately, without a reference light, and can play an important role in optimizing FBG and FBG-FP writing processes and application parameters.
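The loss-enhancement effect reported above can be illustrated with generic Fabry-Perot algebra. The sketch below is a simplified two-mirror model with assumed reflectivities, not the paper's full FBG-FP model; it shows that a 0.5% single-pass loss barely affects off-resonance transmission but cuts the resonance peak by several percent:

```python
# A minimal sketch: power transmission of a lossy Fabry-Perot cavity,
#   T(phi) = (1-R1)(1-R2)*A / |1 - sqrt(R1*R2)*A*exp(i*phi)|^2,
# where A is the single-pass power transmission of the intra-cavity fiber
# and phi is the round-trip phase (phi = 0 at resonance).
import numpy as np

def transmission(phi, R1, R2, A):
    num = (1.0 - R1) * (1.0 - R2) * A
    den = np.abs(1.0 - np.sqrt(R1 * R2) * A * np.exp(1j * phi)) ** 2
    return num / den

R1 = R2 = 0.90          # FBG power reflectivities (assumed)
A = 0.995               # 0.5% single-pass intra-cavity loss (assumed)
for label, phi in (("on resonance ", 0.0), ("off resonance", np.pi)):
    print(f"{label}: T = {transmission(phi, R1, R2, A):.4f} "
          f"(lossless: {transmission(phi, R1, R2, 1.0):.4f})")
```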
Fabrication of mixed perovskite organic cation thin films via controllable cation exchange
Yu-Long Zhao(赵宇龙), Jin-Feng Wang(王进峰), Ben-Guang Zhao(赵本广), Chen-Chen Jia(贾晨晨), Jun-Peng Mou(牟俊朋), Lei Zhu(朱磊), Jian Song(宋健), Xiu-Quan Gu(顾修全), Ying-Huai Qiang(强颖怀)
Chin. Phys. B, 2018, 27 (2): 024208    doi: 10.1088/1674-1056/27/2/024208
We demonstrate a facile technique for creating mixed formamidinium (HN=CHNH3+, FA+) and methylammonium (CH3NH3+, MA+) cations in lead iodide perovskite. The technique entails drop-casting formamidinium iodide (FAI) solutions onto as-prepared MAPbI3 perovskite thin films under controlled conditions, which leads to controllable displacement of the MA+ cations by FA+ cations in the perovskite structure at room temperature. Uniform and controllable mixed organic cation perovskite thin films without a "bi-layered" or graded structure are achieved. By applying this approach to photovoltaic devices, we are able to improve device performance by extending the optical-absorption onset further into the infrared region to enhance solar-light harvesting. Additionally, this work provides a simple and efficient technique to tune the structural, electrical, and optoelectronic properties of light-harvesting materials for high-performance perovskite solar cells.

Multiple off-axis acoustic vortices generated by dual coaxial vortex beams
Wen Li(李雯), Si-Jie Dai(戴思捷), Qing-Yu Ma(马青玉), Ge-Pu Guo(郭各朴), He-Ping Ding(丁鹤平)
Chin. Phys. B, 2018, 27 (2): 024301    doi: 10.1088/1674-1056/27/2/024301
As a special kind of acoustic field, the helical wavefront of an acoustic vortex (AV) beam exhibits a pressure null with a phase singularity at the center of the transverse plane. The orbital angular momentum of AVs can be applied to particle manipulation, which is attracting more and more attention in acoustics research. In this paper, using a simplified circular array of point sources, dual coaxial AV beams are excited by the even- and odd-numbered sources with topological charges lE and lO based on the phase-coded approach, and a composite acoustic field with an on-axis center-AV and multiple off-axis sub-AVs can be generated by the superposition of the AV beams for |lE| ≠ |lO|. The generation of the edge phase dislocation is theoretically derived and numerically analyzed for lE=-lO. The numbers, topological charges, and locations of the center-AV and sub-AVs are demonstrated and proved to be determined by the topological charges of the coaxial AV beams. The proposed approach breaks through the limit of only one on-axis AV with a single topological charge along the beam axis, and also establishes the feasibility of off-axis particle trapping with multiple AVs in object manipulation.
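To make the phase-coded superposition concrete, here is a toy numerical model (my own, with assumed geometry and frequency, not the authors' code): a ring of point sources whose even- and odd-numbered members carry topological charges lE and lO, and whose summed field can be scanned for on-axis and off-axis pressure nulls.

```python
# A minimal sketch: transverse pressure field of a phase-coded ring array.
# Even-numbered sources carry charge lE, odd-numbered ones lO; each source
# phase is l * (source azimuth), the standard vortex phase coding.
import numpy as np

k = 2 * np.pi / 0.01      # wavenumber for an assumed 1 cm wavelength
Rsrc = 0.05               # ring radius, m (assumed)
z = 0.10                  # observation plane distance, m (assumed)
N, lE, lO = 32, 2, 1      # number of sources and topological charges

n = np.arange(N)
phi_n = 2 * np.pi * n / N
src = np.stack([Rsrc * np.cos(phi_n), Rsrc * np.sin(phi_n), np.zeros(N)], axis=1)
charge = np.where(n % 2 == 0, lE, lO)          # even/odd phase coding
phase0 = charge * phi_n                        # source phase = l * azimuth

x = y = np.linspace(-0.03, 0.03, 201)
X, Y = np.meshgrid(x, y)
P = np.zeros_like(X, dtype=complex)
for (sx, sy, sz), p0 in zip(src, phase0):      # sum of monopole point sources
    r = np.sqrt((X - sx) ** 2 + (Y - sy) ** 2 + (z - sz) ** 2)
    P += np.exp(1j * (k * r + p0)) / r

print("|p| at beam center:", np.abs(P[100, 100]))   # near-null for nonzero charges
```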
Influence of mode conversions in the skull on transcranial focused ultrasound and temperature fields utilizing the wave field separation method: A numerical study
Xiang-Da Wang(王祥达), Wei-Jun Lin(林伟军), Chang Su(苏畅), Xiu-Ming Wang(王秀明)
Chin. Phys. B, 2018, 27 (2): 024302    doi: 10.1088/1674-1056/27/2/024302
Transcranial focused ultrasound is a booming noninvasive therapy for brain stimulation. The Kelvin-Voigt equations are employed to calculate the sound field created by focusing a 256-element planar phased array through a monkey skull with the time-reversal method. Mode conversions between compressional and shear waves exist in the skull; therefore, the wave field separation method is introduced to calculate the contributions of the two waves to the acoustic intensity and the heat source, respectively. The Pennes equation is used to describe the temperature field induced by ultrasound. Five computational models with the same incident angle of 0° and different distances between the focus and the skull, and three computational models with different incident angles and the same distance between the focus and the skull, are studied. Numerical results indicate that, for all computational models, the acoustic intensity at the focus with mode conversions is on average 12.05% less than that without mode conversions; for the temperature rise, this percentage is 12.02%. Besides, an underestimation of both the acoustic intensity and the temperature rise in the skull tends to occur if mode conversions are ignored. However, if the incident angle exceeds 30°, the rules of the over- and under-estimation may be reversed. Moreover, shear waves contribute on average 20.54% of the acoustic intensity and 20.74% of the temperature rise in the skull for all computational models. The percentage of the temperature rise in the skull from shear waves declines as the duration of the ultrasound increases.

Numerical study on discharge characteristics influenced by secondary electron emission in capacitive RF argon glow discharges by fluid modeling
Lu-Lu Zhao(赵璐璐), Yue Liu(刘悦), Tagra Samir
Chin. Phys. B, 2018, 27 (2): 025201    doi: 10.1088/1674-1056/27/2/025201
A one-dimensional (1D) fluid model of capacitive RF argon glow discharges between two parallel-plate electrodes at low pressure is employed, and the influence of secondary electron emission on the plasma characteristics is investigated numerically with the model. The results show that as the secondary electron emission coefficient increases, the cycle-averaged electric field remains almost unchanged; the cycle-averaged electron temperature in the bulk plasma also changes little, but it increases in the two sheath regions; and the cycle-averaged ionization rate, electron density, electron current density, ion current density, and total current density all increase. The cycle-averaged secondary electron fluxes on the electrode surfaces, as well as the electron flux, secondary electron flux, and ion flux on the powered electrode, likewise increase with the secondary electron emission coefficient. The cycle-averaged electron pressure heating, electron Ohmic heating, electron heating, and ion heating in the two sheath regions increase as the secondary electron emission coefficient increases, as does the cycle-averaged electron energy loss.

Electric field in two-dimensional complex plasma crystal: Simulated lattices
Behnam Bahadory
Chin. Phys. B, 2018, 27 (2): 025202    doi: 10.1088/1674-1056/27/2/025202
We study two-dimensional complex plasma crystals by molecular dynamics simulation. Using rigid walls as the confinement force, we produce square and rectangular crystals and report various types of two-row crystals. Narrow and long crystals are likely to be used as wigglers; therefore, we simulate such crystals and analyze their electric fields. A slight change in the lattice parameters, namely the number of grains, the grain charge, and the length and width of the crystal, can change the internal structures of the crystals and their electric fields notably. With the help of the electric fields, we show the details of the crystal structures.

Schamel equation in an inhomogeneous magnetized sheared flow plasma with q-nonextensive trapped electrons
Shaukat Ali Shan, Qamar-ul-Haque
Chin. Phys. B, 2018, 27 (2): 025203    doi: 10.1088/1674-1056/27/2/025203
An investigation is carried out to understand the properties of ion-acoustic (IA) solitary waves in an inhomogeneous magnetized electron-ion plasma with field-aligned sheared flow under the impact of q-nonextensive trapped electrons. The Schamel equation and its stationary solitary-wave solution are obtained for this inhomogeneous plasma. It is shown that the amplitude of the IA solitary waves increases with higher trapping efficiency (β), while the width remains almost the same. Further, the amplitude of the solitary waves decreases with enhanced normalized drift speed, shear flow parameter, and population of energetic particles. The size of the nonlinear solitary structures is calculated to be a few hundred meters, and the present results are useful for understanding the solar wind plasma.

Dust charging and levitating in a sheath of plasma containing energetic particles
Jing Ou(欧靖), Xiao-Yun Zhao(赵晓云), Bin-Bin Lin(林滨滨)
Chin. Phys. B, 2018, 27 (2): 025204    doi: 10.1088/1674-1056/27/2/025204
The structure of the sheath in the presence of energetic particles is investigated in the multi-fluid framework. Based on the orbital motion limited (OML) theory, the charging of a dust grain inside the sheath of a plasma containing energetic particles is examined for a carbon wall, and the effect of the energetic particles on a stationary dust particle inside the sheath is then discussed through the trapping potential energy. It is found that with increasing energetic ion concentration or energy, the size of dust grains that can stay in levitation equilibrium decreases and the levitating position moves much closer to the wall. With deuterium ions as the energetic ions, larger dust particles can be trapped by the sheath than with hydrogen ions. When an energetic electron component is present, the levitating position of a dust particle in the sheath depends strongly on the energetic electrons: the levitating dust particle moves closer to the wall as the energetic electron energy or concentration increases. In addition, with increasing temperature of the thermal background ions, the size of dust particles trapped by the sheath decreases and the levitating positions of dust particles of the same radius move toward the wall. Our results can be helpful for investigating the properties of sheaths in which an energetic particle component is present.
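As background to the OML charging used above, here is a minimal sketch of the textbook current-balance calculation, for a plain hydrogen plasma with assumed temperatures and without the paper's energetic-particle populations or sheath profile:

```python
# A minimal sketch: floating potential of a spherical dust grain from balancing
# the OML electron and ion currents, I_e ~ exp(phi/Te), I_i ~ (1 - phi/Ti),
# with phi < 0 in volts and Te, Ti in eV. Density and area factors cancel.
import numpy as np
from scipy.optimize import brentq

e, me, mi = 1.602e-19, 9.109e-31, 1.673e-27   # SI; hydrogen ions (assumed)
Te, Ti = 2.0, 0.1                              # electron/ion temperatures, eV (assumed)

def current_balance(phi):
    Ie = np.sqrt(Te / me) * np.exp(phi / Te)   # common prefactors cancel out
    Ii = np.sqrt(Ti / mi) * (1.0 - phi / Ti)
    return Ie - Ii

phi_f = brentq(current_balance, -50.0, -1e-6)
print(f"floating potential: phi_f = {phi_f:.2f} V = {phi_f / Te:.2f} kTe/e")

# Grain charge from the capacitance model Q = 4*pi*eps0*a*phi_f:
a, eps0 = 1e-6, 8.854e-12                      # grain radius 1 um (assumed)
print(f"grain charge: about {4 * np.pi * eps0 * a * (-phi_f) / e:.0f} electrons")
```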
A fast emittance measurement unit for high intensity DC beam
Ai-Lin Zhang(张艾霖), Hai-Tao Ren(任海涛), Shi-Xiang Peng(彭士香), Tao Zhang(张滔), Yuan Xu(徐源), Jing-Feng Zhang(张景丰), Jia-Mei Wen(温佳美), Wen-Bin Wu(武文斌), Zhi-Yu Guo(郭之虞), Jia-Er Chen(陈佳洱)
Chin. Phys. B, 2018, 27 (2): 025205    doi: 10.1088/1674-1056/27/2/025205
A combined unit able to measure the current and emittance of a high intensity direct current (DC) ion beam is developed at Peking University (PKU). It is a multi-slit single-wire (MSSW) beam emittance meter combined with a water-cooled Faraday cup, named the high intensity beam emittance measurement unit-6 (HIBEMU-6). It takes about 15 seconds to complete one measurement of the beam current and its emittance. The emittance of a 50-mA@50-kV DC proton beam is measured.

Rayleigh-Taylor instability at spherical interfaces of incompressible fluids
Hong-Yu Guo(郭宏宇), Li-Feng Wang(王立锋), Wen-Hua Ye(叶文华), Jun-Feng Wu(吴俊峰), Ying-Jun Li(李英骏), Wei-Yan Zhang(张维岩)
Chin. Phys. B, 2018, 27 (2): 025206    doi: 10.1088/1674-1056/27/2/025206
The Rayleigh-Taylor instability (RTI) of three incompressible fluids with two interfaces in spherical geometry is treated analytically. The growth rates on the two interfaces and the perturbation feedthrough coefficients between the two spherical interfaces are derived. For low-mode perturbations, the feedthrough effect from the outer interface to the inner interface is much more severe than in the corresponding planar case, while the feedback from the inner interface to the outer interface is smaller than in planar geometry. The low-mode perturbations lead to pronounced RTI growth on the inner interface of a spherical shell that is larger than the cylindrical and planar results. It is the low-mode perturbation that causes the difference between the RTI growth in spherical and cylindrical geometry. When the mode number of the perturbation is large enough, the results in cylindrical geometry are recovered.

Analytical studies on the evolution processes of rarefied deuterium plasma shell Z-pinch by PIC and MHD simulations
Cheng Ning(宁成), Xiao-Qiang Zhang(张小强), Yang Zhang(张扬), Shun-Kai Sun(孙顺凯), Chuang Xue(薛创), Zhi-Xing Feng(丰志兴), Bai-Wen Li(李百文)
Chin. Phys. B, 2018, 27 (2): 025207    doi: 10.1088/1674-1056/27/2/025207
In this paper, we analytically explore and compare the magnetic field and mass density evolutions obtained in particle-in-cell (PIC) and magnetohydrodynamics (MHD) simulations of a rarefied deuterium shell Z-pinch; we also study the effects of an artificially increased Spitzer resistivity on the magnetic field evolution and the Z-pinch dynamics in the MHD simulation. Before 45 ns, there are significant differences between the mass density profiles of the PIC and MHD simulations in this study. However, after the shock forms in the PIC simulation, the mass density profile is similar to that in the MHD simulation when a multiplier of 2 is used to modify the Spitzer resistivity. Compared with the magnetic field profiles of the PIC simulation of the shell, the magnetic field diffusion is still not sufficiently captured in the MHD simulation, even though the convergence ratios become the same when larger multipliers are applied to the resistivity. The MHD results suggest that the magnetic field diffusion is greatly enhanced by increasing the Spitzer resistivity, which, however, changes the implosion character from shock compression to weak-shock, or even shockless, evolution and expedites the expansion of the shell. Using too large a multiplier to modify the resistivity is therefore not recommended in some Z-pinch applications, such as Z-pinch driven inertial confinement fusion (ICF) in a dynamic hohlraum. A two-fluid or Hall MHD model, or even a PIC/fluid hybrid simulation, should be considered a more suitable physical model when plasma regions with very low density exist in the simulated domain.
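For orientation, the quantity being artificially scaled above is the classical Spitzer resistivity. Here is a minimal sketch using the standard formulary expression with assumed plasma parameters, not values from the paper:

```python
# A minimal sketch: parallel Spitzer resistivity, approximately
# eta ~ 5.2e-5 * Z * lnLambda / Te^(3/2) Ohm*m with Te in eV, the quantity
# the MHD runs scale by an artificial multiplier.
def eta_spitzer(Te_eV, Z=1.0, lnLambda=10.0):
    """Parallel Spitzer resistivity in Ohm*m; Z and lnLambda are assumed."""
    return 5.2e-5 * Z * lnLambda / Te_eV ** 1.5

for Te in (1.0, 10.0, 100.0):
    eta = eta_spitzer(Te)
    print(f"Te = {Te:6.1f} eV -> eta = {eta:.2e} Ohm*m, "
          f"x2 multiplier -> {2 * eta:.2e} Ohm*m")
```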
Effect of actuating frequency on plasma assisted detonation initiation
Si-Yin Zhou(周思引), Xue-Ke Che(车学科), Di Wang(王迪), Wan-Sheng Nie(聂万胜)
Chin. Phys. B, 2018, 27 (2): 025208    doi: 10.1088/1674-1056/27/2/025208
To study the influence of the actuating frequency on plasma assisted detonation initiation by alternating-current dielectric barrier discharge, a loosely coupled method is used to simulate the detonation initiation process of a hydrogen-oxygen mixture in a detonation tube at different actuating frequencies. Both the discharge products and the plasma-assisted detonation forming process are analyzed. It is found that the patterns of the temporal and spatial distributions of the discharge products in one cycle are not changed by the actuating frequency. However, the concentration of every species decreases as the actuating frequency rises, and atomic O is the most sensitive to this variation, which is related to the decrease of discharge power. With respect to the reacting flow in the detonation tube, the deflagration-to-detonation transition (DDT) time and distance both increase as the actuating frequency rises, but the degree of its effect on DDT development during the flow field evolution is erratic. Generally, the actuating frequency affects neither the amplitudes of the pressure, temperature, and species concentrations of the flow field nor the combustion degree within the reaction zone.

Structural, magnetic properties, and electronic structure of hexagonal FeCoSn compound
Yong Li(李勇), Xue-Fang Dai(代学芳), Guo-Dong Liu(刘国栋), Zhi-Yang Wei(魏志阳), En-Ke Liu(刘恩克), Xiao-Lei Han(韩小磊), Zhi-Wei Du(杜志伟), Xue-Kui Xi(郗学奎), Wen-Hong Wang(王文洪), Guang-Heng Wu(吴光恒)
Chin. Phys. B, 2018, 27 (2): 026101    doi: 10.1088/1674-1056/27/2/026101
The structural and magnetic properties and electronic structures of hexagonal FeCoSn compounds in as-annealed bulk and ribbon states were investigated by x-ray powder diffraction (XRD), differential scanning calorimetry (DSC), transmission electron microscopy (TEM), scanning electron microscopy (SEM), magnetic measurements, and first-principles calculations. The results indicate that both states of FeCoSn show an Ni2In-type hexagonal structure with a small amount of an FeCo-rich secondary phase. The Curie temperatures are located at 257 K and 229 K, respectively, and the corresponding magnetizations are 2.57 μB/f.u. and 2.94 μB/f.u. at 5 K in a field of 50 kOe (1 Oe=79.5775 A·m^-1). The orbital hybridizations between the 3d elements are analyzed from the distribution of the density of states (DOS), showing that the Fe atoms carry the main magnetic moments and determine the electronic structure around the Fermi level. A peak in the DOS at the Fermi level accounts for the presence of the FeCo-rich secondary phase. The Ni2In-type hexagonal FeCoSn compound can be used in isostructural alloying for tuning phase transitions.

Light trapping and optical absorption enhancement in vertical semiconductor Si/SiO2 nanowire arrays
Ying Wang(王莹), Xin-Hua Li(李新化)
Chin. Phys. B, 2018, 27 (2): 026102    doi: 10.1088/1674-1056/27/2/026102
The full potential of the optical absorption properties of silicon (Si) semiconductor nanowire (NW) arrays must be further exploited before they become available for mainstream applications in optoelectronic devices. In this paper, we demonstrate both experimentally and theoretically that an SiO2 coating can substantially improve the absorption of light in Si NW arrays. When the transparent SiO2 shell is coated on the outer layer of a Si NW, the incident light penetrates better into the absorbing NW core. We provide a detailed theoretical analysis using finite-difference time-domain (FDTD) simulations. It is demonstrated that, by increasing the thickness of the dielectric shell, we achieve 1.72 times stronger absorption in the coated NWs than in uncoated NWs.

Theoretical study on electronic structure and thermoelectric properties of PbSxTe1-x (x=0.25, 0.5, and 0.75) solid solution
Yong Lu(鲁勇), Kai-yue Li(李开跃), Xiao-lin Zhang(张晓林), Yan Huang(黄艳), Xiao-hong Shao(邵晓红)
Chin. Phys. B, 2018, 27 (2): 026103    doi: 10.1088/1674-1056/27/2/026103
The electronic structure and thermoelectric (TE) properties of PbSxTe1-x (x=0.25, 0.5, and 0.75) solid solutions have been studied by combining first-principles calculations with semi-classical Boltzmann theory. The special quasi-random structure (SQS) method is used to model the PbSxTe1-x solid solutions and produces reasonable electronic structures with respect to experimental results. The maximum zT value reaches 1.67 for p-type PbS0.75Te0.25 and 1.30 for PbS0.5Te0.5 at 800 K. The performance of p-type PbSxTe1-x is superior to that of the n-type materials, mainly owing to the higher effective mass of the carriers.
The zT values of the PbSxTe1-x solid solutions are higher than those of pure PbTe and PbS, in which the combination of low thermal conductivity and high power factor plays an important role.

Thermal conductivity of carbon nanotube superlattices: Comparative study with defective carbon nanotubes
Kui-Kui Zhou(周魁葵), Ning Xu(徐宁), Guo-Feng Xie(谢国锋)
Chin. Phys. B, 2018, 27 (2): 026501    doi: 10.1088/1674-1056/27/2/026501
We use molecular dynamics simulation to calculate the thermal conductivities of (5, 5) carbon nanotube superlattices (CNTSLs) and defective carbon nanotubes (DCNTs) of the same size. It is found that the thermal conductivity of a DCNT is lower than that of a CNTSL at the same concentration of Stone-Wales (SW) defects. We analyze the heat current autocorrelation functions and observe phonon coherent resonance in CNTSLs, but not in DCNTs. The phonon vibrational eigenmode analysis reveals that all phonon modes are strongly localized by SW defects. The degree of localization in CNTSLs is lower than in DCNTs, because the phonon coherent resonance produces a phonon tunneling effect in the longitudinal phonon modes. The results are helpful for understanding and tuning the thermal conductivity of carbon nanotubes by defect engineering.
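The heat-current autocorrelation analysis mentioned above is the core of the Green-Kubo route to thermal conductivity. The following sketch runs that post-processing on a synthetic heat-current trace (fake data with an assumed correlation time, so the printed number only illustrates the procedure, not a real conductivity):

```python
# A minimal sketch of Green-Kubo post-processing: the thermal conductivity is
#   kappa = V / (kB * T^2) * integral_0^inf <J(0)J(t)> dt,
# evaluated here on an exponentially correlated stand-in for an MD heat current.
import numpy as np

kB, T, V, dt = 1.380649e-23, 300.0, 1e-26, 1e-15   # SI units; V assumed
n, m, tau = 100_000, 4_000, 1e-12                  # steps, ACF length, corr. time

rng = np.random.default_rng(0)
a = np.exp(-dt / tau)
J = np.zeros(n)                                    # fake heat-current trace
for i in range(1, n):
    J[i] = a * J[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
J *= 6e10                                          # arbitrary amplitude

acf = np.array([np.mean(J[: n - k] * J[k:]) for k in range(m)])   # HCACF
kappa = V / (kB * T * T) * np.cumsum(acf) * dt                    # running integral
print(f"kappa from the converged tail: {kappa[-1]:.1f} (illustrative W/(m K))")
```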
Effect of isotope doping on phonon thermal conductivity of silicene nanoribbons: A molecular dynamics study
Run-Feng Xu(徐润峰), Kui Han(韩奎), Hai-Peng Li(李海鹏)
Chin. Phys. B, 2018, 27 (2): 026801    doi: 10.1088/1674-1056/27/2/026801
Silicene, a silicon analogue of graphene, has attracted increasing research attention in recent years because of its unique electrical and thermal conductivities. In this study, the phonon thermal conductivity of silicene nanoribbons (SNRs) and the effect of isotopic doping on it are investigated using molecular dynamics simulations. The calculated thermal conductivities are approximately 32 W/mK and 35 W/mK for armchair-edged and zigzag-edged SNRs, respectively, showing anisotropic behavior. Isotope doping induces mass disorder in the lattice, which increases phonon scattering and thus reduces the thermal conductivity. The phonon thermal conductivity of isotopically doped SNRs depends on the concentration and arrangement pattern of the dopants. A maximum reduction of about 15% is obtained at 50% random isotopic doping with 30Si. In addition, ordered doping (i.e., an isotope superlattice) leads to a much larger reduction in thermal conductivity than random doping at the same doping concentration; in particular, the periodicity of the doping superlattice structure has a significant influence on the thermal conductivity of SNRs. Phonon spectrum analysis is also used to qualitatively explain the mechanism of the thermal conductivity changes induced by isotopic doping. This study highlights the importance of isotopic doping in tuning the thermal properties of silicene, thus guiding defect engineering of the thermal properties of two-dimensional silicon materials.

Thermoelectric properties of two-dimensional hexagonal indium-VA
Jing-Yun Bi(毕京云), Li-Hong Han(韩利红), Qian Wang(王倩), Li-Yuan Wu(伍力源), Ruge Quhe(屈贺如歌), Peng-Fei Lu(芦鹏飞)
Chin. Phys. B, 2018, 27 (2): 026802    doi: 10.1088/1674-1056/27/2/026802
The electrical and thermoelectric (TE) properties of monolayer In-VA compounds are investigated theoretically by combining the first-principles method with Boltzmann transport theory. The ultralow intrinsic thermal conductivities of 2.64 W·m^-1·K^-1 (InP), 1.31 W·m^-1·K^-1 (InAs), 0.87 W·m^-1·K^-1 (InSb), and 0.62 W·m^-1·K^-1 (InBi) evaluated at room temperature are close to the typical thermal conductivity values of good TE materials (κ < 2 W·m^-1·K^-1). Maximal ZT values of 0.779, 0.583, 0.696, 0.727, and 0.373 are calculated at 900 K for p-type InN, InP, InAs, InSb, and InBi, respectively, which makes In-VA a potential TE material for operation at medium-high temperatures.
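For context, the dimensionless figure of merit quoted above is ZT = S^2·σ·T/κ. Below is a one-function sketch with an assumed Seebeck coefficient and electrical conductivity; only the κ values are taken from the abstract, and the electronic part of κ is neglected, so the outputs are illustrative rather than the paper's results:

```python
# A minimal sketch of the thermoelectric figure of merit ZT = S^2*sigma*T/kappa.
def figure_of_merit(S, sigma, kappa, T):
    """S in V/K, sigma in S/m, kappa in W/(m K), T in K."""
    return S**2 * sigma * T / kappa

S = 150e-6        # Seebeck coefficient, V/K (assumed)
sigma = 5e4       # electrical conductivity, S/m (assumed)
T = 900.0         # temperature from the abstract, K
for kappa in (0.62, 1.31, 2.64):     # lattice kappa values from the abstract
    print(f"kappa = {kappa:4.2f} W/(m K) -> ZT = "
          f"{figure_of_merit(S, sigma, kappa, T):.2f}")
```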
Electronic structures and optical properties of HfO2-TiO2 alloys studied by first-principles GGA+U approach
Jin-Ping Li(李金平), Song-He Meng(孟松鹤), Cheng Yang(杨程), Han-Tao Lu(陆汉涛), Takami Tohyama(遠山貴巳)
Chin. Phys. B, 2018, 27 (2): 027101    doi: 10.1088/1674-1056/27/2/027101
The phase diagram of the HfO2-TiO2 system shows that when the Ti content is less than 33.0 mol% the system is monoclinic; when the Ti content increases from 33.0 mol% to 52.0 mol% it is orthorhombic; and when the Ti content exceeds 52.0 mol% it adopts the rutile phase. We therefore consider these three phases of HfO2-TiO2 alloys with different Ti contents. The electronic structures and optical properties of the monoclinic, orthorhombic, and rutile phases of HfO2-TiO2 alloys are obtained by the first-principles generalized gradient approximation (GGA)+U approach, and the effects of Ti content and crystal structure on the electronic structures and optical properties are investigated. By simultaneously introducing the Coulomb interactions of the 5d orbitals on Hf (U1d), the 3d orbitals on Ti (U2d), and the 2p orbitals on O (Up), we can improve the calculated band gaps; the U1d, U2d, and Up values are 8.0 eV, 7.0 eV, and 6.0 eV for both the monoclinic and orthorhombic phases, and 8.0 eV, 7.0 eV, and 3.5 eV for the rutile phase. The electronic structures and optical properties of the HfO2-TiO2 alloys calculated by GGA+U1d (U1d=8.0 eV)+U2d (U2d=7.0 eV)+Up (Up=6.0 eV or 3.5 eV) are compared with available experimental results.

Intersubband optical absorption of electrons in double parabolic quantum wells of AlxGa1-xAs/AlyGa1-yAs
Shu-Fang Ma(马淑芳), Yuan Qu(屈媛), Shi-Liang Ban(班士良)
Chin. Phys. B, 2018, 27 (2): 027103    doi: 10.1088/1674-1056/27/2/027103
Some realizable structures of double parabolic quantum wells (DPQWs) consisting of AlxGa1-xAs/AlyGa1-yAs are constructed to discuss theoretically the optical absorption due to intersubband transitions of electrons, for both symmetric and asymmetric cases with three conduction-band energy levels. The electronic states in these structures are obtained using a finite element difference method. Based on a compact density matrix approach, the optical absorption induced by intersubband transitions of electrons at room temperature is discussed. The results reveal that the peak positions and heights of the intersubband optical absorption coefficients (IOACs) of DPQWs are sensitive to the barrier thickness, depending on the Al content. Furthermore, external electric fields reduce the peak heights and play an important role in the blue shifts of the absorption spectra due to electrons excited from the ground state to the first and second excited states. It is found that the IOAC peaks are smaller in asymmetric DPQWs than in symmetric ones. The results also indicate that the tunable range of incident photon energy for a DPQW is larger than for a square well of similar size. Our results are helpful for experiments and device fabrication.

Characteristic modification by inserted metal layer and interface graphene layer in ZnO-based resistive switching structures
Hao-Nan Liu(刘浩男), Xiao-Xia Suo(索晓霞), Lin-Ao Zhang(张林奥), Duan Zhang(张端), Han-Chun Wu(吴汉春), Hong-Kang Zhao(赵宏康), Zhao-Tan Jiang(江兆潭), Ying-Lan Li(李英兰), Zhi Wang(王志)
Chin. Phys. B, 2018, 27 (2): 027104    doi: 10.1088/1674-1056/27/2/027104
The ZnO-based resistive switching (RS) device Ag/ZnO/TiN and its modified structures Ag/ZnO/Zn/ZnO/TiN and Ag/graphene/ZnO/TiN were prepared, and the effects of Zn layers inserted in the ZnO matrix and of an interface graphene layer on the resistive switching characteristics were studied. It is found that metal ions, oxygen vacancies, and the interface are all involved in the RS process. A thin inserted Zn layer can increase the resistance of the high-resistance state (HRS) and enhance the resistance ratio. A graphene interface layer between the ZnO layer and the top electrode can block carrier transport and enhance the resistance ratio severalfold. The results suggest feasible routes for tailoring the resistive switching performance of ZnO-based structures.

Observation of nonconservation characteristics of radio frequency noise mechanism of 40-nm n-MOSFET
Jun Wang(王军), Xiao-Mei Peng(彭小梅), Zhi-Jun Liu(刘志军), Lin Wang(王林), Zhen Luo(罗震), Dan-Dan Wang(王丹丹)
Chin. Phys. B, 2018, 27 (2): 027201    doi: 10.1088/1674-1056/27/2/027201
Bias non-conservation characteristics of the radio-frequency noise mechanism of a 40-nm n-MOSFET are observed by modeling and measuring its drain current noise. A compact model for the drain current noise of the 40-nm MOSFET is proposed through noise analysis. This model fully describes the three main physical sources that determine the noise mechanism of the 40-nm MOSFET: the intrinsic drain current noise, the thermal noise induced by the gate parasitic resistance, and the coupling thermal noise induced by the substrate parasitic effect. The accuracy of the proposed model is verified by noise measurements; the intrinsic drain current noise is shown to be suppressed shot noise, and with decreasing gate voltage the degree of suppression gradually decreases until it vanishes. The most important findings on the bias non-conservative nature of the noise mechanism of the 40-nm n-MOSFET are as follows. (i) In the strong inversion region, the suppressed shot noise is weakly affected by the thermal noise of the gate parasitic resistance; therefore, the channel excess noise can be empirically modeled as suppressed shot noise. (ii) In the middle inversion region, the noise is almost entirely shot noise. (iii) In the weak inversion region, the thermal noise is strongly frequency-dependent and is almost controlled by the capacitive coupling of the substrate parasitic resistance. Measurement results over a wide temperature range demonstrate that the thermal noise of the 40-nm n-MOSFET exists from the weak to the strong inversion region, contrary to the predictions of the suppressed shot noise model, which is only suitable for the strong and middle inversion regions. These new findings on the noise mechanism of the 40-nm n-MOSFET are very beneficial for its applications in ultra-low-voltage and low-power RF, such as novel device electronic structure optimization, integrated circuit design, and process technology evaluation.
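The suppressed shot noise discussed above has a simple white-spectrum form. Here is a minimal sketch with an assumed drain current; the Fano factors are illustrative, not the paper's extracted values:

```python
# A minimal sketch: full shot noise has the white power spectral density
# S_I = 2*q*I_D, and "suppressed" shot noise is S_I = 2*q*I_D*F with a
# Fano factor F < 1, the form referred to in the abstract.
q = 1.602e-19                     # elementary charge, C

def shot_noise_psd(I_D, F=1.0):
    """Drain-current noise PSD in A^2/Hz for drain current I_D (A)."""
    return 2.0 * q * I_D * F

I_D = 1e-3                        # assumed drain current, 1 mA
for F in (1.0, 0.5, 0.2):         # F = 1: full shot noise; F < 1: suppressed
    print(f"F = {F:.1f} -> S_I = {shot_noise_psd(I_D, F):.2e} A^2/Hz")
```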
Highly stable two-dimensional graphene oxide: Electronic properties of its periodic structure and optical properties of its nanostructures
Qin Zhang(张琴), Hong Zhang(张红), Xin-Lu Cheng(程新路)
Chin. Phys. B, 2018, 27 (2): 027301    doi: 10.1088/1674-1056/27/2/027301
Based on first-principles simulations, we theoretically predict a type of stable single-layer graphene oxide (C2O). Using density functional theory (DFT), C2O is found to be a direct-gap semiconductor. In addition, we obtain the absorption spectra of the periodic structure of C2O, which show optical anisotropy. To study the optical properties of C2O nanostructures, time-dependent density functional theory (TDDFT) is used. The C2O nanostructure has a strong absorption near 7 eV when the incident light is polarized along the armchair edge. We also find that the optical properties can be controlled by the edge configuration and the size of the C2O nanostructure. As the elongation strain increases, the range of light absorption becomes wider and the absorption spectrum red-shifts.

A transparent electromagnetic-shielding film based on one-dimensional metal-dielectric periodic structures
Ya-li Zhao(赵亚丽), Fu-hua Ma(马富花), Xu-feng Li(李旭峰), Jiang-jiang Ma(马江将), Kun Jia(贾琨), Xue-hong Wei(魏学红)
Chin. Phys. B, 2018, 27 (2): 027302    doi: 10.1088/1674-1056/27/2/027302
In this study, we designed and fabricated optical materials consisting of alternating ITO and Ag layers. This approach is a promising way to obtain a lightweight, ultrathin, and transparent shielding medium that not only transmits visible light but also inhibits the transmission of microwaves, despite the fact that the total thickness of the Ag films is much larger than the skin depth in the visible range and less than that in the microwave region. Theoretical results suggest that a high dielectric/metal thickness ratio can broaden the band and improve the transmittance in the optical range; accordingly, the central wavelength was found to be red-shifted with increasing dielectric/metal thickness ratio. A physical mechanism behind the controlled transmission of visible light is also proposed. Meanwhile, the electromagnetic shielding effectiveness of the prepared structures exceeds 40 dB in the range from 0.1 GHz to 18 GHz, even reaching 70 dB at 0.1 GHz, which is far higher than that of a single ITO film of the same thickness.
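The skin-depth comparison invoked above is easy to quantify at microwave frequencies. A minimal sketch using the standard formula with the textbook conductivity of silver (the frequencies match the measurement band quoted in the abstract; the conductivity value is an assumption, not the film's measured value):

```python
# A minimal sketch: classical skin depth delta = sqrt(2 / (mu0 * sigma * omega)),
# showing that a ~100-nm-scale Ag stack is far thinner than the microwave skin
# depth, which is the regime the ITO/Ag multilayer exploits for shielding.
from math import pi, sqrt

mu0 = 4e-7 * pi
sigma_Ag = 6.3e7                  # DC conductivity of silver, S/m (textbook value)

def skin_depth(f_Hz, sigma=sigma_Ag):
    return sqrt(2.0 / (mu0 * sigma * 2.0 * pi * f_Hz))

for f, label in ((0.1e9, "0.1 GHz"), (18e9, "18 GHz")):
    print(f"{label}: delta = {skin_depth(f) * 1e6:.2f} um")
```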
From the measurements of the magnetic field dependence of magnetization at various temperatures, we have discovered a large magnetic entropy change associated with the ferromagnetic-paramagnetic transition. The heat capacity measurements reveal an abnormal adiabatic change below the Curie temperature Tc ~ 8.9 K, which is caused by the quasi-2D layered crystal structure. These results suggest that perovskite organic-inorganic hybrids with a layered structure are suitable candidates as working substances in magnetic refrigeration technology.

First-principles study of polarization and piezoelectricity behavior in tetragonal PbTiO3-based superlattices
Zhenye Zhu(朱振业)
Chin. Phys. B, 2018, 27 (2): 027701 doi: 10.1088/1674-1056/27/2/027701
Using first-principles calculations, the contributions of the A-site and B-site atoms to the polarization and the piezoelectric coefficient d33 in tetragonal PbTiO3/KNbO3 and PbTiO3/LaAlO3 superlattices are investigated in this paper. It is shown that the PbTiO3/KNbO3 superlattice has a larger polarization and d33 than the PbTiO3/LaAlO3 superlattice, because there is stronger charge transfer between the A(B)-site atoms and the oxygen atoms in the PbTiO3/KNbO3 superlattice. In the PbTiO3/KNbO3 superlattice, the B-site atoms (Ti, Nb) contribute more to the total polarization and d33 than the A-site atoms (Pb, K) because of the strong covalent interactions between the transition metals (Ti, Nb) and the oxygen atoms, while the piezoelectricity in the PbTiO3/LaAlO3 superlattice is mainly ascribed to the piezoelectric contributions of the Pb and Ti atoms in the PbTiO3 component. Furthermore, by calculating the proportion of the piezoelectric contribution from the PbTiO3 component in the superlattices, we find that this contribution responds differently to strain in the two superlattices, but its value remains larger than 50%. In the PbTiO3/KNbO3 superlattice, the c-axis strain reduces the proportion, especially under tensile conditions. Meanwhile, in the PbTiO3/LaAlO3 superlattice, PbTiO3 plays the leading role in the total d33, especially under compressive conditions, and the proportion decreases as the tensile strain increases.

Multiple broadband magnetoelectric response in Terfenol-D/PZT structure
Jian-Biao Wen(文建彪), Juan-Juan Zhang(张娟娟), Yuan-Wen Gao(高原文)
Chin. Phys. B, 2018, 27 (2): 027702 doi: 10.1088/1674-1056/27/2/027702
In this paper, a novel magnetoelectric (ME) composite structure is proposed, and the ME response of the structure is measured at bias magnetic fields up to 2000 Oe (1 Oe = 79.5775 A·m^-1) and at excitation frequencies of the alternating magnetic field ranging from 1 kHz to 200 kHz. The ME voltage of each PZT layer is detected. According to the measurement results, phase differences are observed among the three channels, and a multi-peak phenomenon appears in each channel. Meanwhile, the results show that the ME structure can maintain a relatively high ME response over a wide bandwidth. In addition, the hysteresis loops of the three PZT layers are observed. When the frequency of the alternating-current (AC) magnetic field changes, the maximum value of the ME coefficient appears in different layers due to the multiple vibration modes of the structure. Moreover, a finite element analysis is performed to evaluate the resonant frequency of the structure, and the theoretical results agree well with the experimental ones.
The experimental results suggest that the proposed structure may be a good candidate for designing broadband magnetic field sensors.

Optimizing effective phase modulation in coupled double quantum well Mach-Zehnder modulators
Guang-Hui Wang(王光辉), Jin-Ke Zhang(张金珂)
Chin. Phys. B, 2018, 27 (2): 027801 doi: 10.1088/1674-1056/27/2/027801
We report optimal phase modulation based on enhanced electro-optic effects in a Mach-Zehnder (MZ) modulator constructed from AlGaAs/GaAs coupled double quantum well (CDQW) waveguides with optical gain. The net refractive index change between the two arms of the CDQW MZ modulator is derived by both the electronic polarization method and the normal-surface method. The numerical results show that a very large refractive index change, over 10^-1, can be obtained, making the phase modulation in the CDQW MZ modulator highly efficient. Notably, a very small voltage-length product for a π phase shift, Vπ×L0 = 0.0226 V·mm, is obtained by optimizing the bias electric field and the CDQW structural parameters, which is about seven times smaller than that in single-quantum-well MZ modulators. These properties open an avenue for CDQW nanostructures in device applications such as electro-optical switches and phase modulators.

Optically induced abnormal terahertz absorption in black silicon
Dong-Wei Zhai(翟东为), Hai-Ling Liu(刘海玲), Xxx Sedao, Yu-Ping Yang(杨玉平)
Chin. Phys. B, 2018, 27 (2): 027802 doi: 10.1088/1674-1056/27/2/027802
The absorption responses of blank silicon and black silicon (silicon with micro/nano-conical surface structures) wafers to an 808-nm continuous-wave (CW) laser are investigated at room temperature by terahertz time-domain spectroscopy. The transmission of the blank silicon shows an appreciable change from the ground state to the pumped state, with the amplitude varying by up to 50%, while that of the black silicon (BS) with different cone sizes is observed to be more stable. Furthermore, the terahertz transmission through BS is observed to be strongly dependent on the size of the conical structure geometry. The conductivities of blank silicon and BS are extracted from the experimental data with and without pumping. The non-photo-excited conductivities increase with increasing frequency and agree well with the Lorentz model, whereas the photo-excited conductivities decrease with increasing frequency and fit well with the Drude-Smith model. Indeed, for BS, the conductivity, electron density, and mobility are found to correlate closely with the size of the conical structure. This is attributed to the influence of spatial confinement on carrier excitation: the carriers excited at the BS conical surface show a stronger localization effect, with backscattering behavior in small-sized microstructures and a higher recombination rate due to increased interactions and collisions with electrons, interfaces, and grain boundaries.

Investigation of europium(Ⅲ)-doped ZnS for immunoassay
Chao-Fan Zhu(朱超凡), Xue Sha(沙雪), Xue-Ying Chu(楚学影), Jin-Hua Li(李金华), Ming-Ze Xu(徐铭泽), Fang-Jun Jin(金芳军), Zhi-Kun Xu(徐志堃)
Chin. Phys. B, 2018, 27 (2): 027803 doi: 10.1088/1674-1056/27/2/027803
Biofunctional europium(Ⅲ)-doped ZnS (ZnS:Eu) nanocrystals are prepared by a sol-gel method.
The characteristic luminescence of ZnS:Eu is used as a probe signal to realize sensitive immunoassay. The luminescence intensity of the Eu3+ in the ZnS matrix shows strong concentration dependence, and the optimal doping concentration is 4%. However, the emission wavelengths of the ZnS:Eu nanocrystals depend on neither the doping concentration nor the temperature (from 100 K to 300 K). Our results show that these features allow for reliable immunoassay. Human immunoglobulin, used as a target analyte, is captured by the antibody-modified ZnS:Eu probe and is finally enriched on a gold substrate for detection. High specificity of the assay is demonstrated by control experiments. The linear detection range is 10 nM-800 nM, and the detection limit is about 9.6 nM.

Magnetic field aligned orderly arrangement of Fe3O4 nanoparticles in CS/PVA/Fe3O4 membranes
Chin. Phys. B, 2018, 27 (2): 027805 doi: 10.1088/1674-1056/27/2/027805
CS/PVA/Fe3O4 nanocomposite membranes with a chainlike arrangement of Fe3O4 nanoparticles are prepared by a magnetic-field-assisted solution casting method. The aim of this work is to investigate the relationship between the microstructure of the magnetically anisotropic CS/PVA/Fe3O4 membrane and the resulting macroscopic physicochemical properties. At the same doping content, the relative crystallinity of CS/PVA/Fe3O4-M is lower than that of CS/PVA/Fe3O4. Fourier transform infrared spectroscopy (FT-IR) measurements indicate that there is no chemical bonding between the polymer molecules and the Fe3O4 nanoparticles. The Fe3O4 nanoparticles in CS/PVA/Fe3O4 and CS/PVA/Fe3O4-M are wrapped by the CS/PVA chains, which is also confirmed by scanning electron microscopy (SEM) and x-ray diffraction (XRD) analysis. The saturation magnetization of CS/PVA/Fe3O4-M increases markedly compared with that of the non-magnetically-aligned membrane, while the transmittance decreases in the UV-visible region. The o-Ps lifetime distribution provides information about the free-volume nanoholes present in the amorphous region. It is suggested that the microstructure of the CS/PVA/Fe3O4 membrane can be modified during its curing process under a magnetic field, which could affect the magnetic properties and the transmittance of the nanocomposite membrane. In brief, a full understanding of the relationship between the microstructure and the macroscopic properties of the CS/PVA/Fe3O4 nanocomposite plays a vital role in exploring and designing novel multifunctional materials.

Suppression of electron and hole overflow in GaN-based near-ultraviolet laser diodes
Yao Xing(邢瑶), De-Gang Zhao(赵德刚), De-Sheng Jiang(江德生), Xiang Li(李翔), Zong-Shun Liu(刘宗顺), Jian-Jun Zhu(朱建军), Ping Chen(陈平), Jing Yang(杨静), Wei Liu(刘炜), Feng Liang(梁锋), Shuang-Tao Liu(刘双韬), Li-Qun Zhang(张立群), Wen-Jie Wang(王文杰), Mo Li(李沫), Yuan-Tao Zhang(张源涛), Guo-Tong Du(杜国同)
Chin. Phys. B, 2018, 27 (2): 028101 doi: 10.1088/1674-1056/27/2/028101
In order to suppress the electron leakage into the p-type region of a near-ultraviolet GaN/InxGa1-xN/GaN multiple-quantum-well (MQW) laser diode (LD), the Al composition of the inserted p-type AlxGa1-xN electron blocking layer (EBL) is optimized, but this only partially enhances the performance of the LD. Here, due to the relatively shallow GaN/In0.04Ga0.96N/GaN quantum well, the hole leakage into the n-type region of the ultraviolet LD is also considered.
To reduce the hole leakage, a 10-nm n-type AlxGa1-xN hole blocking layer (HBL) is inserted between the n-type waveguide and the first quantum barrier, and the effect of the Al composition of the AlxGa1-xN HBL on LD performance is studied. Numerical simulations with LASTIP reveal that when an appropriate Al composition of the AlxGa1-xN HBL is chosen, both electron leakage and hole leakage can be reduced dramatically, leading to a lower threshold current and higher output power of the LD.

Robust stability characterizations of active metamaterials with non-Foster loads
Yi-Feng Fan(范逸风), Yong-Zhi Sun(孙永志)
Chin. Phys. B, 2018, 27 (2): 028102 doi: 10.1088/1674-1056/27/2/028102
Active metamaterials incorporating non-Foster elements have been considered one of the means of overcoming the inherent limitations of their passive counterparts, thus achieving broadband or gain metamaterials. However, realistic active metamaterials, especially non-Foster loaded media, face the possibility of instability. Moreover, they normally appear to be time-variant and in unsteady states, which necessitates a stability method that copes with the stability issue while accounting for system model uncertainty. In this paper, we propose an immittance-based stability method to design a non-Foster loaded metamaterial with guaranteed robust stability. First, the principle of this stability method is introduced after comparing different stability criteria. Based on the equivalent system model, the stability characterization is used to give the design specifications for achieving an active metamaterial with robust stability. Finally, it is applied to the practical design of an active metamaterial with non-Foster loaded loop arrays. By introducing a disturbance into the non-Foster circuit (NFC), the worst-case model uncertainty is considered during the design, and the reliability of our proposed method is verified. This method can also be applied to other realistic designs of active metamaterials.

Observation of oscillations in the transport for atomic layer MoS2
Xiao-Qiang Xie(解晓强), Ying-Zi Peng(彭英姿), Qi-Ye Zheng(郑奇烨), Yuan Li(李源), Ji Chen(陈吉)
Chin. Phys. B, 2018, 27 (2): 028103 doi: 10.1088/1674-1056/27/2/028103
In our experiment, an atomic-layer MoS2 structure grown on SiO2/Si substrates is used in transport tests. The voltage U14,23 oscillates, and the corresponding period varies with the applied current. The largest period appears at 45 μA. The oscillation periods differ depending on whether the samples are under laser radiation or in darkness. We find that under laser irradiation the oscillation period occurs at a lower current than in darkness. Meanwhile, the drift velocity is estimated at ~10^7 cm/s. In addition, by studying the envelope of U14,23 versus applied current, we see a beating phenomenon at a certain current value. The beating period in darkness is larger than under laser irradiation. The difference between the beating periods reveals the energy difference of the electrons. Similar results are obtained using different laser power densities and different light sources. The possible mechanism behind the oscillation period is discussed.

Influences of substrate temperature on microstructure and corrosion behavior of APS Ni50Ti25Al25 intermetallic coating
Sh Khandanjou, M Ghoranneviss, Sh Saviz, M Reza Afshar
Chin. Phys. B, 2018, 27 (2): 028104 doi: 10.1088/1674-1056/27/2/028104
In the present investigation, mechanically alloyed Ni50Ti25Al25 (at.%) powder is deposited on a carbon steel substrate. Before the coating process, the substrate is heated to temperatures ranging from room temperature to 400 °C. The microstructure, porosity, microhardness, adhesion strength, and corrosion behavior of the coating are investigated at the different substrate temperatures. Results show that the coating porosity is lower on the high-temperature surface. The microhardness and adhesion strength of the deposited layer are lower on the substrate without preheating than with preheating. The polarization test shows that the corrosion performance of the coating depends on microcracks and porosity, and that increasing the substrate temperature can improve the coating quality and corrosion performance.

A low-outgassing-rate carbon fiber array cathode
An-Kun Li(李安昆), Yu-Wei Fan(樊玉伟), Bao-Liang Qian(钱宝良), Zi-Cheng Zhang(张自成), Tao Xun(荀涛)
Chin. Phys. B, 2018, 27 (2): 028401 doi: 10.1088/1674-1056/27/2/028401
In this paper, a new carbon-fiber-based cathode, a low-outgassing-rate carbon fiber array cathode, is investigated experimentally, and the experimental results are compared with those of a polymer velvet cathode. The carbon fiber array cathode is constructed by inserting bunches of carbon fibers into the cylindrical surface of the cathode. In the experiment, the diode base pressure is maintained at 1×10^-2 Pa to 2×10^-2 Pa, and the diode is driven by a compact pulsed power system that provides a diode voltage of about 100 kV and a pulse duration of about 30 ns at a repetition rate of tens of Hz. Real-time pressure data are measured by a magnetron gauge. Under similar conditions, the experimental results show that the outgassing rate of the carbon fiber array cathode is an order of magnitude smaller than that of the velvet cathode and that the carbon fiber array cathode has better shot-to-shot stability. Hence, this carbon fiber array cathode is demonstrated to be a promising cathode for the radial diode, which can be used in magnetically insulated transmission line oscillators (MILO) and relativistic magnetrons (RM).

Enhanced radiation-induced narrow channel effects in 0.13-μm PDSOI nMOSFETs with shallow trench isolation
Meng-Ying Zhang(张梦映), Zhi-Yuan Hu(胡志远), Da-Wei Bi(毕大炜), Li-Hua Dai(戴丽华), Zheng-Xuan Zhang(张正选)
Chin. Phys. B, 2018, 27 (2): 028501 doi: 10.1088/1674-1056/27/2/028501
Total ionizing dose responses of transistors of different geometries in 0.13-μm partially depleted silicon-on-insulator (PDSOI) technology, irradiated by 60Co γ-rays, are investigated. The negative threshold voltage shift in an n-type metal-oxide-semiconductor field-effect transistor (nMOSFET) is inversely proportional to the channel width, due to radiation-induced charges trapped in the trench oxide; this is called the radiation-induced narrow channel effect (RINCE). An analysis based on a charge sharing model together with three-dimensional technology computer-aided design (TCAD) simulations demonstrates this phenomenon. The radiation-induced leakage currents under different drain biases are also discussed in detail.
Effects of proton irradiation at different incident angles on InAlAs/InGaAs InP-based HEMTs
Shu-Xiang Sun(孙树祥), Zhi-Chao Wei(魏志超), Peng-Hui Xia(夏鹏辉), Wen-Bin Wang(王文斌), Zhi-Yong Duan(段智勇), Yu-Xiao Li(李玉晓), Ying-Hui Zhong(钟英辉), Peng Ding(丁芃), Zhi Jin(金智)
Chin. Phys. B, 2018, 27 (2): 028502 doi: 10.1088/1674-1056/27/2/028502
InP-based high electron mobility transistors (HEMTs) are affected by protons arriving from different directions in space radiation applications. The proton irradiation effects on the InAlAs/InGaAs hetero-junction structures of InP-based HEMTs are studied at incident angles ranging from 0 to 89.9° with the SRIM software. As the proton incident angle increases, the trend of the induced vacancy defects in the InAlAs/InGaAs hetero-junction region is consistent with the trend of the vacancy energy loss of the incident protons: both first increase and then decrease once the incident angle exceeds 30°. In addition, the average range and the ultimate stopping positions of the incident protons shift gradually from the buffer layer to the hetero-junction region, and then up to the gate metal. Finally, the electrical characteristics of the InP-based HEMTs after proton irradiation at different incident angles are investigated with Sentaurus-TCAD. The induced vacancy defects are treated self-consistently by solving Poisson's equation and the current continuity equations. Consequently, the extrinsic transconductance, pinch-off voltage, and channel current show the most serious degradation at an incident angle of 30°, which can be accounted for by the most severe reduction of the carrier sheet density under this condition.

Influence of anisotropy on the electrical conductivity and diffusion coefficient of dry K-feldspar: Implications of the mechanism of conduction
Li-Dong Dai(代立东), Hai-Ying Hu(胡海英), He-Ping Li(李和平), Wen-Qing Sun(孙文清), Jian-Jun Jiang(蒋建军)
Chin. Phys. B, 2018, 27 (2): 028703 doi: 10.1088/1674-1056/27/2/028703
The electrical conductivities of single-crystal K-feldspar along three different crystallographic directions are investigated with a Solartron-1260 Impedance/Gain-phase analyzer at 873 K-1223 K and 1.0 GPa-3.0 GPa in the frequency range 10^-1 Hz-10^6 Hz. The measured electrical conductivity along the ⊥[001] direction decreases with increasing pressure, and the activation energy and activation volume of the charge carriers are determined to be 1.04 ± 0.06 eV and 2.51 ± 0.19 cm^3/mol, respectively. The electrical conductivity of K-feldspar is highly anisotropic, and its value along the ⊥[001] axis is approximately three times higher than that along the ⊥[100] axis. At 2.0 GPa, the diffusion coefficient of ionic potassium is obtained from the electrical conductivity data using the Nernst-Einstein equation. The measured electrical conductivity and the calculated diffusion coefficient of potassium suggest that the main conduction mechanism is ionic conduction, with the dominant charge carriers transferring between normal lattice potassium positions and adjacent interstitial sites along the thermally activated electric field.

Quantitative and sensitive detection of prohibited fish drugs by surface-enhanced Raman scattering
Shi-Chao Lin(林世超), Xin Zhang(张鑫), Wei-Chen Zhao(赵伟臣), Zhao-Yang Chen(陈朝阳), Pan Du(杜攀), Yong-Mei Zhao(赵永梅), Zheng-Long Wu(吴正龙), Hai-Jun Xu(许海军)
Chin. Phys. B, 2018, 27 (2): 028707 doi: 10.1088/1674-1056/27/2/028707
Rapid and simple detection of two kinds of prohibited fish drugs, crystal violet (CV) and malachite green (MG), was accomplished by surface-enhanced Raman scattering (SERS). Based on the optimized Au/cicada-wing substrate, the detectable concentration of CV/MG can reach 10^-7 M, and the linear logarithmic quantitative relationship between logI and logC allows the determination of unknown concentrations of CV/MG solutions. The detection of these two analytes in a real environment was also achieved, demonstrating the application potential of SERS for the fast screening of prohibited fish drugs, which is of great benefit for food safety and environmental monitoring.

Generation of optimal persistent formations for heterogeneous multi-agent systems with a leader constraint
Guo-Qiang Wang(王国强), He Luo(罗贺), Xiao-Xuan Hu(胡笑旋)
Chin. Phys. B, 2018, 27 (2): 028901 doi: 10.1088/1674-1056/27/2/028901
In this study, we consider the generation of optimal persistent formations for heterogeneous multi-agent systems under the leader constraint that only specific agents can act as leaders. We analyze three modes of controlling optimal persistent formations in two-dimensional space, thereby establishing a model for their constrained generation. We then propose an algorithm for generating the optimal persistent formation for heterogeneous multi-agent systems with a leader constraint (LC-HMAS-OPFGA), which is the exact solution algorithm of the model, and we theoretically prove its validity. This algorithm includes two kernel sub-algorithms: an optimal persistent graph generating algorithm based on a minimum cost arborescence and the shortest path (MCA-SP-OPGGA), and an optimal persistent graph adjusting algorithm based on the shortest path (SP-OPGAA). Under a given agent formation shape and leader constraint, LC-HMAS-OPFGA first generates the network topology and its optimal rigid graph corresponding to this formation shape. Then, LC-HMAS-OPFGA uses MCA-SP-OPGGA to direct the optimal rigid graph and generate the optimal persistent graph. Finally, LC-HMAS-OPFGA uses SP-OPGAA to adjust the optimal persistent graph until it satisfies the leader constraint. We also demonstrate the algorithm LC-HMAS-OPFGA with an example and verify its effectiveness.

Another look at the moist baroclinic Ertel-Rossby invariant with mass forcing
Shuai Yang(杨帅), Shou-Ting Gao(高守亭), Bin Chen(陈斌)
Chin. Phys. B, 2018, 27 (2): 029201 doi: 10.1088/1674-1056/27/2/029201
Due to the importance of the mass forcing induced by precipitation and condensation in moist processes, the Lagrangian continuity equation without a source/sink term, previously used to establish the Ertel-Rossby invariant (ERI) and its conservation property, is re-derived with mass forcing taken into account. By introducing moist enthalpy and moisture entropy, the baroclinic ERI can be adapted to moist flow. On another look at the moist ERI, it can be expressed as the dot product between the generalized velocity and the generalized vorticity in moist flow, which constitutes a kind of generalized helicity. Thus, the baroclinic ERI is further extended to the moist case. Moreover, the derived moist ERI formula remains formally consistent with the dry version, whether or not mass forcing is present.
By using the Weber transformation and the Lagrangian continuity equation with a source/sink effect, the conservation property of the baroclinic ERI in moist flow is revisited. The presence or absence of mass forcing in the Lagrangian continuity equation determines whether or not the baroclinic ERI in moist flow is materially conserved. In other words, it qualifies as a quasi-invariant, but only depending on the circumstances. The moist baroclinic ERI is a neat formalism with a simple physical explanation, and the usefulness of its anomaly in diagnosing atmospheric flow is demonstrated by a case study.
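The K-feldspar study above obtains the potassium diffusion coefficient from conductivity data via the Nernst-Einstein equation. As a minimal illustrative sketch of that conversion (the numerical inputs below are placeholders chosen for readability, not values from the paper):

```python
# Nernst-Einstein relation: D = sigma * k_B * T / (n * q^2),
# linking an ionic conductivity sigma to the diffusion coefficient D
# of a carrier with charge q and number density n (all SI units).
k_B = 1.381e-23  # Boltzmann constant, J/K

def nernst_einstein_D(sigma, T, n, q=1.602e-19):
    """Diffusion coefficient from ionic conductivity."""
    return sigma * k_B * T / (n * q**2)

# Placeholder inputs, NOT the paper's measured values:
# sigma = 1e-4 S/m at T = 1000 K with carrier density n = 1e28 m^-3.
D = nernst_einstein_D(sigma=1e-4, T=1000.0, n=1e28)
print(f"D ~ {D:.2e} m^2/s")  # ~ 5.4e-15 m^2/s for these inputs
```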
Periodic solutions of generalized Schrödinger equations on Cayley Trees
In this paper we define a discrete generalized Laplacian with arbitrary real power on a Cayley tree. This Laplacian is used to define a discrete generalized Schrödinger operator on the tree. The case of discrete fractional Schrödinger operators with index $0 < \alpha < 2$ is considered in detail, and periodic solutions of the corresponding fractional Schrödinger equations are described. This periodicity depends on a subgroup of a group representation of the Cayley tree. For any subgroup of finite index we give a criterion on the eigenvalues of the Schrödinger operator under which periodic solutions exist. For a normal subgroup of infinite index we describe a wide class of periodic solutions.
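The abstract does not spell the operator out explicitly, so the following is only a toy sketch, under my own assumptions, of one common way to realize a discrete fractional Laplacian on a finite truncation of a Cayley tree: form the graph Laplacian L = D - A and take its spectral power L^(α/2) with 0 < α < 2. The paper's actual construction may well differ.

```python
import numpy as np

def cayley_tree_adjacency(k, depth):
    """Adjacency matrix of a Cayley tree truncated at the given depth:
    the root has k neighbors, every other interior vertex has degree k."""
    edges, frontier, next_id = [], [0], 1
    for d in range(depth):
        new_frontier = []
        for v in frontier:
            for _ in range(k if d == 0 else k - 1):
                edges.append((v, next_id))
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    A = np.zeros((next_id, next_id))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

def fractional_laplacian(A, alpha):
    """Spectral power L^(alpha/2) of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    w, U = np.linalg.eigh(L)      # L is symmetric positive semidefinite
    w = np.clip(w, 0.0, None)     # guard against round-off
    return U @ np.diag(w ** (alpha / 2)) @ U.T

A = cayley_tree_adjacency(k=3, depth=4)
H = fractional_laplacian(A, alpha=1.0)  # kinetic part of a "fractional Schrödinger" operator
print(H.shape, np.allclose(H, H.T))
```

Adding a diagonal potential term to H would then give a finite-dimensional toy version of the Schrödinger operator whose periodic eigenvectors the paper studies.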
Have there been any experiments, or are there any references, demonstrating gravity between atoms? If so, what are the key experiments/papers? And if not, what is the smallest thing that has actually been experimentally shown to be affected by gravity?

I don't know of specific papers demonstrating gravity between larger objects, but I can vaguely remember learning about them in my classical physics class as an undergraduate. However, I have never heard of experiments demonstrating gravity at atomic or subatomic levels. I don't have a physics background, so it's not obvious to me; I'm just looking for the actual research/evidence behind it, so I can start to try to imagine how gravity works at a quantum level.

• While it is possible to demonstrate the gravitational attraction between an individual atom and the Earth, there is no way to demonstrate the gravitational attraction between individual atoms. Gravity is such a weak force. – David Hammen Aug 12 '14 at 10:47
• @DavidHammen your comment would be more helpful if you added "compared with EM and nuclear forces" at the end. – Carl Witthoft Aug 12 '14 at 11:42
• It's not the comparison to the other interactions that matters. What matters is a comparison with what instrumentation can measure. Asking to measure the gravitation between a pair of atoms is asking the impossible of not just today's instrumentation but of instrumentation of decades to come. – David Hammen Aug 12 '14 at 14:27
• The simple answer is "no, not even close". – Fattie Aug 12 '14 at 15:30
• "Gravity is such a weak force." @DavidHammen, another way to look at it comes from Frank Wilczek: We see that the question is not, "Why is gravity so feeble?" but rather, "Why is the proton's mass so small?" For in natural (Planck) units, the strength of gravity simply is what it is, a primary quantity, while the proton's mass is the tiny number [1/(13 quintillion)]. It's because single atoms have such little mass (relative to the Planck mass) that we can expect unmeasurable gravitational attraction between them. – robert bristow-johnson Sep 16 '14 at 17:51

Groups in Seattle, Colorado, and perhaps elsewhere have managed to measure and verify Newton's inverse-square law at submillimeter distances down to about 0.1 millimeters; see e.g.

Sub-millimeter tests of the gravitational inverse-square law: A search for "large" extra dimensions

Motivated by higher-dimensional theories that predict new effects, we tested the gravitational $\frac{1}{r^{2}}$ law at separations ranging down to 218 micrometers using a 10-fold symmetric torsion pendulum and a rotating 10-fold symmetric attractor. We improved previous short-range constraints by up to a factor of 1000 and find no deviations from Newtonian physics.

This is a 14-year-old paper (with 600+ citations), and I think these experiments were very hot at that time because the warped- and large-extra-dimension models in particle physics that may predict violations of Newton's law had been proposed in the preceding two years. But I believe there has been some extra progress in the field since then. At that time, the very fine measurements down to 200 microns or so allowed them to deduce something about the law of gravity down to 10 microns. These are extremely clever, fine mechanical experiments with torsion pendulums, rotating attractors, and resonances. The force they are able to see is really tiny.
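To get a feel for how tiny the forces in such torsion-balance experiments are, here is a back-of-the-envelope Newtonian estimate; the masses and separation below are round illustrative numbers, not the parameters of the actual apparatus:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Attraction between two 1-gram masses whose centers are 1 mm apart.
m1 = m2 = 1e-3  # kg
r = 1e-3        # m
F = G * m1 * m2 / r**2
print(f"F ~ {F:.1e} N")  # ~ 6.7e-11 N

# For comparison, the weight of a 1-microgram speck of dust is ~ 1e-8 N,
# i.e. more than a hundred times larger than this attraction.
```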
To see the gravitational force of a single atom is obviously too much to ask (so far?) – the objects whose gravity is seen in the existing experiments contain billions or trillions of atoms. Note that the (attractive) gravitational force between two electrons is about $10^{45}$ times weaker than the (repulsive) electrostatic one!

Most of the research in quantum gravity has nothing whatever to do with proposals to modify Newton's laws at these distance scales. Indeed, gravity is the weakest force, and it is so weak that for all routinely observable phenomena involving atoms it can be safely neglected. The research in quantum gravity deals with much more extreme phenomena – like the evaporation of tiny black holes – that can't be seen in the lab.

Plots and links to new papers available over here (thanks, alemi)

• electric force is only 10^42 times greater than gravity – user104372 Sep 16 '16 at 10:35
• It is $4.1\times 10^{42}$, batesville.k12.in.us/physics/phynet/e%26m/electrostatics/… - Because 4.1 is higher than sqrt(10), it is "closer" to 10^45 than to 10^40 on the log scale, and I just wrote a number where the exponent is rounded to a nearby multiple of five. The qualitative discussion isn't changed and these detailed numbers are meaningless - the ratio is substantially lower for two protons, below 10^40, etc. – Luboš Motl Sep 17 '16 at 7:53
• Lubos, if you see this: I have been thinking that at the level of atoms we are in quantum mechanics, and the gravitational potential might generate an extra type of fine-structure effect, i.e., change the energy levels. Is this completely off? – anna v Jan 31 '18 at 18:43
• Dear @annav - all the degeneracy is already lifted by non-gravitational effects, even for hydrogen. The only remaining degeneracy is into the 2j+1-degenerate multiplets of j_z for a given j - and that can't be lifted as long as you have the rotational symmetry. So even if we ignore that the gravitational effects are tiny given the weakness of gravity for atoms, they don't qualitatively change anything. Just days ago, we discussed the new preprints on whether gravity-of-Earth effects affect the muon's g-2 magnetic moment - now I think that the papers are probably wrong. – Luboš Motl Feb 3 '18 at 14:39
• You're welcome. @annav - I think that some comment on superposition of fields here is useful. An electron with spin along the z+ axis has some electromagnetic and gravitational fields around it, OK? So it looks like we're combining many complicated fields. But the superpositions are done in the Hilbert space, not in the space of classical fields. Those are very different things. So spin up/down of the electron is still a 2D Hilbert space - and the elmg/grav fields around the electron are entangled with the state of the electron itself, so the Hilbert space remains 2D even with the fields... – Luboš Motl Feb 5 '18 at 5:46

Measure the gravitational attraction between two atoms? Heavens no. That's such a tiny, tiny attraction. The atoms will be attracted to each other gravitationally, but only minutely. They'll be attracted gravitationally much more strongly to the Earth, to the lab setup and measuring equipment, to the buildings around the measuring equipment, and even to the snow on the roof of the buildings.

What can be measured is the gravitational attraction between atoms and the Earth. Some atomic clocks depend on the fact that atoms are subject to gravity.
Atomic fountain clocks such as NIST-F1 use lasers to juggle a stream of cesium atoms. Lasers cool incoming atoms to near absolute zero and then toss the atoms up into a microwave resonant cavity. The atoms shortly fall back down. Lasers are used on these falling atoms to determine whether they have switched state. An atomic fountain wouldn't work if atoms weren't subject to gravity.

An important question that keeps popping up (pun intended) is "Does antimatter fall up?" Asking this question goes very much against the grain of the equivalence principle, so it's a bit of a fringe question. The ALPHA Collaboration nonetheless worked toward answering it by attempting to measure the gravitational mass of trapped antihydrogen. The results so far haven't been all that definitive; they found the gravitational mass of antihydrogen to be somewhere between -65 times and +110 times the gravitational mass of hydrogen. A lot more work needs to be done to confirm or invalidate the equivalence principle using antihydrogen. You can read the full article on this experiment in Charman, A. E., & ALPHA Collaboration (2013). Description and first application of a new technique to measure the gravitational mass of antihydrogen. Nature Communications 4:1785.

• Can there be a cooler way to measure time than to juggle atoms with lasers? =) – Jens Aug 12 '14 at 7:36
• The result for antihydrogen is pretty conclusive: people have tried to check, but you really can't tell. – OrangeDog Aug 12 '14 at 9:33
• @PlasmaHH - There is no way to check. Consider two xenon atoms (which are big) separated by a nanometer (which is extremely small). The gravitational acceleration of the two atoms toward each other is about $2.9\times10^{-8}\ \text{nm/s}^2$. Acceleration toward the Earth will completely overwhelm that! So let's take our experiment up to the ISS. Even then, Earth's gravity will overwhelm that tiny acceleration. That nanometer separation will result in a $1.3\times 10^{-6}\ \text{nm/s}^2$ tidal acceleration (minimum). There is no way to measure that tiny acceleration. – David Hammen Aug 12 '14 at 10:21
• +1 for mentioning "does antimatter fall up", which is IMHO the interesting question still unresolved because of the difficulty of making these measurements. – zwol Aug 12 '14 at 13:56
• It's worth mentioning there are at least three possible outcomes to the falling-antimatter experiment: (a) antimatter has exactly the same mass as normal matter, and falls down; (b) antimatter has negative mass, and falls up; (c) antimatter has slightly different mass than normal matter. For instance, the electron and the quarks get their bare masses from the Higgs field, but most of the proton's rest mass is the kinetic energy of its internal ocean of virtual quarks and gluons. Maybe only the Higgs-induced mass changes for antimatter, so that antihydrogen's mass is slightly different. – rob Aug 13 '14 at 2:00

(Skip to the bottom for a list of classical and quantum-mechanical effects of gravitation that have been observed in subatomic particles; my attempt to explain quantitatively what it would take to measure atom-atom gravity got longer than I'd intended, and I haven't had time to shorten it yet.)

Let's suppose you want to measure the gravitational attraction between two charged particles, with masses $m_1,m_2$ and charges $q_1e,q_2e$.
The classical potential energy between the two particles is $$ U = -\frac{Gm_1m_2 + \alpha\hbar c\, q_1 q_2 }{r} $$ with gravitational constant $G$ and inter-particle distance $r$; the dimensionless fine-structure constant $\alpha\approx 1/137$ is defined by the relation $\alpha\hbar c = e^2/4\pi\epsilon_0$. The remarkable thing about this system is the weakness of the gravitational force: the Particle Data Group tabulates $G/\hbar c \approx 6.7\times10^{-39}\ (\mathrm{GeV}/c^2)^{-2}$, so for electric and gravitational interactions to take place at the same scale between similar particles, they'd need to have a mass-to-charge ratio of $m/q \approx \sqrt{\alpha \hbar c/G} \approx 10^{18}\,\mathrm{GeV}/c^2$. A proton has a mass-to-charge ratio of $0.94\,\mathrm{GeV}/c^2$, and a heavy nucleus might have $m/q \approx 200\text{–}240\,\mathrm{GeV}/c^2$ — a completely different ballgame.

In the land of electroweak interactions, we also have an interesting but feeble force which we might like to study against the overwhelming background of the electromagnetic and strong interactions. There we have the advantage that electroweak interactions strongly violate a symmetry, parity, which the electromagnetic and strong interactions do not. There's a whole class of experiments which put a polarized beam on a target and rapidly flip the spin of the particles in the beam, looking for a parity-violating asymmetry in the interaction of the beam with the target. The state of the art for asymmetry experiments is part-per-billion sensitivity. It's as if I gave you an "unfair" coin which, if you flipped it a billion times, would give you one more tails than heads.

There's a fundamental limit to these sorts of experiments, known as counting statistics: if you are expecting $N$ identical-but-uncorrelated things to happen in a particular time interval, you typically get $N\pm\sqrt N$. In order to measure an asymmetry of $10^{-9}$, you're screwed by counting statistics unless you have at least $10^{18}$ events to compare; if you want "three-sigma significance" then you need another factor of $3^2\approx10$ more. Remember that a mole — a gram of neutrons, or two grams of molecular hydrogen, or 27 grams of aluminum, and so on up the periodic table — only contains $10^{24}$ atoms. Confidently counting $10^{19}$ atomic interactions is no mean feat. It's doable, but typically takes about a decade of design work, a couple of years of data collection, and a couple of years of analysis.

This approach doesn't scale to gravitational interactions between charged atoms, for two reasons. The first is that the counting statistics is basically impossible, almost literally the square of the state of the art. If you wanted to look for a gravitational asymmetry in the scattering of lead ions, with $m/q=210$, you'd expect an asymmetry around $2\times10^{-16}$, and so you'd need somewhere around $10^{32}$ interactions — imagine thousands of tons of lead, examined one atom at a time. The second is that for an electro-gravitational asymmetry, the sign change is on the wrong term: rather than looking for a minute difference between two very similar events, you'd have to look for the same minute correction to like-charge and opposite-charge interactions. It's unlikely you could measure the two interactions with enough precision for them to be compared to each other. For instance, the rest masses of a Pb+ and a Pb− ion differ by five parts per million, just because the one has two fewer electrons than the other.
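A quick numerical check of the scales quoted above (a minimal sketch with rounded constants; it reproduces the $4.1\times10^{42}$ electron-electron ratio mentioned in the comments and the $\sim10^{18}\,\mathrm{GeV}/c^2$ balance point):

```python
# Coulomb-to-gravity force ratio for two identical particles,
# |F_C / F_G| = k_e * e^2 / (G * m^2); the 1/r^2 factors cancel.
G   = 6.674e-11   # m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2 (k_e * e^2 = alpha * hbar * c)
e   = 1.602e-19   # C
m_e = 9.109e-31   # kg
m_p = 1.673e-27   # kg

for name, m in [("electron", m_e), ("proton", m_p)]:
    print(f"{name}-{name}: Coulomb/gravity ~ {k_e * e**2 / (G * m**2):.1e}")
# electron-electron ~ 4.2e42, proton-proton ~ 1.2e36

# Mass (for charge e) at which the two interactions would balance:
m_bal_kg = (k_e * e**2 / G) ** 0.5
print(f"balance scale: {m_bal_kg / 1.783e-27:.1e} GeV/c^2")  # ~ 1e18
```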
Looking for gravitational interactions between neutral atoms would be easier, but not millions of times easier. Neutral atoms may still have magnetic moments, and can electrically polarize each other at close approach; these effects are well described, but not at the part-per-million level. Plus, neutral atoms are harder to push around than ions. Any real atom-atom gravitational experiment would have to work through many orders of magnitude of currently unexplored effects of residual electromagnetism before gravity became measurable.

What you can do is measure the gravitational attraction between one subatomic particle and the rest of the Earth, the same way that my bathroom scale measures the attraction between my belly and the rest of the Earth. There are only a handful of successful experiments in what I think of as "semi-quantum gravity," showing quantum-mechanical effects in a Newtonian gravitational potential:

• David Hammen's answer mentions the cesium fountain clock, in which a cloud of atoms is permitted to rise and fall under the influence of gravity, but that's essentially a classical effect. The cesium atoms rise and fall just like a juggler's balls.

• Similarly, I consider the Pound-Rebka experiment a classical effect. While the detection process in that experiment was scattering gamma photons from iron nuclei, the gravitational effect is a frequency shift which is also described by classical electromagnetism combined with general relativity.

• The neutron interferometer experiment by Colella, Overhauser, and Werner (1975), and follow-on experiments, manifestly require both (Newtonian) gravity and quantum mechanics. A horizontal beam of cold neutrons is divided and recombined by a single-crystal interferometer. The interferometer is rotated so that one outgoing beam is still horizontal, but vertically displaced. It costs the neutrons $mg \approx 100\,\mathrm{neV/m}$ to climb up the interferometer, so the neutrons that take the top path have ever-so-slightly less momentum, and therefore a slightly longer wavelength $\lambda=h/p$, than the neutrons that take the bottom path; this results in a shift in the phase of the interference pattern that depends on the angle between the interferometer and the horizontal. While the gravitational effect has been observed, it doesn't quantitatively match the prediction of the Schrödinger equation with a linear potential. Speculation in the community is that the interferometers (which are hand-sized, and weigh several ounces) twist when tilted, changing the spacing between the paths and introducing an additional phase shift.

[Figure: COW interferometer]

• Nesvizhevsky and collaborators (2002) (see also here or here) presented evidence that neutrons in a gravitational well may occupy only discrete bound states. They sent a horizontal beam of ultra-cold neutrons (total velocity ~5 m/s, vertical velocity quite small) through a narrow gap between a neutron mirror and a neutron absorber. When the gap was large, the neutrons could bounce off the mirror without touching the absorber, and the transmission through the gap was large; when the gap was small, only neutrons with the smallest vertical velocities could shoot the gap without hitting the absorber. For gaps of a few tens of microns, the transmission shows evidence of becoming quantized: the transmission is zero up to a certain gap size, then steps up towards the continuum value as more bound states become available.

[Figure: Neutron transmission vs. gap size]

• Building on this work, Jenke et al.
(2011) have used a vibrating table to drive transitions between gravitational bound states. Greene points out that this is the first experiment ever to drive a quantum-mechanical transition without using an electromagnetic field, using only the strong and gravitational forces.

• Powerhouse answer, and nice coverage of parity-violating weak observations. – dmckee --- ex-moderator kitten Sep 16 '14 at 17:43
• Awww, shucks, I'm asymmetrically blushing – rob Sep 16 '14 at 23:37

It is currently possible to measure gravity between a single atom and an Avogadro number of atoms. The gravitational energy of the electron-proton interaction at the distance of the Bohr radius, expressed as $\hbar\omega$, corresponds to $\omega \approx 10^{-23}$ Hz, so about 1 Hz is obtained when the second mass is an Avogadro number of protons. This energy difference can be measured directly with so-called Trojan wave packets in double Trojan cat states, where two semiclassical states of the electron, each moving on a circular orbit but in opposite directions, collide twice per period and interfere to form a pattern. The interference pattern between two counter-rotating Trojan wave packets will shift as if someone were rotating above it at 1 Hz, and it is therefore readily measurable in a delta-pulse ionization experiment. A 1 Hz resolution also corresponds to the best frequency resolution of current atomic clocks, which are electronic counters counting the number of microwave or optical oscillations between precisely measured and defined atomic transitions.

RE: I am basically referring to the following paper, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.57.2239, which designs quantum states of the neutral hydrogen atom to measure weak effects, including gravity, on a single atom. The electron wave function is split into two components that more or less look like classical electrons. Those components have angular momenta equal in modulus but of opposite sign. Because the φ part of the angular momentum wave function is exp(i m φ), the collision of the packets twice per rotation period amplifies the interference pattern around the circle, which has 2m bumps. Here m = n, and the higher the Rydberg state, the better. Gravity is applied to the atom in such a way that the left component accumulates a different phase than the right one; for example, the other mass, such as a neutron, lies in the plane of rotation and oscillates with the same frequency along the perpendicular to the line of phase accumulation (polarization). The rate of phase accumulation in time is of the order of the frequency G·M_proton·m_e/(r_0·ħ), where r_0 is of the order of the radius of the electron orbit. Because the linear Stark effect on the circular state is 0 (http://www.cqed.org/spip.php?article83), and is therefore also 0 for the normal gravity from the Earth or from a kilogram ball, in this variation of the experiment one must use the Trojan wave packet and the excited Trojan wave packet interfering as the cat superposition if they are to rotate around the same nucleus and one wants the mass relatively at rest.
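As a sanity check on the $\omega \approx 10^{-23}$ Hz figure quoted at the start of this answer (a minimal sketch with rounded constants):

```python
# Gravitational energy of an electron-proton pair at the Bohr radius,
# expressed as an angular frequency E/hbar, then scaled by Avogadro's number.
G    = 6.674e-11   # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
m_e  = 9.109e-31   # kg
m_p  = 1.673e-27   # kg
a0   = 5.292e-11   # m, Bohr radius
N_A  = 6.022e23    # Avogadro's number

omega = G * m_e * m_p / (a0 * hbar)
print(f"omega       ~ {omega:.1e} rad/s")        # ~ 1.8e-23
print(f"omega * N_A ~ {omega * N_A:.1e} rad/s")  # ~ 1.1e1, i.e. of order 1-10 Hz
```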
Alternatively, and with more difficulty, one-electron cat states may be excited around two nuclei, with one half of the electron around one nucleus and the second half around the other, and the gravitational Stark effect from the mass above the plane of the rotations can be used while it is placed above one of the nuclei; or whatever implementation of the experiment described here, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.89.023607, can be imposed on the cat states of the single electron in the atom, with other atoms or a neutron in place of macroscopic masses and atom clouds. The interference pattern, like the one in the Michelson–Morley experiment, starts to drift (shifts in time), as if someone saw it slowly rotating above the atom, when the gravitational interaction is added. Now, since there are sinusoidal bumps in the wave-function probability, the ionization signal from a short delta-like pulse is a function of time when the atom is subjected to the gravity of another atom or an Avogadro number of them. Since the number of bumps around the circle is 2n, the higher the quantum number, the better. The best would be Rydberg atoms with n of one million or above, i.e., giant atoms; those would be some 50 meters in diameter. In the paper there are magnetic fields, but all of this applies to gravity.

• Could you clarify what you mean here? It's a little unclear to me. – HDE 226868 Sep 16 '14 at 0:14
Thursday, September 21, 2006

q-Laguerre polynomials and fractionized principal quantum number for hydrogen atom

Here and here a semiclassical model based on dark matter and the hierarchy of Planck constants is developed for the fractionized principal quantum number n claimed by Mills to have at least the values n=1/k, k=2,3,4,5,6,7,10. This model could explain the claimed fractionization of the principal quantum number n for the hydrogen atom in terms of single electron transitions for all cases except n=1/2: the basic reason is that Jones inclusions are characterized by quantum phases q=exp(iπ/n), n>2. Since a quantum deformation of standard quantum mechanics is involved, this motivates an attempt to understand the claimed fractionization in terms of a q-analog of the hydrogen atom.

The Laguerre polynomials appearing in the solution of the Schrödinger equation for the hydrogen atom possess a quantum variant, the so-called q-Laguerre polynomials, and one might hope that they would allow one to realize this semiclassical picture at the level of solutions of an appropriately modified Schrödinger equation, and perhaps also resolve the difficulty associated with n=1/2. Unfortunately, the polynomials correspond to 0<q<1 rather than complex values q=exp(iπ/m) on the unit circle, and the extrapolation of the formulas for the energy eigenvalues gives complex energies.

The most obvious q-modification of the Laguerre equation is to replace the ordinary derivative with an average of the q-derivatives for q and its conjugate. As a result one obtains a difference equation, and one can easily deduce the energy eigenvalues from the power series expansion of the q-Laguerre polynomials. The ground state energy remains unchanged, and the excited energies receive corrections which, however, vanish in the limit where m becomes very large. Fractionization in the desired sense is not obtained.

The q-Laguerre equation however allows non-polynomial solutions which are square integrable. By the periodicity of the coefficients of the difference equation with respect to the power n in the Taylor expansion, the solutions can be written as a polynomial of order 2m multiplied by a geometric series. For odd m the geometric series converges, and I have not been able to identify any quantization recipe for the energy. For even m the geometric series has a pole at a certain point, which can however be cancelled if the polynomial coefficient vanishes at the same point. This gives rise to the quantization of energy. It turns out that the fractional principal quantum numbers claimed by Mills correspond very nearly to the zeros of the polynomial, with one frustrating exception: n=1/2, which produces trouble also in the semiclassical argument. Despite this shortcoming the result forces one to take the claims of Mills rather seriously, and it might be a good idea for colleagues to take a less arrogant attitude towards experimental findings which do not directly relate to calculations of black hole entropy.

Note added: It turned out that odd m, for which the geometric series always converges, allows n=1/2 as a universal solution having a special symmetry, implying that the solution is a product of an m:th (rather than 2m:th) order polynomial and a geometric series in x^m (rather than x^(2m)). Thus n=1/2 is a universal solution. This is in the spirit of what is known about representations of quantum groups, and this symmetry also removes the doubling of almost-integer states. Besides this one obtains solutions for which n depends on m. This symmetry applies also in the case of even values of m, which were studied first numerically.
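The post does not write out the modified derivative explicitly, so the following toy sketch rests on my own assumptions: take the standard q-derivative, which acts on monomials as D_q x^n = [n]_q x^(n-1) with [n]_q = (q^n - 1)/(q - 1), and average it over q = exp(iπ/m) and its complex conjugate. The resulting real coefficients are periodic in n with period 2m, which is the kind of periodicity invoked above to factor the solutions into a degree-2m polynomial times a geometric series.

```python
import numpy as np

def q_bracket(n, q):
    """q-number [n]_q = (q^n - 1) / (q - 1)."""
    return (q**n - 1) / (q - 1)

def averaged_coeff(n, m):
    """Action on x^n of the q-derivative averaged over q = exp(i*pi/m)
    and its conjugate; real by construction."""
    q = np.exp(1j * np.pi / m)
    return 0.5 * (q_bracket(n, q) + q_bracket(n, np.conj(q))).real

m = 3
coeffs = [averaged_coeff(n, m) for n in range(4 * m + 1)]
print(np.round(coeffs, 6))
# Since q^(2m) = exp(2*pi*i) = 1, the sequence repeats with period 2m.
```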
Note added: The exact spectrum for the principal quantum number n can be found for both even and odd values of m. The expression for n is simply

n+ = 1/2 + Rn/2, n- = 1/2 - Rn/2, with Rn = 2cos(π(n-1)/m) - 2cos(πn/m).

This expression holds for all roots for even values of m and, for odd values of m, for all roots but one, corresponding to n=(m+1)/2. The remaining zero is of course n=1/2 in this case.

The chapter Dark Nuclear Physics and Condensed Matter of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy" and the chapter The Notion of Free Energy and Many-Sheeted Space-Time Concept of "TGD and Fringe Physics" contain the detailed calculations. See also the article Could q-Laguerre equation explain the claimed fractionation of the principal quantum number for hydrogen atom?

Sunday, September 17, 2006

The existence of wormhole contacts has been one of the most exotic predictions of TGD. The realization that wormhole contacts can be regarded as parton-antiparton pairs, with the parton and antiparton assignable to the light-like causal horizons accompanying the wormhole contacts, and that the Higgs particle corresponds to a wormhole contact, opens the door to more concrete models also of superconductivity involving the massivation of photons. The formation of a coherent state of wormhole contacts would be the counterpart of the vacuum expectation value of the Higgs.

The notions of coherent states of Cooper pairs and of charged Higgs challenge the conservation of electromagnetic charge. The following argument however suggests that coherent states of wormhole contacts form only a part of the description of ordinary superconductivity. The basic observation is that wormhole contacts with vanishing fermion number define space-time correlates for a Higgs-type particle, with fermion and antifermion numbers at the light-like throats of the contact. The ideas that a genuine Higgs-type photon massivation is involved in superconductivity and that coherent states of Cooper pairs really make sense are somewhat questionable, since the conservation of charge and fermion number is lost. A further questionable feature is that a quantum superposition of many-particle states with widely different masses would be in question.

The interpretational problems could be resolved elegantly in zero energy ontology, in which the total conserved quantum numbers of the quantum state vanish. In this picture the energy, fermion number, and total charge of any positive energy state are compensated by the opposite quantum numbers of the negative energy state in the geometric future. This makes it possible to speak about superpositions of Cooper pairs and charged Higgs bosons separately in the positive energy sector. Rather remarkably, if this picture is taken seriously, superconductivity can be seen as providing direct support both for the hierarchy of scaled variants of standard model physics and for the zero energy ontology.

The chapter Super-Conductivity in Many-Sheeted Space-Time of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy" contains the updated version of the model.

Updated model for high temperature superconductivity

A model of high Tc superconductivity was one of the first applications of the still-developing ideas about the hierarchy of Planck constants and the corresponding hierarchy of dark matters. It is not difficult to guess that this model looked like a rather fuzzy complex of ideas when examined one year later. To not totally lose my self-respect I had to update the model.
The model for high Tc superconductivity relies on the notions of quantum criticality, dynamical Planck constant, and many-sheeted space-time. These ideas lead to a concrete model for high Tc superconductors as quantum critical superconductors, allowing one to understand the characteristic spectral lines as characteristics of interior and boundary Cooper pairs, bound together by the phonon and color interaction respectively. The model for quantum critical electronic Cooper pairs generalizes to Cooper pairs of fermionic ions, and for sufficiently large hbar the stability criteria, in particular the thermal stability conditions, can be satisfied in a given length scale.

At the qualitative level the model explains various strange features of high Tc superconductors. One can understand the high value of Tc and the ambivalent character of high Tc superconductors, suggesting both BCS-type Cooper pairs and exotic Cooper pairs with non-vanishing spin, the existence of the pseudogap and the scaling laws for observables above Tc, the role of stripes and doping and the existence of a critical doping, etc.

An unexpected prediction is that the coherence length is actually hbar/hbar0 = 2^11 times longer than the coherence length predicted by conventional theory, so that a type I superconductor would be in question, with stripes serving as duals for the defects of a type I superconductor in a nearly critical magnetic field, replaced now by the ferromagnetic phase. At the quantitative level the model correctly predicts the four poorly understood photon absorption lines and the critical doping ratio from basic principles. The current-carrying structures have a local structure similar to that of an axon, including the double-layered structure of the cell membrane, and the size scales are also predicted to be the same, so that the idea that axons are high Tc superconductors is highly suggestive.

Saturday, September 09, 2006

What really distinguishes between future and past?

Our knowledge about the geometric future is very uncertain compared to that about the geometric past. Hence we usually use words like plan/hunch/hope/... in the case of the geometric future and speak about memories in the case of the geometric past. We also regard the geometric past as something absolutely stable. Why can we not remember the geometric future as reliably as the geometric past? Is it that the geometric future is highly unstable as compared to the geometric past? Why should this be the case? This provides a possible TGD-based articulation of the basic puzzles relating to time experience. The latest progress in the understanding of quantum TGD allows a more detailed consideration of these questions.

1. Is p-adic-to-real phase transition enough?

The basic idea is that the flow of subjective time corresponds to a phase transition front representing a transformation of intentions to actions and propagating towards the geometric future quantum jump by quantum jump. All quantum states have vanishing total quantum numbers in the zero energy ontology which now forms the basis of quantum TGD, and this ontology allows one to imagine models for what could happen in this process. The starting point is the interpretation of fermions as correlates for cognition and bosons as correlates for intentions/actions (see this). Fermions correspond to pairs of real and p-adic space-time sheets with opposite quantum numbers, with the p-adic space-time sheet providing a cognitive representation of the real space-time sheet. Bosonic space-time sheets would be either p-adic or real and thus represent intentions or actions.
The fermionic world and its cognitive representations would be common to the geometric future and the geometric past, and the asymmetry would relate only to the intention-action dichotomy. The geometric future contains a lot of p-adic space-time sheets representing intentions, which transform to real space-time sheets allowing interpretation as desires eventually inducing neuronal activities. The time mirror mechanism for intentional action assumes that the phase transition gives rise to negative energy space-time sheets representing propagation of signals to the geometric past, where they induce neuronal activities. From Libet's experiments relating to neuronal correlates of volition the time scale involved is a fraction of a second, but an infinite hierarchy of time scales is implied by fractality.

Conservation of quantum numbers poses strong conditions on the p-adic-to-real phase transition. Noether charges are in the real context given by integrals over partonic 2-surfaces. The problem is that these integrals do not make sense p-adically. There are two options.

a) Give up the notion of p-adic Noether charge, so that it would not make sense to speak about four-momentum and other conserved quantum numbers in the case of a p-adic space-time sheet. This implies zero energy ontology in the real sector. All real space-time sheets would have vanishing conserved quantum numbers, and the p-adic-to-real transition generates a real space-time sheet complex with vanishing total energy. A negative energy signal must be somehow compensated by a positive energy state.

b) It might however be possible to assign charges to p-adic space-time sheets. The equations characterizing the p-adic space-time sheet representing an intention and the corresponding real space-time sheet representing the action are assumed to be given in terms of the same rational functions, with coefficients which are algebraic numbers consistent with the extension of p-adic numbers used, so that the points common to the real and p-adic space-time sheets are in this extension. If the real charges belong to the algebraic extension used, one could identify the p-adic charges as real charges. Zero energy ontology requires the presence of positive energy real space-time sheets whose charges compensate those of the negative energy space-time sheets. One possibility is that real and corresponding p-adic space-time sheets appear in pairs with vanishing total quantum numbers, just as fermionic space-time sheets are assumed to occur (see this). In the case of fermions the p-adic-to-real phase transition is impossible by the Exclusion Principle, so that a stable cognitive representation results.

The minimal option would be that p-adic space-time sheets possess negative energy and are transformed to negative energy signals inducing neuronal activities. The flow of subjective time would involve a transformation of the universe to a zero energy universe in the sense that total conserved quantum numbers vanish in the real sense in the bosonic sector, while in the fermionic sector real and p-adic charges compensate each other. This picture is probably too simple. Robertson-Walker cosmology has vanishing density of inertial energy. Hence it would seem that real bosons and fermions should appear in both positive and negative energy states, and the arrow of time defined by the direction of propagation of the intention-to-action wave front would be local. The transition of the geometric past back to the intentional phase would involve a transformation of real bosons to p-adic ones and is in principle possible for this option.
For the first option the transition could occur only for real states with vanishing total quantum numbers, which would make this transition highly improbable and thus imply irreversibility. The basic criticism is that since intentions in the proposed sense do not involve any selection, one could argue that this picture is not enough to explain the instability of the geometric future, unless the instability is due to the instability of p-adic space-time sheets in quantum jumps.

2. Does intentional action transform a quantum critical phase to a non-quantum critical phase?

It is far from clear whether the proposed model can explain the uncertainty of the geometric future and the relative stability of the geometric past, related very intimately to the possibility to select between different options. The TGD based view about dark matter as a hierarchy of phases characterized by M4 and CP2 Planck constants quantized in integer multiples of the minimum value hbar0 of hbar (see this) suggests a more refined view about what happens in the quantum jump transforming intention to action.

1. The geometric future of the living system corresponds to a quantum critical state which is a superposition of (at least) two phases. Quantum criticality means that the future is very uncertain and the universe can be in dramatically different macroscopic quantum states.

2. The experienced flow of time corresponds to a phase transition front proceeding towards the geometric future quantum jump by quantum jump. In this transition intentional action, represented by negative energy bosonic signals, transforms the quantum critical phase to either of the two phases present. This selection between different phases would be the basic element of actions involving choice. The geometric past is stabilized, so that geometric memories about the geometric past are relatively stable. This picture always applies in some time scale, and there is an entire hierarchy of time and spatial scales corresponding to the hierarchies of p-adic length scales and of Planck constants. Note that Compton length and time are proportional to hbar, as are also the span of long term memories and the time scale of planned actions.

The (at least) two phases present at quantum criticality would have different values of the Planck constants. In the simplest case the values of the M4 and CP2 Planck constants for the second phase would correspond to the minimal value hbar0. For instance, a cell could be in a quantum superposition of ordinary and high Tc super-conducting phases, with the high Tc superconductor characterized by a large M4 Planck constant. Intentional action would induce a transition to either of these two phases. The sub-system would choose either the lower or the higher level in the hierarchy of consciousness, with the level characterized by the values of the Planck constants. This unavoidably brings to mind a moral choice. Intentional actions often involve a choice between good and bad, and this choice could reduce to a choice between values of Planck constant. A good deed would lead to a higher value of Planck constant and a bad deed to a lower one. This interpretation conforms with the earlier view about quantum ethics, stating that good deeds are those which support evolution. The earlier proposal was however based on the assumption that evolution means a gradual increase of a typical p-adic length scale, and seems to be too restricted in the recent framework.
For instance, in the cell length scale the cells of the geometric future could be in a quantum critical phase such that the large hbar phase corresponds to high Tc super-conductivity and the low hbar phase to its absence. In a quantum jump the cell would transform to either of these phases. The natural interpretation for the transition to the low hbar phase is as cell death, since the communications from the cell to, and quantum control by, the magnetic body are lost. Ageing could be seen as a process in which the transitions to the small hbar phase begin to dominate, or the quantum criticality is even lost. A model for the quantum criticality based on zeros of Riemann zeta, developed here, here, here, here, and here, allows a more quantitative view about what could happen in the phase transition.

For more details see the chapter Time, Space-Time and Consciousness of "Biosystems as Conscious Holograms" or the chapter Quantum Model for Memory of "TGD Inspired Theory of Consciousness".

Thursday, September 07, 2006

Powerpoint representations about TGD

I have continually received requests to add to my homepage a brief overall view about TGD. This request is well-motivated since the enormous amount of material makes it difficult to get a bird's eye view of TGD. I have now constructed powerpoint representations about TGD and TGD inspired consciousness at my homepage. I have decomposed the representation of TGD into four parts.

1. Basic physical ideas of TGD and the new view about space-time, with basic ideas of quantum TGD deduced using quantum classical correspondence.

2. Quantum TGD as infinite-dimensional geometry in the world of classical worlds, with classical spinor fields of this space representing the quantum states of the Universe. Reduction of quantum TGD to the parton level, with parton orbits identified as light-like 3-surfaces of 4-D space-time sheets. Interior dynamics of the space-time sheet as classical dynamics correlated with parton dynamics by quantum classical correspondence. Chern-Simons action for the induced Kähler form and the corresponding modified Dirac action as the fundamental variational principle, giving the vacuum functional as a Dirac determinant, super-conformal symmetries, their breaking and relations to those of super string models, etc...

3. TGD and von Neumann algebras. The Clifford algebra of the world of classical worlds provides an example of a hyper-finite factor of type II1, and this has extremely far reaching implications for quantum TGD. The absence of infinities in fermionic degrees of freedom, quantization of Planck constant, and generalization of the notion of imbedding space are perhaps the most important implications.

4. Construction of the S-matrix based on the reduction of dynamics to the parton level. Zero energy ontology and the reduction of the S-matrix to unitary entanglement coefficients between positive and negative energy components of the quantum state (this makes sense only for hyperfinite factors of type II1) are perhaps the most important elements. The p-adicization program is discussed and makes it possible to understand how the TGD S-matrix can be understood as a generalization of a braiding S-matrix allowing branching of braids, with vertices described by an almost-topological QFT defined by Chern-Simons action. A comparison with the stringy S-matrix is also included.

There are two representations about TGD inspired theory of consciousness and quantum biology.

1. TGD Inspired Theory of Consciousness.
The notions of quantum jump and self, the new view about the relationship between experienced and geometric time, Negentropy Maximization Principle, general theory of qualia, fermionic Fock states and Boolean cognition, p-adic space-time sheets as correlates of cognition and intentionality, etc...

2. TGD based view about quantum biology.

Many-sheeted space-time and the new view about time, the possibility of negative energies and signals propagating backwards in geometric time making possible the time mirror mechanism of long term memory recall and remote metabolism, the many-sheeted mechanism of metabolism as dropping of particles to larger space-time sheets liberating their zero point kinetic energy as usable energy, p-adic physics as physics of cognition and intentionality, the hierarchy of Planck constants and living matter as ordinary matter quantum controlled by macroscopically quantum coherent dark matter at magnetic flux sheets of astrophysical size, high Tc superconductivity as large hbar superconductivity, the hierarchy of EEGs used by magnetic bodies to receive information from and quantum control biomatter, the new view about genetic code involving the notions of super- and hyper-genome, quantum leaps in evolution as phase transitions increasing the value of Planck constant, etc...

In case you are interested, you can find the link to the powerpoint representations at the main page. The direct address is
Atomic Orbitals and Nodes

From CleanEnergyWIKI

[Figure: Electron probability, peak density and electron density as a function of distance from the nucleus.]

Atomic Orbitals

Orbitals are important because they determine the distribution of electrons in molecules, which in turn determines the electronic and optical properties of materials. Atomic orbitals are wave functions that are solutions to the Schrödinger equation. This equation allows us to figure out the wave functions and associated energies in atomic orbitals. The square of the wave function gives the probability of finding an electron at a certain point. The integral of the wavefunction over a volume gives the enclosed electron density within that volume. The most likely position to find the 1s electron is at the nucleus. However, the most likely radius is at some distance from the nucleus. The graph of wavefunction vs distance falls off exponentially as you move away from the nucleus. The electron density builds quadratically with distance from the nucleus. The peak electron density will be the product of these two functions. This results in a curve with a density peak at a certain distance. Wavefunctions alone do not tell you the electron density. (A short numerical sketch of this product appears at the end of this page.)

[Figure: Approximate shape of atomic orbitals viewed as a surface.]

Orbital nodes

[Figure: Visualization of electron density as cross sections for orbitals and shells.]

The p orbitals have orientations along the x, y and z axes. A node is a place where there is zero probability of finding an electron. A radial node is a spherical surface with zero probability. P orbitals have an angular node along the axes. We usually indicate the sign of the wave function in drawings by shading the orbital as black and white, or blue and green.

1s: no node
2s: one radial node; 2p: one angular node
3s: two radial nodes; 3p: one radial node, one angular node; 3d: two angular nodes

The more nodes, the higher the energy of the orbital. The more the function varies spatially, the higher the energy.

Core and valence electrons

Core electrons are very tightly bound to the nucleus and spend most of their time very close to it. They are largely unaffected by the presence of nearby atoms. Valence electrons are less tightly bound to the nucleus and are in the outermost "shell". These electrons are easily affected by the presence of other atoms and are the ones that are critical for bonding between atoms. According to the Aufbau principle we start adding electrons to the 2s orbital and then to the three 2p orbitals, each of which can hold up to two electrons. The orbitals are filled from lowest energy to highest. Here are the electron configurations for row two of the periodic table. The 1s orbitals are core, while the 2s and 2p orbitals hold the valence electrons. Neon has a full octet so it is non-reactive, i.e. a noble gas.

element: configuration
Li: 1s2 2s1
Be: 1s2 2s2
B:  1s2 2s2 2p1
C:  1s2 2s2 2p2
N:  1s2 2s2 2p3
O:  1s2 2s2 2p4
F:  1s2 2s2 2p5
Ne: 1s2 2s2 2p6

Here are the general rules for filling orbitals; the entry in row n and subshell column gives the order in which that subshell is filled:

n    s   p   d   f   g
1    1
2    2   3
3    4   5   7
4    6   8   10  13
5    9   11  14  17  21
6    12  15  18  22  26
7    16  19  23  27  31
8    20  24  28  32  36

Bridging the Language Barrier

In interdisciplinary research it is easy to get confused by different terms for the same concepts as used by physicists and organic chemists.
This table might help:

Physicist Speak                                         | Organic Chemist Speak
Bands                                                   | Molecular Orbitals
Band Gap                                                | Excited State Energy
Excitons                                                | Excited States
High work function material                             | Electron deficient - acceptor
Low work function material                              | Electron rich - donor
!?&* stuff that messes up my vacuum chamber             | Organic compound or polymer
  (fill in your favorite apparatus)                     |
Wavelength division multiplexer                         | Device thingy
  (fill in your favorite device system)                 |
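The "product of two functions" argument earlier on this page (an exponentially falling wavefunction times a quadratically growing shell factor) is easy to make concrete. The following sketch, not part of the original wiki page, evaluates the hydrogen 1s radial probability density P(r) ∝ r² e^(−2r/a0) on a grid and locates its peak, which lands at the Bohr radius a0; the variable names are mine.

import numpy as np

a0 = 5.29177e-11  # Bohr radius in metres

# Radial probability density for hydrogen 1s: P(r) ~ r^2 * exp(-2 r / a0).
# The r^2 factor is the growing shell volume, the exponential is |psi|^2.
r = np.linspace(1e-13, 5 * a0, 100000)
P = r**2 * np.exp(-2 * r / a0)

r_peak = r[np.argmax(P)]
print(f"peak at r = {r_peak:.3e} m, Bohr radius a0 = {a0:.3e} m")  # peak ~ a0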
Partial Differential Equations

• Ali Arshad added an answer: How can we simulate Differential Algebraic Equations (DAEs) in MATLAB?
One of the methods for developing control systems for distributed parameter systems is to convert the system of PDEs into a DAE, consisting of independent ODEs related to one another by algebraic equations or constraints.
Ali Arshad · COMSATS Institute of Information Technology
Thanks @Daniel for the concern shown; I will have to dig deep into the problem to get the solution. I have worked with almost all the variable and fixed step, stiff and non-stiff ODE solvers but have not solved DAEs. My real task is to remodel a distributed system represented by a partial differential equation with DAEs, and then ultimately design a sliding mode controller for the system.

• Toufic El Arwadi added an answer: Can you suggest good references for learning chaos in partial differential equations?
While there are many papers and good books about chaos in ordinary differential equations, I would like to know if there are some good books and survey papers about chaos in partial differential equations. Your suggestions are highly appreciated. Best wishes
Toufic El Arwadi · Beirut Arab University
This is an example about chaos for PDE

• Olena O. Vaneeva added an answer: Is there any Maple package for computing the approximate symmetries and conservation laws for a given system of PDEs?
There is the GeM package for computing the symmetries and conservation laws for a given system of PDEs.
Olena O. Vaneeva · National Academy of Sciences of Ukraine
I would take a look at these packages: GeM and SADE.

• Mourad Ismail added an answer: Can anyone help me find a FEM code to solve nonlinear partial differential equations for fluid flow?
Finite element method code in MATLAB or in Mathematica to solve the Navier-Stokes equations for fluid flow
Mourad Ismail · University of Grenoble

• Chen Huyuan added an answer: How to prove the x_N-odd solution of -\Delta u + |u|^p u = f in B_1, where f is x_N-odd?
We know that the solution of -\Delta u = f in B_1 is x_N-odd when f is x_N-odd.
Chen Huyuan · NYU Shanghai, Shanghai, China
Thanks, I got it.

• Demetris Christopoulos added an answer: Why are most of the fundamental laws in physics second order differential equations?
If we look at the laws of Newton, Schrödinger, Einstein and others, we can observe that they are all second order differential equations, ordinary or partial. Why such a coincidence? Is this an indicator that our projection of reality is just a linear projection, or is there something deeper behind this universality of the second order?
Demetris Christopoulos · National and Kapodistrian University of Athens
Force can be substituted by curvature in general relativity; thus even if we do not use force, we use again a second order quantity.

• Troestler Christophe added an answer: Do you have solved examples of systems of nonlinear PDEs using the finite element method?
See above

• Bernardo Figueroa added an answer: Is it possible to derive boundary integral equations for inertial flows?
Boundary integral formulations are considered robust and efficient methods used to solve the linearized Navier-Stokes equation. The partial differential equations are transformed into integral equations by Green's identities, where the velocity field is represented as a combination of single- and double-layer hydrodynamic potentials. The BIM formulation could also be extended in order to solve for non-Newtonian fluids.
I would like to know whether similar methods can also be derived for the full NS equation, where the nonlinear inertial forces cannot be neglected compared to the linear viscous forces? Thank you.
Bernardo Figueroa · Universidad Nacional Autónoma de México
I am not sure; however, it is possible to solve problems strongly dominated by inertia using the Boundary Element Method. Here is a beautiful example: Ha-Ngoc, H. & Fabre, J. 2004 Test-case No. 29B: The velocity and shape of 2D long bubbles in inclined channels or in vertical tubes (PA, PN) Part II: In a flowing liquid. Multiphase Sci. Technol. 16, 191–206

• Giovanni Bratti added an answer: Solving coupled PDEs for a viscoelastic cantilever?
I don't know how to deal with the complex damping E*. I am trying to study the dynamics of a cantilever subjected to bending and torsion; the beam is made of a viscoelastic material. After setting up the equations of motion and using Galerkin's method to convert the equations to ODEs in matrix form, I have difficulty in the implementation of the viscoelastic property (what to do with E* & G*).
Giovanni Bratti · Federal University of Santa Catarina
There is no problem in using E* instead of E. So you will be using a structural damping model, and you can solve your problem using "direct analysis". You can see more details about it in the book: Introduction to Finite Element Vibration Analysis, Maurice Petyt, ISBN 052126607-6, chapter 9.

• Klaus Schittkowski added an answer: What is the best way of numerically solving a system of nonlinear PDEs?
I have a system of 6 nonlinear PDEs. These equations involve a time derivative and one spatial derivative. What would be the simplest way to get a time dependent solution of these equations?
Klaus Schittkowski · University of Bayreuth
Try EASY-FIT, which can be downloaded from

• Saeed Kazem added an answer: How do you numerically integrate time dependent exponentials?
In one of my problems I tried to numerically integrate the following function, F(t) = Exp(-0.5 * t). Can we use Simpson's rule to integrate it? Or are any other methods used to numerically integrate F(t)?
Saeed Kazem · Amirkabir University of Technology
If you want to use Simpson's rule to integrate, it's better to divide the domain of integration into m sub-domains and then apply Simpson's rule on each sub-domain. Therefore the order of convergence for this method is O(h^3/m^2). (A short numerical sketch of this appears at the end of this topic list, below.)

• Daniel Guan added an answer: Why is a system with PDEs infinite dimensional?
Partial differential equations (PDEs) contain at least two independent variables. Generally the system of PDEs is called infinite dimensional; what is the reason behind this argument?
Daniel Guan · University of California, Riverside
Let us say that you have two free variables x and t. Consider the initial value condition at t=0, where x can be arbitrary, at least locally. That is, you already know u(x, 0). Then you have an equation in t for each (fixed) x, if this is true just for an example. You could get a solution for each x. Therefore, generically speaking, the solution depends on the initial condition u(x, 0), which is a function of x. The solution space usually is only a kind of infinite dimensional variety (or manifold, if it is kind of smooth). It is only a kind of linear space if the PDE is linear (this answers your second question).

• Peyman Hessari added an answer: How is a weak solution of a partial differential equation useful in physics and engineering?
In the last few years I always thought as an engineer that the solution a physical system 'produces' is always smooth (differentiable to a certain degree). These solutions are so-called classical solutions. But now I have learned of weak solutions that can be found for partial differential equations. Those solutions don't have to be smooth at all; i.e. they have to be square integrable or their first derivative must be square integrable... So, if the weak solution is not differentiable it will not satisfy the original differential equation. Now, what is the use of the weak solutions that can be found? What is their physical meaning and how are they useful for finding classical solutions?
Peyman Hessari · Ulsan National Institute of Science and Technology
Weak solutions are easy to understand and implement; however, they are not always the solution of the original PDEs.

• Chinedu Nwaigwe added an answer: What refinement indicators currently exist for hyperbolic systems of PDEs, in particular the shallow water equations?
What refinement indicators currently exist for hyperbolic systems of PDEs, in particular the shallow water equations? Which is your favourite and why? Please also provide references.
Chinedu Nwaigwe · The University of Warwick
Thanks Agah. I already have the paper and other related works from the first author. I just want to know if there are other indicators in addition to weak local residual and numerical entropy production.

• Vladimir Rasvan added an answer: What's spectral stability?
What does it mean if a solution to a PDE is "spectrally stable"?
Vladimir Rasvan · University of Craiova
This problem has several issues. First, your solution should be considered in the setting containing both the PDE and its boundary conditions. If all this stuff allows you to define a semi-group of operators along the solutions, the stability condition can be expressed in the language of the infinitesimal generator's spectrum. This spectrum must lie in some half plane of C - the complex plane - defined by e.g. Re(z) less than some strictly negative real number. But the spectrum is not limited to eigenvalues; usually it also contains the continuous spectrum. Therefore - be careful!

• Yakov Krasnov added an answer: Can anyone help with a heat equation, where the boundary-value solution converges to the heat equation on the whole real line?
Dear All, I have a, probably naive, question on a simple PDE: For a one-dimensional heat equation with x in [-L, L] without external heat, the boundary condition is u(t,-L)=0, u(t,L)=0 and the initial condition is u(0,x) = 1 for x in [-0.5L, 0.5L]. Can we say that for any fixed T>0 the solution u(T, x) of the above heat equation converges to u1(T,x) as the domain [-L, L] tends to (-infinity, infinity), where u1(T,x) is the solution of a heat equation with the same initial value but on the whole real line? If the answer is yes, could you please provide some references for it? Many thanks
Yakov Krasnov · Bar Ilan University
Here is an answer

• Apostol Faliagas added an answer: How to solve a non-linear differential equation using the finite element method?
I want to use the Galerkin method to solve a nonlinear fourth order partial differential equation. The equation has 2 independent variables and is time dependent. I know how to handle high order linear partial differential equations but have no idea of how to handle non-linear ones.
I want to know how to form the element matrix for a non-linear differential equation using the Galerkin method. Are there some specific books or references that talk about it?
Apostol Faliagas · Athens State University
The technique that is usually used to solve this kind of equation is linearization (so that the standard finite element (FE) methods can be applied) in conjunction with a Newton-Raphson iteration. See for example how FE are used in FreeFem++ (manual, example 3.10 "Newton Method for the Steady Navier-Stokes equations") for the solution of the (non-linear) steady state Navier-Stokes equations. Another readily applicable source of examples is

• Toka Diagana asked a question: Evolution family associated with the algebraic sum A(t) + B(t) - any thoughts?
Let A(t) and B(t) be unbounded linear operators on a Banach space X. Suppose the algebraic sum, A(t) + B(t), makes sense (nontrivial) and that A(t) and B(t) have evolution families associated with them, which we denote by U(t,s) and V(t,s). Under what conditions does A(t) + B(t) have an evolution family W(t,s)? In that event, what are the connections between the evolution families U(t,s), V(t,s), and W(t,s)?

• Amad Baiuk added an answer: How to solve the set of differential equations if the stiffness matrix is singular?
My set of partial differential equations has a singular stiffness matrix. I use Matlab to solve this system. I found something used to find the inverse of a stiffness matrix if it's singular, known as pinv(A). However, when I check the result, unfortunately it is not accurate. So, if anyone has a solution for my case, I would appreciate it.
Amad Baiuk · Deakin University
Thanks guys for the interaction.

• Mohamed Khebbab added an answer: How can I implement numerical homogenization?
I am looking for articles or papers about the implementation of numerical homogenization (two-scale convergence). Thank you for your help

• Dan E Kelley added an answer: Does anybody work with the Gerris Flow Solver?
To communicate about an important difficult issue related to Gerris
Dan E Kelley · Dalhousie University
Yes, all sorts of people do. If you state your actual problem you may get some help.

• Behnam Farid added an answer: Which method (numerical or analytical) can I use to solve for eigenvectors and eigenvalues of a coupled partial differential equation?
a Uxx + b Uyy + c Uyx + d Ux + e Uy + f V = E1 U
g Vxx + h Vyy + k Vyx + m Vx + n Vy + q U^2 = E2 V.
All coefficients depend on the variables x, y. On the numerical side, I am exploring finite difference methods and Runge-Kutta methods. But they seem not to give convincing results. I cannot ensure orthogonalisation of eigenvectors, etc. I need both eigenvectors and eigenvalues to compute physical quantities, like conductivities, etc... Can I also generalize the method for more than just two coupled systems? Thank you.
The Bessel function in the Fourier-Bessel expansion is the Bessel function of the first kind. This has to do with the analytic property expected of the solution and the analytic property of the Bessel function of the first kind. Similarly as regards the Legendre function that one encounters in the solution of the Schrödinger equation for hydrogen; the choice of Pl(x) (the Legendre function of the first kind), to be contrasted with Ql(x) (the Legendre function of the second kind), is related to the analytic property of Pl(x); the function Ql(x) is logarithmically singular at x = +/- 1. For details, consult any textbook on quantum mechanics.
See also the book Handbook of Mathematical Functions, edited by Abramowitz and Stegun (Dover Publications).

• Sylvanus Kupongoh Samaila asked a question: Can anyone show me how p-Laplacian and p(x)-Laplacian equations arise in electrorheological fluids, their origin and developments?
If the domain U is not smooth, with a rough boundary, can I still use the Lebesgue-Sobolev space with a variable exponent?

• Victor F Petrenko added an answer: How do you solve the two-dimensional eigenvalue problem in polar coordinates with homogeneous boundary conditions of the 3rd kind?
The boundary conditions at the outer radius of a disk are of the 3rd kind with spatially dependent coefficients
Victor F Petrenko · Dartmouth College
Use COMSOL 4.4 software. It's very easy to learn.

• Amaechi J. Anyaegbunam added an answer: How can I solve this type of DE: u'(t)^2 = 4(u(t) - c)?
In the process of minimizing a functional, after setting the Hamiltonian equation equal to a constant c, I arrive at (x(t)^2)(1 - x'(t))^2 = c, and I substitute u(t) = x(t)^2 and u' = 2xx' and obtain u'(t)^2 = 4(u(t) - c). I don't know the way of solving this.
Amaechi J. Anyaegbunam · University of Nigeria
Alireza Ahmadi wanted to solve the 1st order nonlinear ODE
x^2(1 - x')^2 = c --------------------- (1)
and proposed the transformation u = x^2, u' = 2xx'. When this transformation is applied the correct result is
u' = 2[sqrt(u) +/- sqrt(c)] -------------- (2)
The transformed ODE
(u')^2 = 4[u - c] ----------------- (3)
given by Ahmadi is wrong. This has been pointed out in previous posts. Hence, it is incorrect to solve Eq. (3) as a prelude to solving Eq. (1). As was also pointed out previously, no transformation is needed before solving Eq. (1).

• Bankim Chandra Mandal added an answer: How to solve a coupled 2nd order time-dependent PDE?
I want to solve analytically a coupled 2nd order space-time problem, originating from an optimal control problem. One of the problems is forward, the other is backward in time. For example,
(i) $y_t - y_{xx} = u, \quad y(x,0)=0, \; y(0,t)=0, \; y(1,t)=g(t)$
(ii) $-p_t - p_{xx} = y, \quad p(x,T)=0, \; p(0,t)=0, \; p(1,t)=h(t)$
with the coupling condition $p(x,t) + c\,u(x,t) = 0$ in $(0,1)\times(0,T)$. I have tried separation of variables, but it is getting complicated; any suggestions?
Bankim Chandra Mandal · University of Geneva
Actually not; I wanted to get an analytical solution for this system. I checked that people have worked on finding numerical solutions, but I found very little about any kind of closed form analytical solution. Prof. Krzysztof Z. Sokalski has given a good technique to tackle these problems. Thank you for your consideration of this Q&A.

• B. Vasu added an answer: What are the advantages of numerical methods over analytical methods?
We use several numerical methods. Why do we use them and are they really applicable?
B. Vasu · Motilal Nehru National Institute of Technology
A major advantage of a numerical method is that a numerical solution can be obtained for problems where an analytical solution does not exist. An additional advantage is that a numerical method only uses evaluation of standard functions and the operations: addition, subtraction, multiplication and division. Because these are just the operations a computer can perform, numerical mathematics and computers form a perfect combination.

• Mehran Parsaei added an answer: What is the difference between essential boundary conditions and natural boundary conditions?
What is the difference between essential boundary conditions and natural boundary conditions, and what is the difference between primary variables and secondary variables?
Mehran Parsaei · University of Yazd
From a mathematical point of view, two general types of boundary conditions may be considered: Dirichlet or essential B.C.'s versus Neumann or natural or free B.C.'s. The first refers to the primitive variable of the problem, while the second refers to the so-called secondary variables. These two types of B.C.'s are treated equivalently in the exact solution of a boundary value problem. The basic difference emerges in numerical FEM solutions: Dirichlet B.C.'s have to be specified explicitly, whereas Neumann (natural or free) conditions are dealt with implicitly as part of the formulation. For more information you can refer to: Spectral/hp Element Methods for CFD, Karniadakis and Sherwin, Ch. 2
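To make the "explicit versus implicit" distinction in the last answer concrete, here is a minimal finite-difference sketch of my own (not from the thread) for -u'' = 1 on [0,1]: the essential condition u(0) = 0 is imposed directly on the unknowns, while the natural condition u'(1) = 0 is absorbed into the last discrete equation via a ghost point.

import numpy as np

n = 50                      # interior resolution
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Unknowns u_1..u_n; u_0 is fixed explicitly by the essential condition u(0)=0.
A = np.zeros((n, n))
b = np.ones(n) * h**2       # -u'' = 1  ->  -u_{i-1} + 2 u_i - u_{i+1} = h^2

for i in range(n - 1):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    A[i, i + 1] = -1.0

# Natural condition u'(1) = 0 enters the last equation implicitly
# (ghost-point treatment): u_n - u_{n-1} = h^2 / 2.
A[n - 1, n - 1] = 1.0
A[n - 1, n - 2] = -1.0
b[n - 1] = h**2 / 2.0

u = np.concatenate(([0.0], np.linalg.solve(A, b)))
# Exact solution of -u''=1, u(0)=0, u'(1)=0 is u = x - x^2/2; the scheme
# reproduces this quadratic almost exactly, so the printed error is tiny.
print(np.max(np.abs(u - (x - x**2 / 2))))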
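And, as flagged in Saeed Kazem's answer earlier in this list, composite Simpson integration of F(t) = exp(-0.5 t) can be checked in a few lines. This sketch is mine; the interval [0, 10] is chosen arbitrarily for illustration.

import math

def composite_simpson(f, a, b, m):
    # Split [a, b] into m sub-domains and apply Simpson's rule on each.
    total = 0.0
    edges = [a + (b - a) * k / m for k in range(m + 1)]
    for left, right in zip(edges[:-1], edges[1:]):
        h = (right - left) / 2.0
        total += h / 3.0 * (f(left) + 4.0 * f(left + h) + f(right))
    return total

F = lambda t: math.exp(-0.5 * t)
approx = composite_simpson(F, 0.0, 10.0, m=20)
exact = 2.0 * (1.0 - math.exp(-5.0))   # integral of exp(-0.5 t) over [0, 10]
print(approx, exact, abs(approx - exact))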
DSP-Based Testing of Analog and Mixed-Signal Circuits
Format: Paperback
Language: English
Pages: 272
Publisher: Wiley-IEEE Computer Society Pr; 1 edition (April 27, 1987)
ISBN: 0818607858

The Spherical Wave Structure of Matter, particularly the behaviour of the In and Out Waves, is able to resolve this puzzle so that the appearance of instant communication is understood and yet neither Albert Einstein nor QM need be wrong. Consequently, particles behave like fermions or like bosons only if they are totally identical. None of the properties of a wave are changed by reflection. The agreement of observed frequencies and Schrödinger's wave equations further established the fundamental importance of quantum theory and thus the wave properties of both light and matter.

To begin, a few nice quotes on quantum physics. In the case of the aqueous solvent system described in the experimental example above, the vibrational oscillations of the solvent water molecules were excited. The black disk is the object; the black vertical and horizontal lines mark the x- and y-positions of the object respectively. Check that the horizontal line, the one lined up like an x-axis, actually marks the y-position, or y-coordinate, of the object.

Therefore, in one second, the wave moves on by fλ metres. Therefore, c = fλ, where c is the speed of light. A wavefront is a line or surface, in the path of a wave motion, where all the displacements at any point have the same phase.

This is called the principle of superposition. When waves algebraically add to make a bigger wave, we call this constructive interference. The two waves on top will pass each other, creating a bigger wave as seen in the middle, and then continue on as the original waves as seen in the bottom.

This strategy relies purely on classical physics, not quantum physics. But Lidar says the D-Wave is "consistent" with quantum annealing. This is similar to simulated annealing — except you can, in essence, go through the hills rather than over them. "You can take advantage of a quantum phenomenon called tunneling," Lidar says. "It's like a quantum shortcut." He's careful to say that he and his team have not proven that the D-Wave uses quantum annealing, but the system certainly appears to use it.

The fact that we can represent nature this way mathematically gives the wave function special meaning to us as creatures who are interested in predicting future events. A nice analogy can be made between the power of the wave function and the power early astronomers had in predicting the seasons. Early civilization was extremely concerned with the growing season.
The fact that one could watch the stars and make precise enough measurements so as to predict when the next growing season occurred was extremely important.

Wind blowing through trees can also create sound in this indirect way. Sound can also be created by vibrating an object in a liquid such as water or in a solid such as iron. A train rolling on a steel railroad track will create a sound wave that travels through the tracks.

As sound travels through air, the wave creates zones of high pressure and low pressure as it moves along. The sound wave eventually makes its way to our heads where the pressure differentials vibrate our eardrums. These vibrations are interpreted by our brains as sound.

The physics help and lessons provided are written for physics students at the high school and introductory college level. Most of the physics lessons are designed to be projected to a class and can be used by a teacher to demonstrate many physics concepts.

So what's the probabilistic distribution of a wave function? What's extremely weird is that this probabilistic distribution depends on what's being measured. Then, the wave function will collapse into a localized wave function. The position of this localized wave function follows a probabilistic distribution whose density is the square of the norm of the wave function before it collapses.

In this way we find that $\rho_0$ cancels out and that we are left with
\begin{equation} \label{Eq:I:47:13} \frac{\partial^2\chi}{\partial t^2} = \kappa\,\frac{\partial^2\chi}{\partial x^2}. \end{equation}
We shall call $c_s^2 = \kappa$, so that we can write
\begin{equation} \label{Eq:I:47:14} \frac{\partial^2\chi}{\partial x^2} = \frac{1}{c_s^2}\,\frac{\partial^2\chi}{\partial t^2}. \end{equation}
This is the wave equation which describes the behavior of sound in matter.

[Figure: The red dots represent the wave nodes.]

A standing wave, also known as a stationary wave, is a wave that remains in a constant position. This phenomenon can occur because the medium is moving in the opposite direction to the wave, or it can arise in a stationary medium as a result of interference between two waves traveling in opposite directions. The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave.

Now remember we're trying to find energy eigenstates, and that is to find wave functions, time independent wave functions, that solve the time independent Schrödinger equation.

Continuity of the voltage across the capacitor. Know the symbolic representation of a capacitor. Brief description of a coil symbol.
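Returning to the standing-wave paragraph above: the claim that two equal counter-propagating waves sum to a standing wave follows from the identity sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt), and is easy to verify numerically. The short check below is my own illustration, not part of the scraped text; k, w and the sampled instants are arbitrary.

import numpy as np

k, w = 2.0, 3.0                     # arbitrary wavenumber and angular frequency
x = np.linspace(0.0, 2.0 * np.pi, 400)

for t in (0.0, 0.3, 1.1):           # a few arbitrary instants
    travelling_sum = np.sin(k * x - w * t) + np.sin(k * x + w * t)
    standing = 2.0 * np.sin(k * x) * np.cos(w * t)
    assert np.allclose(travelling_sum, standing)

# Nodes sit where sin(kx) = 0, i.e. x = n*pi/k, and never move.
print("counter-propagating sum equals 2 sin(kx) cos(wt) at all sampled times")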
The wave can then be seen as a colored path in the complex plane.

This vanishing of the probability for the superposed state [half-dead/half-alive] is known as "decoherence"..... Decoherence inevitably happens in a large system built of quantum components: its individual quantum states rattle around at random, disposing of all the strange quantum superpositions that depend on almost impossibly precise coherence between all the constituent quantum states..... [decoherence] is a property of large systems in general, not of some specific "act of measurement" that has to be distinguished in some mysterious way from other straightforward physical processes.

Therefore the uncertainty in the measurement of the position of a sand particle is too small to matter. Therefore the uncertainty principle is not effective in daily life.

Uncertainty principle in the microscopic world (subatomic world): Matter consists of atoms and molecules, and the atom comprises electrons. The mass of the electron is me = 9.1 × 10^-31 kg. The size of the electron's region is about 1 Å = 10^-10 m. From the uncertainty principle, the uncertainty in the velocity of the electron = 0.74 × 10^7 m/s = 7.4 × 10^6 m/s, which is observable.

Light from the big bang has been turned into microwaves by its passage across space. These microwaves were discovered in 1964 and are known as the cosmic microwave background radiation. Bicep2 was designed to measure their polarisation. Rumours began on Friday that the detection of primordial gravitational waves would be announced.

Psi prime could look like that, could have a corner. Because if V has finite jumps, if psi double prime has finite jumps, and if psi prime is not continuous, it would have delta functions. So for these two conditions, continuous or even finite jumps, psi prime is still continuous.

For me personally, familiarizing myself with Quantum Physics played a MAJOR role in enabling me to better understand how and why various "growth lessons" that I had experienced earlier in life really happened, which resulted in developing a much deeper and stronger "Belief" or "Faith" in my own personal ability to begin consciously creating the events, conditions and circumstances that I desired.

Here it is: Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space. Wave functions evolve in time according to the Schrödinger equation.
Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about "measurement" and "wave function collapse" in the conventional way of thinking about quantum mechanics isn't something we need to add on; it comes out automatically from the formalism.

On the other hand, you can keep cutting the height of a wave in half, and it keeps on being a wave. Waves can add constructively, or destructively. But particles always add constructively. 5 M&Ms plus 3 M&Ms is always 8 M&Ms: never 2. Because of the difference in the way they add, they act very differently in a double-slit experiment.

While explaining the photoelectric effect, Einstein proposed that electromagnetic radiation, a wave, can also behave as a particle (photon).

They behave in a way that is like nothing that you have ever seen before. ... Our imagination is stretched to the utmost, not, as in fiction, to imagine things which are not really there, but just to comprehend those things which are there...

The American physicist Albert Michelson invented the optical interferometer illustrated in figure 1.13. The incoming beam is split into two beams by the half-silvered mirror.
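The electron velocity-uncertainty figure quoted a little earlier on this page (about 7 × 10^6 m/s for an electron confined to roughly 1 Å) can be reproduced with the crude estimate Δv ≈ h/(m·Δx). This back-of-envelope check is my own, not part of the scraped text:

h = 6.626e-34        # Planck constant, J*s
m_e = 9.1e-31        # electron mass, kg
dx = 1.0e-10         # confinement scale ~ 1 angstrom, m

# Crude Heisenberg estimate: dp ~ h / dx, so dv ~ h / (m * dx).
dv = h / (m_e * dx)
print(f"dv ~ {dv:.2e} m/s")   # ~ 7.3e6 m/s, matching the quoted 7.4e6 m/s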
Interpretations of quantum mechanics

From Wikipedia, the free encyclopedia

An interpretation of quantum mechanics is a set of statements which attempt to explain how quantum mechanics informs our understanding of nature. Although quantum mechanics has held up to rigorous and thorough experimental testing, many of these experiments are open to different interpretations. There exist a number of contending schools of thought, differing over whether quantum mechanics can be understood to be deterministic, which elements of quantum mechanics can be considered "real", and other matters. This question is of special interest to philosophers of physics, as physicists continue to show a strong interest in the subject. They usually consider an interpretation of quantum mechanics as an interpretation of the mathematical formalism of quantum mechanics, specifying the physical meaning of the mathematical entities of the theory.

History of interpretations

[Image: Main quantum mechanics interpreters.]

The definition of quantum theorists' terms, such as wave functions and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across the field, whereas Max Born reinterpreted it as the electron's probability density distributed across the field. Although the Copenhagen interpretation was originally most popular, quantum decoherence has gained popularity. Thus the many-worlds interpretation has been gaining acceptance.[1][2] Moreover, the strictly formalist position, shunning interpretation, has been challenged by proposals for falsifiable experiments that might one day distinguish among interpretations, as by measuring an AI consciousness[3] or via quantum computing.[4]

As a rough guide to the development of the mainstream view during the 1990s to 2000s, consider the "snapshot" of opinions collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011.[5] The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), besides the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the Quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll."

Nature of interpretation

More or less, all interpretations of quantum mechanics share two qualities:

1. They interpret a formalism—a set of equations and principles to generate predictions via input of initial conditions
2. They interpret a phenomenology—a set of observations, including those obtained by empirical research and those obtained informally, such as humans' experience of an unequivocal world

Two qualities vary among interpretations:

1. Ontology—claims about what things, such as categories and entities, exist in the world
2. Epistemology—claims about the possibility, scope, and means toward relevant knowledge of the world

In philosophy of science, the distinction of knowledge versus reality is termed epistemic versus ontic.
A general law is a regularity of outcomes (epistemic), whereas a causal mechanism may regulate the outcomes (ontic). A phenomenon can receive interpretation either ontic or epistemic. For instance, indeterminism may be attributed to limitations of human observation and perception (epistemic), or may be explained as a real existing "maybe" encoded in the universe (ontic). Confusing the epistemic with the ontic, as if one were to presume that a general law actually "governs" outcomes and that the statement of a regularity has the role of a causal mechanism, is a category mistake.

In a broad sense, scientific theory can be viewed as offering scientific realism—approximately true description or explanation of the natural world—or might be perceived with antirealism. A realist stance seeks the epistemic and the ontic, whereas an antirealist stance seeks the epistemic but not the ontic. In the 20th century's first half, antirealism was mainly logical positivism, which sought to exclude unobservable aspects of reality from scientific theory. Since the 1950s, antirealism is more modest, usually instrumentalism, permitting talk of unobservable aspects, but ultimately discarding the very question of realism and posing scientific theory as a tool to help humans make predictions, not to attain metaphysical understanding of the world. The instrumentalist view is carried by the famous quote of David Mermin, "Shut up and calculate", often misattributed to Richard Feynman.[6]

Other approaches to resolve conceptual problems introduce new mathematical formalism, and so propose alternative theories with their interpretations. An example is Bohmian mechanics, whose empirical equivalence with the three standard formalisms—Schrödinger's wave mechanics, Heisenberg's matrix mechanics, and Feynman's path integral formalism, all empirically equivalent—is doubtful.[citation needed]

Challenges to interpretation

Difficulties reflect a number of points about quantum mechanics:

1. The abstract, mathematical nature of quantum field theories
2. The existence of apparently indeterministic and yet irreversible processes
3. The role of the observer in determining outcomes
4. Classically unexpected correlations between remote objects
5. The complementarity of proffered descriptions
6. Rapidly rising intricacy, far exceeding humans' present calculational capacity, as a system's size increases
7. Lack of interest in this subject by Dirac and other notables (including Feynman)

The mathematical structure of quantum mechanics is based on rather abstract mathematics, like Hilbert space. In classical field theory, a physical property at a given location in the field is readily derived. In Heisenberg's formalism, on the other hand, to derive physical information about a location in the field, one must apply a quantum operation to a quantum state, an elaborate mathematical process.[7] Schrödinger's formalism describes a waveform governing probability of outcomes across a field. Yet how do we find, at a specific location, a particle whose wavefunction, a mere probability distribution of existence, spans a vast region of space?

The act of measurement can interact with the system state in peculiar ways, as found in double-slit experiments. The Copenhagen interpretation holds that the myriad probabilities across a quantum field are unreal, yet that the act of observation/measurement collapses the wavefunction and sets a single possibility to become real.
Yet quantum decoherence grants that all the possibilities can be real, and that the act of observation/measurement sets up new subsystems.[8]

Quantum entanglement, as illustrated in the EPR paradox, seemingly violates principles of local causality.[9]

Complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. Still, complementarity does not usually imply that classical logic is at fault (although Hilary Putnam took such a view in "Is Logic Empirical?"); rather, the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). As is now well known, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects (Omnès 1999).

Since the intricacy of a quantum system is exponential, it is difficult to derive classical approximations.

Instrumentalist interpretation

Any modern scientific theory requires at the very least an instrumentalist description that relates the mathematical formalism to experimental practice and prediction. In the case of quantum mechanics, the most common instrumentalist description is an assertion of statistical regularity between state preparation processes and measurement processes. That is, if a measurement of a real-valued quantity is performed many times, each time starting with the same initial conditions, the outcome is a well-defined probability distribution over the real numbers; moreover, quantum mechanics provides a computational instrument to determine statistical properties of this distribution, such as its expectation value.

Calculations for measurements performed on a system S postulate a Hilbert space H over the complex numbers. When the system S is prepared in a pure state, it is associated with a vector in H. Measurable quantities are associated with Hermitian operators acting on H: these are referred to as observables. Repeated measurement of an observable A where S is prepared in state ψ yields a distribution of values. The expectation value of this distribution is given by the expression

$\langle A \rangle_\psi = \langle \psi | A | \psi \rangle.$
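As a concrete illustration of this recipe (a numerical sketch of my own, not part of the encyclopedia text), the expectation value ⟨ψ|A|ψ⟩ for a qubit observable, and the projector-based probability discussed next, can be computed directly:

import numpy as np

# Pauli-Z as the observable A (Hermitian), with eigenvalues +1 and -1.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]], dtype=complex)

# A normalized pure state |psi> = (|0> + |1>) / sqrt(2).
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)

# Expectation value <psi| A |psi>; real for a Hermitian operator.
exp_val = np.vdot(psi, A @ psi).real
print(exp_val)  # 0.0: the +1 and -1 outcomes are equally likely

# Born-rule probability of finding the system in |0>, via the rank-1
# projector |0><0|: <psi| P |psi> = |<0|psi>|^2.
P0 = np.array([[1.0, 0.0],
               [0.0, 0.0]], dtype=complex)
print(np.vdot(psi, P0 @ psi).real)  # 0.5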
To that end we will regard an interpretation as a correspondence between the elements of the mathematical formalism M and the elements of an interpreting structure I, where: • The mathematical formalism M consists of the Hilbert space machinery of ket-vectors, self-adjoint operators acting on the space of ket-vectors, unitary time dependence of the ket-vectors, and measurement operations. In this context a measurement operation is a transformation which turns a ket-vector into a probability distribution (for a formalization of this concept see quantum operations). • The interpreting structure I includes states, transitions between states, measurement operations, and possibly information about spatial extension of these elements. A measurement operation refers to an operation which returns a value and might result in a system state change. Spatial information would be exhibited by states represented as functions on configuration space. The transitions may be non-deterministic or probabilistic or there may be infinitely many states. The crucial aspect of an interpretation is whether the elements of I are regarded as physically real. Hence the bare instrumentalist view of quantum mechanics outlined in the previous section is not an interpretation at all, for it makes no claims about elements of physical reality. The current usage of realism and completeness originated in the 1935 paper in which Einstein and others proposed the EPR paradox.[10] In that paper the authors proposed the concepts element of reality and the completeness of a physical theory. They characterised element of reality as a quantity whose value can be predicted with certainty before measuring or otherwise disturbing it, and defined a complete physical theory as one in which every element of physical reality is accounted for by the theory. In a semantic view of interpretation, an interpretation is complete if every element of the interpreting structure is present in the mathematics. Realism is also a property of each of the elements of the maths; an element is real if it corresponds to something in the interpreting structure. For example, in some interpretations of quantum mechanics (such as the many-worlds interpretation) the ket vector associated to the system state is said to correspond to an element of physical reality, while in other interpretations it is not. Determinism is a property characterizing state changes due to the passage of time, namely that the state at a future instant is a function of the state in the present (see time evolution). It may not always be clear whether a particular interpretation is deterministic or not, as there may not be a clear choice of a time parameter. Moreover, a given theory may have two interpretations, one of which is deterministic and the other not. Local realism has two aspects: • The value returned by a measurement corresponds to the value of some function in the state space. In other words, that value is an element of reality; • The effects of measurement have a propagation speed not exceeding some universal limit (e.g. the speed of light). In order for this to make sense, measurement operations in the interpreting structure must be localized. A precise formulation of local realism in terms of a local hidden variable theory was proposed by John Bell. 
Bell's theorem, combined with experimental testing, restricts the kinds of properties a quantum theory can have, the primary implication being that quantum mechanics cannot satisfy both the principle of locality and counterfactual definiteness.

The Copenhagen interpretation

The Copenhagen interpretation is the "standard" interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927. Bohr and Heisenberg extended the probabilistic interpretation of the wavefunction proposed originally by Max Born. The Copenhagen interpretation rejects questions like "where was the particle before I measured its position?" as meaningless. The measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function, in a manner consistent with the well-defined probabilities that are assigned to each possible state. According to the interpretation, the interaction of an observer or apparatus that is external to the quantum system is the cause of wave function collapse; thus, according to Paul Davies, "reality is in the observations, not in the electron".[11] What collapses in this interpretation is the knowledge of the observer and not an "objective" wavefunction.

Many worlds

The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment producing entanglement, repeatedly "splitting" the universe into mutually unobservable alternate histories, effectively distinct universes within a greater multiverse. In this interpretation the wavefunction has objective reality.

Consistent histories

The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle).

Ensemble interpretation, or statistical interpretation

The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles.
Probably the most notable supporter of such an interpretation was Einstein:

The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems. — Einstein in Albert Einstein: Philosopher-Scientist, ed. P.A. Schilpp (Harper & Row, New York)

The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University and author of the graduate-level textbook Quantum Mechanics, A Modern Development. An experiment illustrating the ensemble interpretation is provided in Akira Tonomura's Video clip 1.[12] It is evident from this double-slit experiment with an ensemble of individual electrons that, since the quantum mechanical wave function (absolute-squared) describes the completed interference pattern, it must describe an ensemble. A new version of the ensemble interpretation that relies on a reformulation of probability theory was introduced by Raed Shaiia.[13][14]

De Broglie–Bohm theory

The de Broglie–Bohm theory of quantum mechanics is a theory proposed by Louis de Broglie and later extended by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single space-time, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden-variable theory, and by embracing non-locality it evades the constraints of Bell's theorem. The measurement problem is resolved, since the particles have definite positions at all times.[15] Collapse is explained as phenomenological.[16]

Relational quantum mechanics

The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but with the relations between them.[17][18] An independent relational approach to quantum mechanics was developed in analogy with David Bohm's elucidation of special relativity,[19] in which a detection event is regarded as establishing a relationship between the quantized field and the detector.
The inherent ambiguity associated with applying Heisenberg's uncertainty principle is subsequently avoided.[20]

Transactional interpretation

The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory.[21] It describes a quantum interaction in terms of a standing wave formed by the sum of a retarded (forward-in-time) and an advanced (backward-in-time) wave. The author argues that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes.

Stochastic mechanics

An entirely classical derivation and interpretation of Schrödinger's wave equation by analogy with Brownian motion was suggested by Princeton University professor Edward Nelson in 1966.[22] Similar considerations had previously been published, for example by R. Fürth (1933), I. Fényes (1952), and Walter Weizel (1953), and are referenced in Nelson's paper. More recent work on the stochastic interpretation has been done by M. Pavon.[23] An alternative stochastic interpretation was developed by Roumen Tsekov.[24]

Objective collapse theories

Objective collapse theories differ from the Copenhagen interpretation in regarding both the wavefunction and the process of collapse as ontologically objective. In objective theories, collapse occurs randomly ("spontaneous localization"), or when some physical threshold is reached, with observers having no special role. Thus, they are realistic, indeterministic, no-hidden-variables theories. The mechanism of collapse is not specified by standard quantum mechanics, which needs to be extended if this approach is correct, meaning that objective collapse is more of a theory than an interpretation. Examples include the Ghirardi–Rimini–Weber theory[25] and the Penrose interpretation.[26]

von Neumann/Wigner interpretation: consciousness causes the collapse

In his treatise The Mathematical Foundations of Quantum Mechanics, John von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the Schrödinger equation (the universal wave function). He also described how measurement could cause a collapse of the wave function.[27] This point of view was prominently expanded on by Eugene Wigner, who argued that human experimenter consciousness (or perhaps even a dog's) was critical for the collapse, but he later abandoned this interpretation.[28][29] Variations of the von Neumann interpretation include:
• Subjective reduction research: This principle, that consciousness causes the collapse, is the point of intersection between quantum mechanics and the mind/body problem; researchers are working to detect conscious events correlated with physical events that, according to quantum theory, should involve a wave function collapse, but thus far results are inconclusive.[30][31]
• Participatory anthropic principle (PAP): John Archibald Wheeler's participatory anthropic principle says that consciousness plays some role in bringing the universe into existence.[32]
Other physicists have elaborated their own variations of the von Neumann interpretation, including:
• Henry P.
Stapp (Mindful Universe: Quantum Mechanics and the Participating Observer)
• Bruce Rosenblum and Fred Kuttner (Quantum Enigma: Physics Encounters Consciousness)
• Amit Goswami (The Self-Aware Universe)

Many minds

Quantum logic

Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics.

Quantum information theories

Quantum informational approaches[33] have attracted growing support.[34][35] They subdivide into two kinds:[36]
• Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism.[37]
• Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking.[38] Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism.

The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system.[39]

Modal interpretations of quantum theory

Modal interpretations of quantum mechanics were first conceived of in 1972 by B. van Fraassen, in his paper "A formal approach to the philosophy of science." However, this term is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions:[40]
• The Copenhagen variant
• Kochen-Dieks-Healey interpretations
• Motivating early modal interpretations, based on the work of R. Clifton, M. Dickson and J. Bub.

Time-symmetric theories

Several theories have been proposed which modify the equations of quantum mechanics to be symmetric with respect to time reversal[41][42][43][44][45][46] (e.g., the Wheeler–Feynman time-symmetric theory). This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future.
In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, highlights how well the two-state vector formalism dovetails with Hugh Everett's many-worlds interpretation.[47]

Branching space-time theories

BST theories resemble the many-worlds interpretation; however, "the main difference is that the BST interpretation takes the branching of history to be a feature of the topology of the set of events with their causal relationships... rather than a consequence of the separate evolution of different components of a state vector."[48] In MWI, it is the wave function that branches, whereas in BST, the space-time topology itself branches. BST has applications to Bell's theorem, quantum computation and quantum gravity. It also has some resemblance to hidden-variable theories and the ensemble interpretation: particles in BST have multiple well-defined trajectories at the microscopic level. These can only be treated stochastically at a coarse-grained level, in line with the ensemble interpretation.[48]

Other interpretations

As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed which have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism.

Comparison of interpretations

The most common interpretations are summarized below. The values shown are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality; difficulties arise only when one attempts to "interpret" the theory. Nevertheless, designing experiments which would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued about by many people. Each interpretation is listed below with its author(s) and its answers, in order, to the questions: Deterministic? Wavefunction real? Unique history? Hidden variables? Collapsing wavefunctions? Observer role? Local? Counterfactually definite? Universal wavefunction exists?
• Ensemble interpretation (Max Born, 1926): Agnostic / No / Yes / Agnostic / No / No / No / No / No
• Copenhagen interpretation (Niels Bohr, Werner Heisenberg, 1927): No / No (1) / Yes / No / Yes (2) / Causal / No / No / No
• de Broglie–Bohm theory (Louis de Broglie, 1927; David Bohm, 1952): Yes / Yes (3) / Yes (4) / Yes / No / No / No (17) / Yes / Yes
• von Neumann interpretation (John von Neumann, 1932; John Archibald Wheeler; Eugene Wigner): No / Yes / Yes / No / Yes / Causal / No / No / Yes
• Quantum logic (Garrett Birkhoff, 1936): Agnostic / Agnostic / Yes (5) / No / No / Interpretational (6) / Agnostic / No / No
• Many-worlds interpretation (Hugh Everett, 1957): Yes / Yes / No / No / No / No / Yes / Ill-posed / Yes
• Time-symmetric theories (Satosi Watanabe, 1955): Yes / Yes / Yes / Yes / No / No / Yes / No / Yes
• Stochastic interpretation (Edward Nelson, 1966): No / No / Yes / Yes (16) / No / No / No / Yes (16) / No
• Many-minds interpretation (H. Dieter Zeh, 1970): Yes / Yes / No / No / No / Interpretational (7) / Yes / Ill-posed / Yes
• Consistent histories (Robert B. Griffiths, 1984): No / No / No / No / No / No / Yes / No / Yes
• Objective collapse theories (Ghirardi–Rimini–Weber, 1986; Penrose interpretation, 1989): No / Yes / Yes / No / Yes / No / No / No / No
• Transactional interpretation (John G. Cramer, 1986): No / Yes / Yes / No / Yes (9) / No / No (14) / Yes / No
• Relational interpretation (Carlo Rovelli, 1994): Agnostic / No / Agnostic (10) / No / Yes (11) / Intrinsic (12) / No (18) / No / No

• 1 According to Bohr, the concept of a physical state independent of the conditions of its experimental observation does not have a well-defined meaning. According to Heisenberg the wavefunction represents a probability, but not an objective reality itself in space and time.
• 2 According to the Copenhagen interpretation, the wavefunction collapses when a measurement is performed.
• 3 Both particle AND guiding wavefunction are real.
• 4 Unique particle history, but multiple wave histories.
• 5 But quantum logic is more limited in applicability than Coherent Histories.
• 6 Quantum mechanics is regarded as a way of predicting observations, or a theory of measurement.
• 7 Observers separate the universal wavefunction into orthogonal sets of experiences.
• 9 In the TI the collapse of the state vector is interpreted as the completion of the transaction between emitter and absorber.
• 10 Comparing histories between systems in this interpretation has no well-defined meaning.
• 11 Any physical interaction is treated as a collapse event relative to the systems involved, not just macroscopic or conscious observers.
• 12 The state of the system is observer-dependent, i.e., the state is specific to the reference frame of the observer.
• 14 The transactional interpretation is explicitly non-local.
• 15 The assumption of intrinsic periodicity is an element of non-locality consistent with relativity, as the periodicity varies in a causal way.
• 16 In the stochastic interpretation it is not possible to define velocities for particles, i.e. the paths are not smooth. Moreover, to know the motion of the particles at any moment, you have to know what the Markov process is. However, once we know the exact initial conditions and the Markov process, the theory is in fact a realistic interpretation of quantum mechanics.
• 17 The kind of non-locality required by the theory, sufficient to violate the Bell inequalities, is weaker than that assumed in EPR. In particular, this kind of non-locality is compatible with the no-signaling theorem and Lorentz invariance.
1. ^ Vaidman, L. (2002, March 24). Many-Worlds Interpretation of Quantum Mechanics. Retrieved March 19, 2010, from the Stanford Encyclopedia of Philosophy. 2. ^ Frank J. Tipler (1994).
The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead. Anchor Books. ISBN 978-0-385-46799-5. A controversial poll found that of 72 "leading cosmologists and other quantum field theorists", 58%, including Stephen Hawking, Murray Gell-Mann, and Richard Feynman, supported a many-worlds interpretation ["Who believes in many-worlds?", accessed online: 24 Jan 2011]. 3. ^ Quantum theory as a universal physical theory, by David Deutsch, International Journal of Theoretical Physics, Vol. 24 #1 (1985). 4. ^ Three connections between Everett's interpretation and experiment, in Quantum Concepts of Space and Time, by David Deutsch, Oxford University Press (1986). 6. ^ For a discussion of the provenance of the phrase "shut up and calculate", see Mermin, N. David (2004). "Could Feynman have said this?". Physics Today 57 (5): 10. doi:10.1063/1.1768652. 8. ^ Guido Bacciagaluppi, "The role of decoherence in quantum mechanics", The Stanford Encyclopedia of Philosophy (Winter 2012), Edward N. Zalta, ed. 9. ^ La nouvelle cuisine, by John S. Bell, last article of Speakable and Unspeakable in Quantum Mechanics, second edition. 10. ^ Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can quantum-mechanical description of physical reality be considered complete?". Phys. Rev. 47: 777–780. doi:10.1103/physrev.47.777. 11. ^ Heisenberg, Werner. Physics and Philosophy. 12. ^ "An experiment illustrating the ensemble interpretation". Retrieved 2011-01-24. 13. ^ Shaiia, Raed M. (9 February 2015). "On the Measurement Problem". doi:10.5923/j.ijtmp.20140405.04. 14. ^ 15. ^ Maudlin, T. (1995). "Why Bohm's Theory Solves the Measurement Problem". Philosophy of Science 62: 479–483. doi:10.1086/289879. 16. ^ Durr, D.; Zanghi, N.; Goldstein, S. (Nov 14, 1995). "Bohmian Mechanics as the Foundation of Quantum Mechanics". arXiv:quant-ph/9511016. Also published in J.T. Cushing; Arthur Fine; S. Goldstein (17 April 2013). Bohmian Mechanics and Quantum Theory: An Appraisal. Springer Science & Business Media. pp. 21–43. ISBN 978-94-015-8715-0. 17. ^ "Relational Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Retrieved 2011-01-24. 18. ^ For more information, see Carlo Rovelli (1996). "Relational Quantum Mechanics". International Journal of Theoretical Physics 35 (8): 1637–1678. arXiv:quant-ph/9609002. Bibcode:1996IJTP...35.1637R. doi:10.1007/BF02302261. 19. ^ David Bohm, The Special Theory of Relativity, Benjamin, New York, 1965. 20. ^ See the relational approach to wave-particle duality. For a full account see Zheng, Qianbing; Kobayashi, Takayoshi (1996). "Quantum Optics as a Relativistic Theory of Light" (PDF). Physics Essays 9 (3): 447. doi:10.4006/1.3029255. Also, see Annual Report, Department of Physics, School of Science, University of Tokyo (1992) 240. 21. ^ "Quantum Nonlocality – Cramer". Retrieved 2011-01-24. 22. ^ Nelson, E. (1966). "Derivation of the Schrödinger Equation from Newtonian Mechanics". Phys. Rev. 150: 1079–1085. doi:10.1103/physrev.150.1079. 23. ^ Pavon, M. (2000). "Stochastic mechanics and the Feynman integral". J. Math. Phys. 41: 6060–6078. doi:10.1063/1.1286880. 24. ^ Roumen Tsekov (2012). "Bohmian Mechanics versus Madelung Quantum Hydrodynamics". Ann. Univ. Sofia, Fac. Phys. SE: 112–119. arXiv:0904.0723. Bibcode:2012AUSFP..SE..112T. 25. ^ "Frigg, R. GRW theory" (PDF). Retrieved 2011-01-24. 26. ^ "Review of Penrose's Shadows of the Mind". 1999. Archived from the original on 2001-02-09.
Retrieved 2011-01-24. 27. ^ von Neumann, John (1932/1955). Mathematical Foundations of Quantum Mechanics. Princeton: Princeton University Press. Translated by Robert T. Beyer. 28. ^ Michael Esfeld (1999), "Essay Review: Wigner's View of Physical Reality", Studies in History and Philosophy of Modern Physics 30B: 145–154, Elsevier Science Ltd. 29. ^ Zvi Schreiber (1995). "The Nine Lives of Schrödinger's Cat". arXiv:quant-ph/9501014. 30. ^ Dick J. Bierman and Stephen Whitmarsh (2006). Consciousness and Quantum Physics: Empirical Research on the Subjective Reduction of the State Vector. In Jack A. Tuszynski (ed.), The Emerging Physics of Consciousness, pp. 27–48. 31. ^ Nunn, C. M. H.; et al. (1994). "Collapse of a Quantum Field may Affect Brain Function". Journal of Consciousness Studies 1 (1): 127–139. 32. ^ "The anthropic universe". 2006-02-18. Retrieved 2011-01-24. 33. ^ "In the beginning was the bit". New Scientist. 2001-02-17. Retrieved 2013-01-25. 36. ^ Information, Immaterialism, Instrumentalism: Old and New in Quantum Information. Christopher G. Timpson. 37. ^ Timpson, op. cit.: "Let us call the thought that information might be the basic category from which all else flows informational immaterialism." 38. ^ "Physics concerns what we can say about nature." (Niels Bohr, quoted in Petersen, A. (1963). The philosophy of Niels Bohr. Bulletin of the Atomic Scientists 19(7): 8–14.) 39. ^ Hartle, J. B. (1968). "Quantum mechanics of individual systems". Am. J. Phys. 36 (8): 704–712. doi:10.1119/1.1975096. 40. ^ "Modal Interpretations of Quantum Mechanics". Stanford Encyclopedia of Philosophy. Retrieved 2011-01-24. 41. ^ Watanabe, Satosi (1955). "Symmetry of physical laws. Part III. Prediction and retrodiction". Reviews of Modern Physics 27 (2): 179–186. doi:10.1103/revmodphys.27.179. 42. ^ Aharonov, Y.; et al. (1964). "Time Symmetry in the Quantum Process of Measurement". Phys. Rev. 134: B1410–B1416. doi:10.1103/physrev.134.b1410. 43. ^ Aharonov, Y. and Vaidman, L. "On the Two-State Vector Reformulation of Quantum Mechanics." Physica Scripta T76: 85–92 (1998). 44. ^ Wharton, K. B. (2007). "Time-Symmetric Quantum Mechanics". Foundations of Physics 37 (1): 159–168. doi:10.1007/s10701-006-9089-1. 45. ^ Wharton, K. B. (2010). "A Novel Interpretation of the Klein–Gordon Equation". Foundations of Physics 40 (3): 313–332. doi:10.1007/s10701-009-9398-2. 46. ^ Heaney, M. B. (2013). "A Symmetrical Interpretation of the Klein–Gordon Equation". Foundations of Physics 43: 733–746. doi:10.1007/s10701-013-9713-9. 47. ^ Yakir Aharonov, Lev Vaidman: The Two-State Vector Formalism of Quantum Mechanics: an Updated Review. In: Juan Gonzalo Muga, Rafael Sala Mayato, Íñigo Egusquiza (eds.): Time in Quantum Mechanics, Volume 1, Lecture Notes in Physics 734, pp. 399–447, 2nd ed., Springer, 2008, ISBN 978-3-540-73472-7, DOI 10.1007/978-3-540-73473-4_13, arXiv:quant-ph/0105101v2 (submitted 21 May 2001, version of 10 June 2007), p. 443. 48. ^ a b Sharlow, Mark; "What Branching Spacetime might do for Physics", p. 2. • Bub, J.; Clifton, R. (1996). "A uniqueness theorem for interpretations of quantum mechanics". Studies in History and Philosophy of Modern Physics 27B: 181–219. • Rudolf Carnap, 1939, "The interpretation of physics", in Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science. University of Chicago Press.
• Dickson, M., 1994, "Wavefunction tails in the modal interpretation" in Hull, D., Forbes, M., and Burian, R., eds., Proceedings of the PSA 1994, vol. 1: 366–76. East Lansing, Michigan: Philosophy of Science Association. • --------, and Clifton, R., 1998, "Lorentz-invariance in modal interpretations" in Dieks, D. and Vermaas, P., eds., The Modal Interpretation of Quantum Mechanics. Dordrecht: Kluwer Academic Publishers: 9–48. • Fuchs, Christopher, 2002, "Quantum Mechanics as Quantum Information (and only a little more)." arXiv:quant-ph/0205039. • -------- and A. Peres, 2000, "Quantum theory needs no 'interpretation'", Physics Today. • Herbert, N., 1985. Quantum Reality: Beyond the New Physics. New York: Doubleday. ISBN 0-385-23569-0. • Hey, Anthony, and Walters, P., 2003. The New Quantum Universe, 2nd ed. Cambridge Univ. Press. ISBN 0-521-56457-3. • Jackiw, Roman; Kleppner, D. (2000). "One Hundred Years of Quantum Physics". Science 289 (5481): 893. • Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw-Hill. • --------, 1974. The Philosophy of Quantum Mechanics. Wiley & Sons. • Al-Khalili, 2003. Quantum: A Guide for the Perplexed. London: Weidenfeld & Nicholson. • de Muynck, W. M., 2002. Foundations of Quantum Mechanics, an Empiricist Approach. Dordrecht: Kluwer Academic Publishers. ISBN 1-4020-0932-1.[1] • Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton Univ. Press. • Karl Popper, 1963. Conjectures and Refutations. London: Routledge and Kegan Paul. The chapter "Three Views Concerning Human Knowledge" addresses, among other things, instrumentalism in the physical sciences. • Hans Reichenbach, 1944. Philosophic Foundations of Quantum Mechanics. Univ. of California Press. • Tegmark, Max; Wheeler, J. A. (2001). "100 Years of Quantum Mysteries". Scientific American 284: 68–75. doi:10.1038/scientificamerican0201-68. • Bas van Fraassen, 1972, "A formal approach to the philosophy of science", in R. Colodny, ed., Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain. Univ. of Pittsburgh Press: 303–66. • John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton: Princeton University Press, ISBN 0-691-08316-9, LoC QC174.125.Q38 1983. 1. ^ de Muynck, Willem M (2002). Foundations of quantum mechanics: an empiricist approach. Kluwer Academic Publishers. ISBN 1-4020-0932-1. Retrieved 2011-01-24.
I don't understand how quantum mechanics (and therefore also quantum computers) can work, given that while we work with quantum states, the particles that a quantum state consists of cannot be observed, which is the most fundamental requirement. If I am not mistaken, by "observed" we mean interaction with any other particle (photon, gluon, electron or whatever else). So my very important questions:
1. Aren't the particles this quantum state consists of interacting with each other? Why doesn't that cause the state to collapse?
2. Aren't all particles in the universe interacting with the Higgs field and gravitons etc.? Why doesn't that cause every quantum state to collapse?
I feel there is something very fundamental in quantum mechanics that I am not aware of, hence I would be very pleased to have these questions answered.

The state is usually taken to be the state of the whole system, i.e., inclusive of interactions. The wavefunction collapses if someone "outside" the system performs an observation. – Sanath K. Devalapurkar Jan 14 '14 at 23:56
Related: "Quantum Zeno Effect" – dmckee Jan 15 '14 at 0:31
"If I am not mistaken, by "observed" we mean interaction with any other particle" – no, this is wrong. – Anixx Jan 15 '14 at 1:17
@SanathDevalapurkar What if we consider the whole universe as a quantum system? Or at least consider a quantum system that includes the observer. – Cameron Martin May 20 '14 at 0:57
@CameronMartin The universe is non-quantum, as is obvious. If the quantum system includes the observer, then I'm not sure. This is a philosophical question. I'd like it if you could email me, where we could continue this discussion (this post is over 4 months old; it's not right to bring it to the front page). For my email, see my profile page. – Sanath K. Devalapurkar May 20 '14 at 1:00

Accepted answer:

We have a mathematical model for the observations we can make of any system in the micro world. This model is quantum mechanics, and its predictions have been verified experimentally over and over again. Observables are quantities we can measure about the particles and fields in the micro world. A main postulate is that to every observable there corresponds a quantum mechanical operator. These operators enter the quantum mechanical equations whose solutions, given the boundary conditions, describe a system in the micro world. It is true that a quantum system is continually interacting within itself as described by the quantum model, and there can be continual interactions with the boundaries, but interaction is not a synonym for measurement. The continuous interactions are off mass shell, virtual, and within the bounds of the quantum mechanical solutions of specific energy levels and allowed states and conservation of quantum numbers. They are not measurements. Collapse is fancy terminology for measurement. Nobody is measuring the Higgs field's continuous virtual exchanges that give mass to the elementary particles, nor the gravitons either. In fact, gravitons are hypothetical particles because we have never measured one, in the way we have measured photons. Also, nobody is measuring the virtual photons that keep the electrons in their energy levels around the nucleus. The basic misconception is identifying "interaction" with measurement. A measurement necessarily involves an interaction, but an interaction is not necessarily a measurement.
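The distinction this answer draws, interaction is not the same as measurement, can be illustrated with a toy decoherence calculation. The sketch below (Python with NumPy; the two-qubit setup and the CNOT coupling are illustrative choices, not anything from the thread) entangles a system qubit with an "environment" qubit by a purely unitary interaction and shows that the system's reduced density matrix loses its off-diagonal coherence with no collapse postulate invoked:

```python
import numpy as np

# System qubit in superposition (|0> + |1>)/sqrt(2); environment in |0>.
system = np.array([1.0, 1.0]) / np.sqrt(2)
env = np.array([1.0, 0.0])
state = np.kron(system, env)          # joint state in C^4

# A unitary interaction: CNOT with the system as control.
# This entangles the two qubits; nothing here is a "measurement".
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ state

# Reduced density matrix of the system: trace out the environment.
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_system = np.trace(rho, axis1=1, axis2=3)
print(rho_system)
# [[0.5 0. ]
#  [0.  0.5]]  -- coherences gone, though the evolution was unitary.
```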
Here's a nice short story which illustrates the philosophical problem nicely. Why our subjective experience of measurements is the way it is is a big mystery. – spraff Nov 4 '15 at 9:57

Your question contains a false statement: you are mistaken. In different interpretations of quantum mechanics the definition of "measurement" is different. But I think it would be enough if I give just five, from which you can choose yourself.
• In the Copenhagen/von Neumann interpretations the collapse of the wave function is triggered by the observer. This person has a special property that no other object in the universe possesses. In the Copenhagen interpretation the collapse can be triggered by any system which is connected to the observer, including the measurement apparatus and the external medium (if the observer is not isolated from it). All things can be arbitrarily divided into the observed system and the measuring system by the so-called "Heisenberg cut", with the only requirement that the measuring system include the observer.
• The von Neumann interpretation is the edge case of the Copenhagen interpretation where the Heisenberg cut is placed as close to the observer as possible. As such, even parts of the observer's brain may still be considered part of the observed system. In the von Neumann interpretation the collapse of the wave function happens when the observer feels any qualia (feeling) dependent on the measured value.
• In the Bohm interpretation the collapse of the wave function happens when the observer introduces into the measured system some perturbation, which is inevitable when performing the measurement. The difference between the measurement and any other interaction is that the perturbation introduced by measurement is unknown beforehand. This is because the initial conditions of a system containing the observer are unknown. In other words, the observer always contains information which is unknown and cannot be determined by any means due to the self-reference problem. Thomas Breuer called this phenomenon "subjective decoherence". Some philosophers hold that this unpredictability of a system containing the observer, for the observer himself, is what defines free will.
• In the relational interpretation the collapse happens when the interaction affects the ultimate measurement performed by the ultimate observer on the universal wave function in the infinite future. As such, for the collapse to happen, the result of the interaction should somehow affect the external medium, the stars, etc., either now or in the future, rather than being recohered and lost.
• In the many-worlds interpretation the wavefunction collapse never happens. Instead, what the observer perceives as the collapse is just the event of entanglement of the observer with the observed system.

Let's first of all clear some things up about the fundamental postulates of quantum mechanics. One of the postulates is that all measurable quantities in a quantum system are represented mathematically by so-called observables. An observable is thus a mathematical object, more specifically a linear operator with real eigenvalues whose eigenstates form a complete set. This essentially means that any quantum state can be expressed as a linear combination of the eigenstates of the observable. A simple example of an observable is the spin operator. If we apply the postulate to this case it simply means that any spin state can be expressed as a combination of the eigenstates of the spin operator.
If we are talking about the spin of an electron, for example, the eigenstates are 'spin up' and 'spin down' (naively one could think of an electron spinning counterclockwise or clockwise, respectively). So any spin state can be seen as a linear combination of these spin up and spin down states. Now, when we do a measurement of the spin of a particular electron, we find out what the spin of the electron is at that moment. Another postulate states that the only possible outcome of such a measurement is an eigenstate. So the only possible results of measuring the spin of an electron are either spin up or spin down. After this measurement we thus know that the electron has one of these spins; its previous spin state has 'collapsed' onto one of these states. Now there are other postulates which explicitly tell us exactly how the state of a quantum system evolves with time. So if we wait a while after we measured the spin state of the electron, its spin state might have changed if, for example, it interacts with some other particle. Using the laws of quantum mechanics, we can thus calculate the probabilities of measuring spin up or spin down at a later time. So quantum mechanics really does not state anything about quantum states being constantly observed, or about observation apart from measurement at all for that matter. It is only concerned with measurements of states and the evolution of states over time. So to explain your particular question in terms of quantum mechanics, say that we have a complex quantum system consisting of many parts (particles, fields, etc.). We can measure some properties of this system at the outset, providing us with a specific initial state of the system. These different parts of the system then might go on to interact with each other and evolve by the laws of quantum mechanics into some new state (i.e. by the Schrödinger equation or Dirac equation or by the equations of some quantum field theory etc.). After this, we can do new measurements, and we can in principle calculate, exactly, the probabilities of the different possible outcomes of each of these measurements. When we do these new measurements, the probabilities stop being probabilities, however, and we get a new definite state; the previous 'probabilistic state' has 'collapsed' (the probabilistic state being a linear combination of eigenstates, and the collapsed state a specific eigenstate). So I might not have answered your two specific questions, but hopefully I cleared some things up about quantum mechanics so that you can now see the inherent flaw in those questions.

I just want to add something to the correct @annav answer, with a practical example in basic Quantum Field Theory. Imagine a particle process with $2$ initial particles and $2$ final particles. You have some initial state (say at $t=-\infty$), which is $|i\rangle =|1\rangle |2\rangle$, where $|1\rangle$ and $|2\rangle$ are the states (at $t=-\infty$) of the initial particles. This initial state $|i\rangle$ has a unitary evolution.
Practically, the non-trivial part of this evolution is due to the exchange of "virtual particles" (for instance, you may imagine two initial electrons exchanging a "virtual photon", or an initial left-handed electron and an initial right-handed electron exchanging a "virtual Higgs"). Now, the initial state $|i\rangle$ is evolving, so at $t = +\infty$ the final state could be written $|f\rangle = \sum\limits_{k,l} A_{1,2;k,l}|k\rangle |l\rangle$, where $|k\rangle$ and $|l\rangle$ represent some possible states for the final particles. Until now, you see that there is a (unitary) evolution due to the interaction, but there is no "collapse". $A_{1,2;k,l}$, in the above expression, is simply the probability amplitude to find the final particles in a state $|k\rangle |l\rangle$, supposing the initial particles in a state $|1\rangle |2\rangle$. However, if you make a measurement (at $t=+\infty$), you will have a "collapse", and you will find a final state $|k\rangle |l\rangle$ with the probability $|A_{1,2;k,l}|^2$. Another interesting point is that, considering here simple Quantum Mechanics, interactions between a particle and a measurement apparatus may appear by entanglement. We may consider the example of the 2-slit experiment with photons. Without any measurement apparatus, the total state is $|\psi\rangle = |\psi_L\rangle + |\psi_R \rangle$, where $L$ and $R$ represent the two slits. If you bring in a measurement apparatus potentially able to detect which slit has been used by the photon, but without explicitly doing the measurement, the new state is $|\psi'\rangle = |\psi_L\rangle |M_L \rangle + |\psi_R \rangle |M_R \rangle$, where $|M_R\rangle$ and $|M_L\rangle$ are states of the measurement apparatus which are quasi-orthogonal ($\langle M_R|M_L\rangle \approx 0$). This is a pre-measurement state; we see that there is an entanglement between the states of the particle and the states of the measurement apparatus. Because the states of the apparatus are orthogonal, this destroys the interference pattern. Now, you may really perform a measurement; in this case, you explicitly detect which slit has been used by the photon. After this, the final state would be $|\psi''\rangle = |\psi_L\rangle |M_L \rangle$, if the $L$ slit path is detected. More correct models would involve in fact entangled (pre-measurement) states between the particle, the measurement apparatus and the environment: $ \sum\limits_i |\psi_i\rangle |M_i \rangle |E_i \rangle$.
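As a numerical companion to this answer, the sketch below (Python with NumPy; the slit amplitudes and screen positions are made up for illustration) builds the detection probability for the pre-measurement state $|\psi_L\rangle|M_L\rangle + |\psi_R\rangle|M_R\rangle$ and confirms that orthogonal apparatus states wash out the interference term:

```python
import numpy as np

# Far-field amplitudes from two slits, modeled as plane waves with a
# relative phase that varies across the screen coordinate x.
x = np.linspace(-10, 10, 5)          # a few screen positions
psi_L = np.exp(1j * x) / np.sqrt(2)
psi_R = np.exp(-1j * x) / np.sqrt(2)

def intensity(overlap):
    """Detection probability density when the apparatus states
    M_L, M_R have inner product <M_L|M_R> = overlap."""
    return (np.abs(psi_L)**2 + np.abs(psi_R)**2
            + 2 * np.real(psi_L.conj() * psi_R * overlap))

print(intensity(1.0))   # fringes: 1 + cos(2x), interference survives
print(intensity(0.0))   # flat: orthogonal apparatus states kill fringes
```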
Q/A: What is your favorite number?

What is your favorite number? The short answer is: $3435$. You may ask: "why?", since this number seems, a priori, pretty boring. But it turns out that it is very interesting: it is the only natural number (along with $1$) which has the following property: $$\begin{align}\color{blue}3\color{red}4\color{green}35&=\color{blue}3^{\color{blue}3}+\color{red}4^{\color{red}4}+\color{green}3^{\color{green}3}+5^5\\&= \color{blue}{27}+\color{red}{256}+\color{green}{27}+3125\\ &={3435}.\end{align}$$ I also like Mills' constant, which is the smallest number $\rm A$ such that $\lfloor \rm A^{3^n} \rfloor$ is a prime number for every $n\in\mathbb N$. Its value is approximately equal to $1.306$, and the primes generated by Mills' constant are known as Mills primes; if the Riemann hypothesis is true, the sequence begins: $$2, 11, 1361, 2521008887, \ldots $$ (sequence A051254 in OEIS). And what about you? What is your favorite number, and why?

Q/A: What is the math behind the statistical interpretation of Quantum Mechanics?

Srinivisa: What is the math behind the statistical interpretation of Quantum Mechanics?

Physics is all about motion: start with a system $\rm S$ composed of a particle of mass $m$ moving along an axis, let it be subject to a known force, put some physical laws out there, mix well and BAM: $x(t)$ will determine for you the position of the particle at any time! Once you know that, you can find its velocity, its momentum... and a lot of other useful stuff. Of course, this is just under the framework of classical mechanics. Quantum Mechanics is very different, since we can't have any function that determines with absolute certainty the position of a particle at a given time, and thus we can't determine the velocity of the particle with certainty either. In fact, by Heisenberg's uncertainty principle, the more you know about a particle's position the less you know about its velocity, and vice versa. So instead of the well-defined $x(t)$, Quantum Mechanics uses what we call a wavefunction. As the double-slit experiment shows, light can behave like a particle or like a wave. In fact, even electrons display this same behavior in that famous experiment, which turns out to be very useful, since it will help solve the following problem. If we think of electrons as individual particles orbiting the nucleus of an atom, like the planets in our solar system orbiting our sun, then we run into a serious problem. Indeed, a very serious one. Since the negatively charged electron is attracted by the positively charged nucleus by the electromagnetic force, the electron will be continuously accelerating, and would thus radiate away its energy and fall into the nucleus. That's why quantum mechanics came along and proposed that we should think of the electron not just as a particle, but also as a wave. And this wave is basically described by the wavefunction, just like a 'classical' particle is described by the $x(t)$ function.
We get the wavefunction of a particle (denoted as $\psi(x,t)$ or as $\Psi(x,t)$) by solving Schrödinger's equation: $$\underset{\textit{The time dependent Schrödinger equation.}}{\boxed{\displaystyle\,\, i\hbar\dfrac{\partial\Psi}{\partial t}=-\dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi}{\partial x^2}+\mathrm{V}\Psi.\,\,}}$$ For a free particle, the wavefunction can be expressed as: $$\Psi(x,t)=\mathrm{A}e^{\displaystyle i(kx-\omega t)}$$ where $\mathrm{A}$ is the amplitude of the wave, $e$ is a constant which is approximately equal to $2.71$, $i$ is the square root of minus $1$, $k$ is the momentum divided by $\hbar$ (which is the same as $h/2\pi$, where $h$ is Planck's constant), and finally $\omega$ denotes the frequency of the wave times $2\pi$. But isn't a particle, by its nature, located at a single and unique point, whereas a wave is spread out in space? How can we make sense of such an object? It's in an attempt to answer those questions that the Born interpretation (or the statistical interpretation) of the wave function was born. The latter proposes that $|\Psi(x,t)|^2$ tells us the probability of finding the particle at point $x$, at time $t$. More precisely: $$\int_a^b |\Psi(x,t)|^2\,\mathrm dx=\left\{ \text{probability of finding the particle} \\ \text{between $a$ and $b$, at time $t$}\right\}.$$ Note: The wavefunction itself is complex, but $|\Psi|^2$ is real and nonnegative. Here is a plot of wavefunctions of identical particles, have fun! Wave Functions of Identical Particles from the Wolfram Demonstrations Project by Michael Trott - if it doesn't work, then install Wolfram CDF Player, it's free! And as always, thanks for reading! Best wishes, $\mathcal H$akim.

Do Photons have Mass?

Do photons have mass? As an admin of the Quantum Physics facebook group, I can assure you with great certainty that this is the most popular question we get. In this article, we shall investigate this question to provide a reasonable answer. One of the ways some people attack this question is by arguing that since gravitational attraction is caused by mass, as Newton's equation shows: $$\mathbf{F}=G\dfrac{m_1\cdot m_2}{r^2}$$ and since light is bent by gravity, then light must have mass. But that's actually false: Newton's equation is only an approximation, and so: $$\mathbf{F}\approx G\dfrac{m_1\cdot m_2}{r^2}.$$ Furthermore, the cause of gravity is not mass but energy and momentum. And so this argument fails. From special relativity: $$E^2=(mc^2)^2+(pc)^2$$ which can be written as: $$E^2-(pc)^2=(mc^2)^2\tag{$\star$}$$ where $E$ denotes energy, $m$ rest mass, $p$ is the momentum and $c$ is the speed of light in vacuo, which is approximately $3\cdot10^8\,\rm m/s$. And we know that the momentum of a photon is given by $\frac{hf}{c}$ and its energy by $E=hf$, where $h$ is Planck's constant (approx. $6.62 \times 10^{-34}\,\rm m^2\,kg/s$) and $f$ is its frequency. Therefore, the following holds: $$E=hf,\quad p=\frac{hf}c\implies E=pc.$$ Thus, $E^2=(pc)^2\Rightarrow E^2-(pc)^2=0$. We put this result in $(\star)$ to get: $$0=E^2-(pc)^2=(mc^2)^2.$$ Solving for $m$ gives $m=0$. So photons are massless. $\square$ to be continued...

Proof that a sum of consecutive odd integers gives rise to a perfect square

This is purely a test. A matrix: $$\begin{pmatrix}  1 & 1 & 0 & -2 & 6 & 9 \\  1 & 3 & 0 & -4 & 6 & 7 \\  1 & 1 & 0 & -6 & 6 & 9 \\  4 & 1 & 5 & -4 & 9 & 4 \end{pmatrix}$$ Some NT: \[\lim\limits_{x\to\infty}\dfrac{\pi(x)}{x/\ln(x)}=1\qquad\text{and Ramanujan's famous:}\qquad\frac{2\sqrt{2}}{9801} \sum_{k=0}^\infty \frac{ (4k)!\,(1103+26390k) }{ (k!)^4\, 396^{4k} } = \frac1{\pi}.\] Does mathcal work? $$\mathcal L=-\dfrac14\ldots$$
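Returning to the photon-mass derivation in the post above, the relation $E^2-(pc)^2=(mc^2)^2$ can also be checked numerically. A minimal sketch (Python; the chosen optical frequency is an arbitrary illustration) computes a photon's energy and momentum from $E=hf$ and $p=hf/c$ and confirms that the invariant mass comes out as zero up to rounding:

```python
# Check E^2 - (pc)^2 = (m c^2)^2 for a photon, where E = h f and p = h f / c.
h = 6.62607015e-34   # Planck's constant, J s
c = 2.99792458e8     # speed of light, m / s
f = 5.0e14           # an arbitrary optical frequency, Hz

E = h * f            # photon energy
p = h * f / c        # photon momentum

invariant = E**2 - (p * c)**2    # equals (m c^2)^2
print(invariant)                 # ~0 (up to floating-point rounding)
print(invariant / E**2)          # relative size: ~0, hence m = 0
```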
Car batteries powered by relativity

French physicist Gaston Planté invented the lead-acid battery in 1859 – almost 50 years before Einstein developed his theories of relativity. Now scientists have found that the lead-acid battery, which is commonly used in cars, strongly relies on the effects of relativity. Specifically, the scientists calculated that 1.7-1.8 volts of the lead-acid battery's 2.1 volts (or about 80-85%) arise from relativistic effects. "This is a new, well-documented case of 'everyday relativity,'" Pyykkö said. As the scientists noted in their study, the finding essentially means that "cars start due to relativity." The lead-acid battery is the oldest type of rechargeable battery, with the main component being lead. With an atomic number of 82, lead is a heavy element. In general, relativistic effects emerge when fast electrons move near a heavy nucleus, such as that of lead. These relativistic effects include anything that depends on the speed of light (or, from a mathematical perspective, anything that requires the Dirac equation rather than the nonrelativistic Schrödinger equation). The lead-acid battery contains a positive electrode made of lead dioxide, a negative electrode made of metallic lead, and an electrolyte made of sulfuric acid. Through their calculations, the scientists found that the battery's relativistic effects arise mainly from the lead dioxide in the positive electrode, and partly from the lead sulfate created during chemical reactions. The discovery of relativistic effects in the lead-acid battery also sheds some light on why no corresponding "tin battery" exists. In the periodic table, tin is located directly above lead and has an atomic number of 50, making it lighter than lead. According to the scientists' calculations, a tin battery would basically be a lead battery with very minimal relativistic effects. Although tin and lead have similar nonrelativistic energy values, tin's small relativistic effects prohibit it from being used in an efficient battery. Read the whole article: Car batteries powered by relativity.
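The quoted percentages follow directly from the cited voltages; a quick arithmetic check (Python):

```python
# Fraction of the lead-acid cell's 2.1 V attributed to relativistic effects.
for relativistic_volts in (1.7, 1.8):
    print(f"{relativistic_volts} V / 2.1 V = {relativistic_volts / 2.1:.0%}")
# 1.7 V / 2.1 V = 81%
# 1.8 V / 2.1 V = 86%
```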
Lane P. Hughston

The manifold of pure quantum states can be regarded as a complex projective space endowed with the unitary-invariant Riemannian geometry of Fubini and Study. According to the principles of geometric quantum mechanics, the detailed physical characteristics of a given quantum system can be represented by specific geometrical features that are selected and ...

A new approach to credit risk modelling is introduced that avoids the use of inaccessible stopping times. Default events are associated directly with the failure of obligors to make contractually agreed payments. Noisy information about impending cash flows is available to market participants. In this framework the market filtration is modelled explicitly, ...

We develop a non-life reserving model using a stable-1/2 random bridge to simulate the accumulation of paid claims, allowing for an arbitrary choice of a priori distribution for the ultimate loss. Taking a Bayesian approach to the reserving problem, we derive the process of the conditional distribution of the ultimate loss. The 'best-estimate ultimate loss' ...

This paper presents an overview of information-based asset pricing. In the information-based approach, an asset is defined by its cash-flow structure. The market is assumed to have access to "partial" information about future cash flows. Each cash flow is determined by a collection of independent market factors called X-factors. The market filtration is ...

A closed-form solution to the energy-based stochastic Schrödinger equation with a time-dependent coupling is obtained. The solution is algebraic in character, and is expressed directly in terms of independent random data. The data consist of (i) a random variable $H$ which has the distribution $P(H = E_i) = \pi_i$, where $\pi_i$ is the transition probability $\vert\langle\psi_0\vert\phi_i\rangle\vert^2$ ...

A generalised equivalence principle is put forward according to which space-time symmetries and internal quantum symmetries are indistinguishable before symmetry breaking. Based on this principle, a higher-dimensional extension of Minkowski space is proposed and its properties examined. In this scheme the structure of space-time is intrinsically quantum ...
söndag 28 december 2014

The Radiating Atom 9: Hydrogen and Beyond

A plane electrical field $E_z$ acting in the $z$-direction and progressing in the $x$-direction will interact with the $2p_z$ eigenstate of a Hydrogen atom pictured above, corresponding to a charge oscillating in the $z$-direction in parallel with $E_z$. Note that $E_z$ will not interact with the $2p_x$ and $2p_y$ eigenstates.

As a sum-up of the present series of posts on the radiating atom, we consider Schrödinger's equation for a radiating Hydrogen atom subject to forcing in the form of a second order wave equation
• $\ddot\psi + H^2\psi - \gamma\dddot\psi = f$,      (1)
where $\psi (x,t)$ is a real-valued electronic wave function of a space coordinate $x=(x_1,x_2,x_3)$ and time $t$, $H$ is the Hamiltonian defined by
• $H = -\frac{h^2}{2m}\Delta + V(x)$,
where $\Delta$ is the Laplacian with respect to $x$, $V(x)=-\frac{1}{\vert x\vert}$ is the kernel potential, $m$ the electron mass, $h$ Planck's constant, the dot signifies differentiation with respect to time $t$, $f$ is external forcing, and $\gamma =\gamma (\psi )$ is a non-negative radiation damping coefficient. The formulation of Schrödinger's equation as a second order wave equation in terms of a real-valued wave function was considered by Schrödinger in 1926 as an alternative to the standard formulation as a 1st order complex-valued equation. In the homogeneous case with $f=0$ and $\gamma =0$, the two formulations are equivalent. In particular, conservation of total charge,
• $\frac{d}{dt}\int\rho (x,t)\, dx = 0$,
with $\rho =\psi^2+(H^{-1}\dot\psi )^2$ the charge intensity, is obtained by multiplying (1) with $H^{-2}\dot\psi$ and integrating in space. In the non-homogeneous case (1) may be more natural as an expression of a force balance with $-\gamma\dddot\psi$ the Abraham-Lorentz radiation recoil force and $f$ an electrical field component, while the physical meaning of the standard formulation baffled the creators of modern physics and their followers and led to unphysical interpretations such as particle statistics.

We consider radiation of frequency $\nu =(E_2-E_1)/h$, where $E_1$ is the energy of the ground state as an eigenfunction $\Psi_1 (x)$ of $H$ with minimal eigenvalue $E_1$, and $E_2$ is a larger eigenvalue with eigenfunction $\Psi_2(x)$. We reformulate (1) in the form
• $\ddot\psi +H_1^2\psi -\gamma\dddot\psi = f$,      (2)
where $H_1 = H - E_1$, and note that $H_1\Psi_1=0$ and $H_1\Psi_2=(E_2-E_1)\Psi_2$. We assume that the forcing is given as a linear combination of plane electromagnetic waves $(0,0,\cos(\omega (x_1-ct)))$ of frequencies $\omega\approx\nu =(E_2-E_1)/h$ progressing in the $x_1$-direction with the speed of light $c$. We seek a solution $\psi (x,t)$ of (2) as a linear combination of $\Psi_1$ and $\Psi_2$ of the form
• $\psi (x,t) =c_1(t)\Psi_1(x) + c_2(t)\Psi_2(x)$
with time dependent coefficients $c_1(t)$ and $c_2(t)$. Inserting this Ansatz into (2), multiplying by $\Psi_1$ and $\Psi_2$ and integrating with respect to $x$, we obtain, assuming orthonormality of $\Psi_1$ and $\Psi_2$, time-periodicity and normalizing to $c=1$ and $h=1$:
• $\ddot c_1(t) -\gamma\dddot c_1(t) = f_1(t)\equiv\int f(x,t)\Psi_1(x)dx$ for all $t$,
• $\ddot c_2(t) +\nu^2c_2(t)-\gamma\dddot c_2(t) =f_2(t)\equiv\int f(x,t)\Psi_2(x)dx$ for all $t$.
By $x_1$-symmetry of $\Psi_1(x)$ it follows that $f_1(t)=0$, with the effect that $c_1(t)=c_1$ is constant.
Further, if $\Psi_2(x)$ is a $(2,1,0)$ p-state oriented in the $x_3$-direction, see the figure above, then $f_2(t)$ is a non-zero linear combination of terms $\cos(\omega t)$, and by the analysis of Mathematical Physics of Black Body Radiation and Computational Black Body Radiation,
• $\int\gamma\ddot\psi^2(x,t)\, dxdt = \int\gamma\ddot c_2^2(t)\, dt\approx \int f_2^2(t)\, dt$,  (3)
which expresses that output = input as a fundamental aspect of radiation in time-periodic equilibrium as a phenomenon of near-resonance under small damping. The setting can be generalized to other eigenstates. The essence is the output = input balance, which can express both excitation into eigenstates of larger energy and radiation from such states. The value of the radiation damping coefficient $\gamma (\psi )$ is set so that conservation of charge is maintained under forcing with the radiation balance (3). If $f=0$ and $\psi$ is a pure eigenstate, then $\gamma = 0$. A numerical sketch of the balance behind (3) is given below.

Notice that the above argument can be shifted by replacing the ground-state $\Psi_1(x)$ as a time-independent and non-radiating pure eigenstate by an eigenfunction $\Psi_j(x)$ of $H$ with larger energy, again viewed as a time-independent and non-radiating pure eigenstate. This reflects that the time-dependence of pure eigenstates is not observable and thus up to the imagination of an observer. This is not evident in the standard formulation of Schrödinger's equation.

We sum up the virtues of (1) as a semi-classical continuum wave model of a radiating atom subject to forcing, as compared to QED as a non-classical quantum particle model:
1. (1) lends itself to physical interpretation as force balance.
2. (1) lends itself to mathematical analysis.
3. The term $\ddot\psi$ connects to kinetic energy in classical mechanics and suggests that the common terminology of quantum mechanics of connecting $\Delta\psi$ to kinetic energy is not natural; a connection to a form of elastic energy may have better physical meaning.
4. (1) has a natural extension to a model for a many-electron atom as a system of one-electron equations, which is computable and thus potentially useful, in contrast to the standard multi-dimensional Schrödinger equation, which is uncomputable and thus potentially useless.
5. The incoming wave is represented as forcing independent of the wave function $\psi$, which facilitates mathematical analysis and understanding, and not as in QED through a time-dependent contribution to the Hamiltonian, which opens to troublesome self-interaction.
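Here is a minimal numerical sketch of the near-resonant $c_2$ equation (Python with NumPy; the frequency, damping coefficient and forcing amplitude are arbitrary choices, and the third-order radiation term $-\gamma\dddot c_2$ is approximated by the common small-damping substitution $-\gamma\dddot c_2 \approx \gamma\nu^2\dot c_2$ used for the Abraham-Lorentz force, which is an assumption of this sketch rather than the post's own analysis). It integrates the forced oscillator past its transient and compares time-averaged radiated output $\int\gamma\ddot c_2^2\,dt$ with the work input $\int f_2\dot c_2\,dt$, the steady-state energy balance underlying (3):

```python
import numpy as np

nu = 1.0              # resonance frequency (normalized units)
gamma = 0.05          # small radiation damping coefficient
A, omega = 1.0, 1.0   # forcing f2(t) = A*cos(omega*t), tuned to resonance

# c2'' + nu^2*c2 - gamma*c2''' = f2; for small gamma the radiation term
# is approximated as gamma*nu^2*c2' (Abraham-Lorentz style substitution).
dt, T = 1.0e-3, 600.0
c, v = 0.0, 0.0
work_in = radiated = 0.0
t = 0.0
while t < T:
    f2 = A * np.cos(omega * t)
    a = f2 - nu**2 * c - gamma * nu**2 * v   # acceleration c2''
    if t > T / 2:                            # skip the startup transient
        work_in += f2 * v * dt               # input power  f2 * c2'
        radiated += gamma * a * a * dt       # radiated power gamma*(c2'')^2
    v += a * dt                              # semi-implicit Euler step
    c += v * dt
    t += dt

print("input :", work_in)    # ~3000 in these units
print("output:", radiated)   # nearly the same: output = input
```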
Saturday, June 19, 2010 Quantum theorist explaining gravity is an ox(y)moron This post is a reaction to the ArXiv blog article New Quantum Theory Separates Gravitational and Inertial Mass. A typical problem with theoretical physicists is that they think that by presenting an alternative interpretation of a contradiction one can remove the contradiction. The equations of quantum theory cannot describe gravity at all - on the contrary: by the Schrödinger equation, all wave packets of free particles should expand into infinity, not collapse by gravity. We cannot switch 1 = 1 to 1 = -1 by any causal math, because quantum theory reverses the time arrow of general relativity. Observers inside and outside the gravitational field of massive bodies would perceive the same situation from the perspectives of general relativity and quantum mechanics. In AWT we can reconcile general relativity and quantum mechanics in two main ways: 1. By using a particle simulation of nested density fluctuations inside a very dense gas. At a certain level of condensation, the resulting solution inside dense particle clusters would become close to quantum mechanics, while the solution outside them would become close to general relativity. 2. We can solve a wave equation in a very high number of dimensions. The solution in an inner 3D slice can then be compared with quantum mechanics, while in an outer 3D slice it can be compared with general relativity. Actually, in AWT the equivalence principle is violated by electrostatic or dipole forces, the Casimir force etc., which act in extra dimensions, and there is nothing strange about it. All these forces depend on different quantities than just mass.
Electron configuration

[Figures: electron atomic and molecular orbitals; a Bohr diagram of lithium.]

In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals.[1] For example, the electron configuration of the neon atom is 1s2 2s2 2p6. Electronic configurations describe electrons as each moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions. According to the laws of quantum mechanics, for systems with only one electron, an energy is associated with each electron configuration and, upon certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon. Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. The concept is also useful for describing the chemical bonds that hold atoms together. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.

Shells and subshells

See also: Electron shell

        s (ℓ=0)    p (ℓ=1)
        m=0        m=0    m=±1
        s          pz     px, py

[Images of the orbitals for n = 1, 2, ... omitted.]

Electron configuration was first conceived of under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons. An electron shell is the set of allowed states, which share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n^2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons. The factor of two arises because the allowed states are doubled due to electron spin—each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually noted by an up-arrow) and one with a spin −1/2 (with a down-arrow). A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics,[2] in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.[3]
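These two capacity formulas are easy to check against each other; here is a minimal sketch (not part of the article):

# The subshell capacity 2(2l + 1), summed over the subshells l = 0..n-1
# of shell n, reproduces the shell capacity 2n^2.
for n in range(1, 5):
    subshells = [2 * (2 * l + 1) for l in range(n)]
    assert sum(subshells) == 2 * n**2
    print(f"n={n}: subshell capacities {subshells}, shell total {2 * n**2}")
# n=1: [2] -> 2;  n=2: [2, 6] -> 8;  n=3: [2, 6, 10] -> 18;  n=4: [2, 6, 10, 14] -> 32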
Notation

See also: Atomic orbital

Physicists and chemists use a standard notation to indicate the electron configurations of atoms and molecules. For atoms, the notation consists of a sequence of atomic orbital labels (e.g. for phosphorus the sequence 1s, 2s, 2p, 3s, 3p) with the number of electrons assigned to each orbital (or set of orbitals sharing the same label) placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-s-two, two-s-one"). The configuration of phosphorus (atomic number 15) is as follows: 1s2 2s2 2p6 3s2 3p3. For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used, since all but the last few subshells are identical to those of one or another of the noble gases. Phosphorus, for instance, differs from neon (1s2 2s2 2p6) only by the presence of a third shell. Thus, the electron configuration of neon is pulled out, and phosphorus is written as follows: [Ne] 3s2 3p3. This convention is useful as it is the electrons in the outermost shell which most determine the chemistry of the element. For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s2 3d2 or [Ar] 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies, which is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti4+, Ti3+, Ti2+, Ti+, Ti. The superscript 1 for a singly occupied orbital is not compulsory. It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, ℓ, of 0, 1, 2 or 3 respectively. After "f", the sequence continues alphabetically "g", "h", "i"... (ℓ = 4, 5, 6...), skipping "j", although orbitals of these types are rarely required.[4][5] The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below).

Energy — ground state and excited states

The energy associated with an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state. Any other configuration is an excited state. As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p orbital, to obtain the 1s2 2s2 2p6 3p1 configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp, for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm. Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light.
The excitation of core electrons is possible, but requires much higher energies, generally corresponding to x-ray photons. This would be the case, for example, to excite a 2p electron to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration. The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule.

History

Niels Bohr (1923) was the first to propose that the periodicity in the properties of the elements might be explained by the electronic structure of the atom.[6] His proposals were based on the then current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6). The following year, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6.[7] However, neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect). Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli realized that the Zeeman effect must be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925).[8] The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom:[2] this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936),[9] see below) for the order in which atomic orbitals are filled with electrons.

Atoms: Aufbau principle and Madelung rule

The Aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as:[10] a maximum of two electrons are put into orbitals in the order of increasing orbital energy; the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.

[Figure: the approximate order of filling of atomic orbitals, following the arrows from 1s to 7p. (After 7p the order includes orbitals outside the range of the diagram, starting with 8s.)]

The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936,[9] and later given a theoretical justification by V. M. Klechkowski:[11]

1. Orbitals are filled in the order of increasing n+ℓ;
2. Where two orbitals have the same value of n+ℓ, they are filled in order of increasing n.

This gives the following order for filling the orbitals: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, 9s). In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (Uuo, Z = 118). The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics and nuclear chemistry.
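The rule is short enough to turn directly into a program. The following minimal sketch (illustrative only, and deliberately ignoring the exceptions discussed further below, such as chromium and copper) generates the Madelung filling order and the resulting naïve configuration for a given atomic number:

# Madelung's rule: fill subshells by increasing n + l, and for equal
# n + l by increasing n. Exceptions (Cr, Cu, Nb, ...) are not handled.
L_LABELS = "spdfghik"   # azimuthal labels; "j" is skipped by convention

def madelung_order(max_n=8):
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(orbitals, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    parts = []
    for n, l in madelung_order():
        if z <= 0:
            break
        occ = min(z, 2 * (2 * l + 1))   # subshell capacity
        parts.append(f"{n}{L_LABELS[l]}{occ}")
        z -= occ
    return " ".join(parts)

print(configuration(15))   # phosphorus: 1s2 2s2 2p6 3s2 3p3

For phosphorus (Z = 15) this reproduces the configuration quoted earlier; for chromium (Z = 24) it prints the naïve ending 4s2 3d4 instead of the observed 4s1 3d5, which is exactly the kind of exception the following sections discuss.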
Periodic table

[Figure: electron configuration table.]

The form of the periodic table is closely related to the electron configuration of the atoms of the elements. For example, all the elements of group 2 have an electron configuration of [E] ns2 (where [E] is an inert gas configuration), and have notable similarities in their chemical properties. In general, the periodicity of the periodic table in terms of periodic table blocks is clearly due to the number of electrons (2, 6, 10, 14...) needed to fill s, p, d, and f subshells. The outermost electron shell is often referred to as the "valence shell" and (to a first approximation) determines the chemical properties. It should be remembered that the similarities in the chemical properties were remarked on more than a century before the idea of electron configuration.[12] It is not clear how far Madelung's rule explains (rather than simply describes) the periodic table,[13] although some properties (such as the common +2 oxidation state in the first row of the transition metals) would obviously be different with a different order of orbital filling.

Shortcomings of the Aufbau principle

The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly[14] (although there are mathematical approximations available, such as the Hartree–Fock method).

Ionization of the transition metals

The naïve application of the Aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n+ℓ = 4 (n = 4, ℓ = 0) while the 3d-orbital has n+ℓ = 5 (n = 3, ℓ = 2). After calcium, most neutral atoms in the first series of transition metals (Sc–Zn) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons". The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals.[15] The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe4+, Fe3+, Fe2+, Fe+, Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ...
This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly doesn't. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree–Fock method of atomic structure calculation.[16] Similar ion-like 3dx 4s0 configurations occur in transition metal complexes as described by the simple crystal field theory, even if the metal has oxidation state 0. For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands. The electron configuration of the central chromium atom is described as 3d6, with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic, meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory, the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom.

Other exceptions to Madelung's rule

There are several more exceptions to Madelung's rule among the heavier elements, and it is more and more difficult to resort to simple explanations, such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations,[17] which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects[18] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals.[19] The table below shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page).
Electron shells filled in violation of Madelung's rule[20] (violating entries were highlighted in red in the original table; that highlighting is lost here):

Period 4:
Scandium 21: [Ar] 4s2 3d1
Titanium 22: [Ar] 4s2 3d2
Vanadium 23: [Ar] 4s2 3d3
Chromium 24: [Ar] 4s1 3d5
Manganese 25: [Ar] 4s2 3d5
Iron 26: [Ar] 4s2 3d6
Cobalt 27: [Ar] 4s2 3d7
Nickel 28: [Ar] 4s2 3d8 or [Ar] 4s1 3d9 (disputed)[21]
Copper 29: [Ar] 4s1 3d10
Zinc 30: [Ar] 4s2 3d10

Period 5:
Yttrium 39: [Kr] 5s2 4d1
Zirconium 40: [Kr] 5s2 4d2
Niobium 41: [Kr] 5s1 4d4
Molybdenum 42: [Kr] 5s1 4d5
Technetium 43: [Kr] 5s2 4d5
Ruthenium 44: [Kr] 5s1 4d7
Rhodium 45: [Kr] 5s1 4d8
Palladium 46: [Kr] 4d10
Silver 47: [Kr] 5s1 4d10
Cadmium 48: [Kr] 5s2 4d10

Period 6:
Lanthanum 57: [Xe] 6s2 5d1
Cerium 58: [Xe] 6s2 4f1 5d1
Praseodymium 59: [Xe] 6s2 4f3
Neodymium 60: [Xe] 6s2 4f4
Promethium 61: [Xe] 6s2 4f5
Samarium 62: [Xe] 6s2 4f6
Europium 63: [Xe] 6s2 4f7
Gadolinium 64: [Xe] 6s2 4f7 5d1
Terbium 65: [Xe] 6s2 4f9
Lutetium 71: [Xe] 6s2 4f14 5d1
Hafnium 72: [Xe] 6s2 4f14 5d2
Tantalum 73: [Xe] 6s2 4f14 5d3
Tungsten 74: [Xe] 6s2 4f14 5d4
Rhenium 75: [Xe] 6s2 4f14 5d5
Osmium 76: [Xe] 6s2 4f14 5d6
Iridium 77: [Xe] 6s2 4f14 5d7
Platinum 78: [Xe] 6s1 4f14 5d9
Gold 79: [Xe] 6s1 4f14 5d10
Mercury 80: [Xe] 6s2 4f14 5d10

Period 7:
Actinium 89: [Rn] 7s2 6d1
Thorium 90: [Rn] 7s2 6d2
Protactinium 91: [Rn] 7s2 5f2 6d1
Uranium 92: [Rn] 7s2 5f3 6d1
Neptunium 93: [Rn] 7s2 5f4 6d1
Plutonium 94: [Rn] 7s2 5f6
Americium 95: [Rn] 7s2 5f7
Curium 96: [Rn] 7s2 5f7 6d1
Berkelium 97: [Rn] 7s2 5f9
Lawrencium 103: [Rn] 7s2 5f14 7p1
Rutherfordium 104: [Rn] 7s2 5f14 6d2

The electron-shell configuration of elements beyond rutherfordium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120.[22]

Electron configuration in molecules

In molecules, the situation becomes more complex, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry,[23] rather than the atomic orbital labels used for atoms and monatomic ions: hence, the electron configuration of the dioxygen molecule, O2, is 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2.[1] The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory. The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings.

Electron configuration in solids

In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.

Applications

The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. In effect, electron configurations, along with some simplified form of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form.
This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using an ever larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the Aufbau principle. Not all methods in computational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method which discards the model.

For atoms or molecules with more than one electron, the motion of electrons is correlated and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration. However, the electronic wave function is usually dominated by a very small number of configurations, and therefore the notion of electronic configuration remains essential for multi-electron systems.

A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to supplement the electron configuration with one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.

References

1. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "configuration (electronic)".
2. In formal terms, the quantum numbers n, ℓ and m arise from the fact that the solutions to the time-independent Schrödinger equation for hydrogen-like atoms are based on spherical harmonics.
4. Weisstein, Eric W. (2007). "Electron Orbital". Wolfram.
5. Ebbing, Darrell D.; Gammon, Steven D. (2007). General Chemistry. p. 284. ISBN 978-0-618-73879-3.
6. Bohr, Niels (1923). "Über die Anwendung der Quantumtheorie auf den Atombau. I". Zeitschrift für Physik 13: 117. doi:10.1007/BF01328209.
7. Stoner, E. C. (1924). "The distribution of electrons among atomic levels". Philosophical Magazine (6th Ser.) 48 (286): 719–36. doi:10.1080/14786442408634535.
8. Pauli, Wolfgang (1925). "Über den Einfluss der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeemaneffekt". Zeitschrift für Physik 31: 373. doi:10.1007/BF02980592. English translation from Scerri, Eric R. (1991). "The Electron Configuration Model, Quantum Mechanics and Reduction". Br. J. Phil. Sci. 42 (3): 309–25. doi:10.1093/bjps/42.3.309.
9. Madelung, Erwin (1936). Mathematische Hilfsmittel des Physikers. Berlin: Springer.
10. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "aufbau principle".
11. Wong, D. Pan (1979). "Theoretical justification of Madelung's rule". Journal of Chemical Education 56 (11): 714–18. doi:10.1021/ed056p714.
12. The similarities in chemical properties and the numerical relationship between the atomic weights of calcium, strontium and barium was first noted by Johann Wolfgang Döbereiner in 1817.
13. Scerri, Eric R. (1998). "How Good Is the Quantum Mechanical Explanation of the Periodic System?". Journal of Chemical Education 75 (11): 1384–85. doi:10.1021/ed075p1384. Ostrovsky, V. N. (2005). "On Recent Discussion Concerning Quantum Justification of the Periodic Table of the Elements". Foundations of Chemistry 7 (3): 235–39. doi:10.1007/s10698-005-2141-y.
14. Electrons are identical particles, a fact that is sometimes referred to as "indistinguishability of electrons". A one-electron solution to a many-electron system would imply that the electrons could be distinguished from one another, and there is strong experimental evidence that they can't be. The exact solution of a many-electron system is an n-body problem with n ≥ 3 (the nucleus counts as one of the "bodies"): such problems have evaded analytical solution since at least the time of Euler.
15. There are some cases in the second and third series where the electron remains in an s-orbital.
16. Melrose, Melvyn P.; Scerri, Eric R. (1996). "Why the 4s Orbital is Occupied before the 3d". Journal of Chemical Education 73 (6): 498–503. doi:10.1021/ed073p498.
18. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "relativistic effects".
19. Pyykkö, Pekka (1988). "Relativistic effects in structural chemistry". Chem. Rev. 88 (3): 563–94. doi:10.1021/cr00085a006.
21. Scerri, Eric R. (2007). The Periodic Table: Its Story and Its Significance. Oxford University Press. pp. 239–240. ISBN 0-19-530573-6.
23. The labels are written in lowercase to indicate that they correspond to one-electron functions. They are numbered consecutively for each symmetry type (irreducible representation in the character table of the point group for the molecule), starting from the orbital of lowest energy for that type.

Further reading

Jolly, William L. (1991). Modern Inorganic Chemistry (2nd ed.). New York: McGraw-Hill. pp. 1–23. ISBN 0-07-112651-1.
Scerri, Eric (2007). The Periodic System, Its Story and Its Significance. New York: Oxford University Press. ISBN 0-19-530573-6.
Wednesday, August 29, 2012 Quantum Gravity and Taxes The other day I got caught in a conversation about the Royal Institute of Technology and how it deals with value added taxes. After the third round of explanation, I still hadn't quite understood the Swedish tax regulations. This prompted my conversation partner to remark that Swedish taxes are more complicated than my research. The only thing I can say in my defense is that in a very real sense taxes are indeed more complicated than quantum gravity. True, the tax regulations you have to deal with to get through life are more a matter of available information than of understanding. Applying the right rule in the right place requires less knowledge than you need for, say, the singularity theorems in general relativity. In the end taxes are just basic arithmetic manipulations. But what's the basis of these rules? Where do they come from? Tax regulations, laws in general, and also social norms have evolved along with our civilizations. They're results of a long history of adaptation and selection in a highly complex, partly chaotic, system. This result is based on vague concepts like "fairness", "higher powers", or "happiness", that depend on context and culture and change with time. If you think about it too much, the only reason our societies' laws and norms work is inertia. We just learn how our environment works, and most of us, most of the time, play by the rules. We adapt and slowly change the rules along with our adaptation. But ask where the rules come from or by what principles they evolve, and you'll have a hard time coming up with a good reason for anything. If you make it more than five why's down the line, I cheer for you. We don't have the faintest clue how to explain human civilization. Nobody knows how to derive human rights from the initial conditions of the universe. People in general, and men in particular, with all their worries and desires, their hopes and dreams, do not make much sense to me, fundamentally. I have no clue why we're here or what we're here for, and in comparison to understanding Swedish taxes, quantizing gravity seems like a neatly well-defined and solvable problem. Saturday, August 25, 2012 How to beat a cosmic speeding ticket xkcd: The Search After I had spent half a year doing little more than watching babies grow and writing a review article on the minimal length, I got terribly bored with myself. So I'm apparently one of the world experts on quantum field theories with a minimal length scale. That was not exactly among my childhood aspirations. As a child I had a (mercifully passing) obsession with science fiction. To this day contact with extraterrestrial intelligent beings is to me one of the most exciting prospects of technological progress. I think the plausible explanation why we have so far not made alien contact is that they use a communication method we have not yet discovered, and if there is any way to communicate faster than the speed of light, clearly that's what they would use. Thus, we should work on building a receiver for the faster-than-light signals! Except, well, that our present theories don't seem to allow for such signals to begin with. Every day is a winding road, and after many such days I found myself working on quantum gravity. So when the review was finally submitted, I thought it was time to come back to superluminal information exchange, which resulted in a paper that's now published. The basic idea isn't so difficult to explain.
The reason it is generally believed that nothing can travel faster than the speed of light is that Einstein's special relativity sets the speed of light as a limit for all matter that we know. The assumptions for that argument are few, the theory is extremely well in agreement with experiment, and the conclusion is difficult to avoid. Strictly speaking, special relativity does not forbid faster-than-light propagation. However, since in special relativity a signal moving forward in time faster than the speed of light for one observer might appear like a signal moving backwards in time for another observer, this can create causal paradoxa (a short calculation below makes this explicit). There are three common ways to allow superluminal signaling, and each has its problems: First, there are wormholes in general relativity, but they generically also lead to causality problems. And how creation, manipulation, and sending signals through them would work is unclear. I've never been a fan of wormholes. Second, one can just break Lorentz invariance and avoid special relativity altogether. In this case one introduces a preferred frame, and observer independence is violated. This avoids causal paradoxa because there's now a distinguished direction "forward" in time. The difficulty here is that special relativity describes our observations extremely well and we have no evidence for Lorentz-invariance violation whatsoever. One then has some explaining to do as to why we have not noticed violations of Lorentz invariance before. Many people are working on Lorentz invariance violation, and that by itself limits my enthusiasm. Third, there are deformations of special relativity which avoid an explicit breaking of Lorentz invariance by changing the Lorentz transformations. In this case, the speed of light becomes energy-dependent, so that photons with high energy can, in principle, move arbitrarily fast. Since in this case everybody agrees that a photon moves forward in time, this does not create causal paradoxa, at least not just because of the superluminal propagation. I was quite excited about this possibility for a while, but after some years of back and forth I've convinced myself that deformed special relativity creates more problems than it solves. It suffers from various serious difficulties that prevent a recovery of the standard model and general relativity in the suitable limits, notoriously the problem of multi-particle states and non-locality (which we discussed here). So, none of these approaches is very promising, and one is really very constrained in the possible options. The symmetry group of Minkowski space is the Lorentz group plus translations. It has one free parameter and that's the speed of massless particles. It's a limiting speed. End of story. There really doesn't seem to be much wiggle room in that. Then it occurred to me that it is not actually difficult to allow several different speeds of light to be invariant, as long as one can never measure them at the same time. And that would be the case if one had particles propagating in a background that is a superposition of Minkowski spaces with different speeds of light, because in this case you would use, for each speed of light, the Lorentz transformation that belongs to it. In other words, you blow up the Lorentz group to a one-parameter family of groups that acts on a set of spaces with different speeds of light. You have to expect the probability for a particle to travel through an eigenspace that does not belong to the measured speed of light to be small, so that we haven't yet noticed.
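To make the backwards-in-time statement above concrete, here is the standard textbook check (a sketch added for illustration, not part of the original post). For a signal covering $\Delta x = w\Delta t$ with speed $w > c$ in one frame, an observer moving with velocity $u$ assigns it the time interval

• $\Delta t^\prime = \gamma\left(\Delta t - \frac{u\Delta x}{c^2}\right) = \gamma\Delta t\left(1 - \frac{uw}{c^2}\right)$,

so for any $w > c$ there are admissible observers with $c^2/w < u < c$ for whom $\Delta t^\prime < 0$: they see the signal arrive before it was sent.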
To good precision, the background that we live in must be in an eigenstate, but it might have a small admixture of other speeds, faster and slower. Particles then have a small probability to travel faster than the speed of light through one of these spaces. If you measure a state that was in a superposition, you collapse the wavefunction to one eigenstate, or let us better say it decoheres. This decoherence introduces a preferred frame (the frame of the measurement), which is how causal paradoxa are avoided: there is a notion of forward that comes in through the measurement. In contrast to the case in which Lorentz invariance is violated though, this preferred frame does not appear on the level of the Lagrangian - it is not fundamentally present. And in contrast to deformations of special relativity, there is no issue here with locality, because two observers never disagree on the paths of two photons with different speeds: instead of there being two different photons, there's only one, but it's in a superposition. Once measured, all observers agree on the outcome. So there's no Box Problem. That having been said, I found it possible to formulate this idea in the language of quantum field theory. (It wasn't remotely as straightforward as this summary might make it appear.) In my paper, I then proposed a parameterization of the occupation probability of the different speed-of-light eigenspaces and the probability of particles to jump from one eigenstate to another upon interaction. So far so good. Next one would have to look at modifications of standard model cross-sections and see if there is any hope that this theoretical possibility is actually realized in nature. We still have a long way to go on the way to build the cell phone to talk to aliens. But at least we know now that it's not incompatible with special relativity. Wednesday, August 22, 2012 How do science blogs change the face of science? The blogosphere is coming of age, and I'm doing my annual contemplation of its influence on science. Science blogs of course have an educational mission, and many researchers use them to communicate the enthusiasm they have for their research, whether by discussing their own work or that of colleagues. But blogs were also deemed useful to demonstrate that scientists are not all dusty academics, withdrawn professors or introverted nerds who sit all day in their office, shielded by piles of books and papers. Physics and engineering are fields where these stereotypes are quite common - or should I say "used to be quite common"? Recently I've been wondering whether the perception of science that the blogosphere has created isn't replacing the old nerdy stereotype with a new stereotype. Because the scientists who blog are the ones who are most visible, yet not the ones who are actually very representative characters. This leads to the odd situation in which the avid reader of blogs, who otherwise doesn't have much contact with academia, is left with the idea that scientists are generally interested in communicating their research. They also like to publicly dissect their colleagues' work. And, judging from the photos they post, they seem to spend a huge amount of time travelling. Not to mention that, well, they all like to write. Don't you also think they all look a little like Brian Cox? I find this very ironic. Because the nerdy stereotype for all its inaccuracy still seems to fit better.
Many of my colleagues do spend 12 hours a day in their office scribbling away equations on paper or looking for a bug in their code. They'd rather die than publicly comment on anything. Their Facebook accounts are deserted. They think a hashtag is a drug, and the only photo on their iPhone shows that instant when the sunlight fell through the curtains just so that it made a perfect diffraction pattern on the wall. They're neither interested in nor able to communicate their research to anybody except their close colleagues. And, needless to say, very few of them have even a remote resemblance to Brian Cox. So the funny situation is that my online friends and contacts think it's odd if one of my colleagues is not available on any social networking platform. Do they even exist for real? And my colleagues still think I'm odd for taking part in all this blogging stuff and so on. I'm not sure at all these worlds are going to converge any time soon. Sunday, August 19, 2012 Book review: "Why does the world exist?" by Jim Holt Why Does the World Exist?: An Existential Detective Story By Jim Holt Liveright (July 16, 2012) Yes, I do sometimes wonder why the world exists. I believe however it is not among the questions that I am well suited to find an answer to, and thus my enthusiasm is limited. While I am not disinterested in philosophy in principle, I get easily frustrated with people who use words as if they had any meaning that's not a human construct, words that are simply ill-defined unless the humans themselves and their language are explained too. I don't seem to agree with Max Tegmark on many points, but I agree that you can't build fundamental insights on words that are empty unless one already has these fundamental insights - or wants to take the anthropic path. In other words, if you want to understand nature, you have to do it with a self-referential language like mathematics, not with English. Thus my conviction that if anybody is to understand the nature of reality, it will be a mathematician or a theoretical physicist. For these reasons I'd never have bought Jim Holt's book. I was however offered a free copy by the editor. And, thinking that I should broaden my horizon when it comes to the origin of the universe and the existence or absence of final explanations, I read it. Holt's book is essentially a summary of thoughts on the question why there isn't nothing, covering the history of the question as well as the opinions of currently living thinkers. The narrative of the book is Holt's own quest for understanding that led him to visit and talk to several philosophers, physicists and other intellectuals, including Steven Weinberg, Alan Guth and David Deutsch. Many others are mentioned or cited, such as Stephen Hawking, Max Tegmark and Roger Penrose. The book is very well written, though Holt has a tendency to list exactly what he ate and drank when and where, which takes up more space than it deserves. There are more bottles of wine and more deaths on the pages of his book than I had expected, though that is balanced with a good sense of humor. Since Holt arranges his narrative along his travel rather than by topic, the book is sometimes repetitive when he reminds the reader of something (eg the "landscape") that was already introduced earlier. I am very impressed by Holt's interviews. He has clearly done a lot of his own thinking about the question. His explanations are open-minded and radiate well-meaning, but he is sharp and often critical.
In many cases what he says is much more insightful than what his interview partners have to offer. Holt's book is a good summary of just how bizarre the world is. The only person quoted in this book who made perfect sense to me is Woody Allen. On the very opposite end is a philosopher named Derek Parfit, who hates the "scientizing" of philosophy, and some of his colleagues who believe in "panpsychism", undeterred by the total lack of scientific evidence. The reader of the book is also confronted with John Updike, who belabors the miserable state of string theory ("This whole string theory business... There's never any evidence, right? There are men spending their whole careers working on a theory of something that might not even exist"), and Alex Vilenkin, who has his own definition of "nothing," which, if you ask me, is a good way to answer the question. Towards the end of the book Jim Holt also puts forward his own solution to the problem of why there is something rather than nothing. Let me give you a flavor of that proof: "Reality cannot be perfectly full and perfectly empty at the same time. Nor can it be ethically the best and causally the most orderly at the same time (since the occasional miracle could make reality better). And it certainly can't be the ethically best and the most evil at the same time." Where to even begin? Every second word in this "proof" is undefined. How can one attempt to make an argument along these lines without explaining "ethically best" in terms that are not taken out of the universe whose existence is supposed to be explained? Not to mention that all along his travel, nobody seems to have told Holt that, shockingly, there isn't only one system of logic, but a whole selection of them. This book has been very educational for me indeed. Now I know the names of many isms that I do not want to know more about. I hate to think that I'd have missed this book if it hadn't been for the free copy in my mailbox. That having been said, to get anything out of this book you need to come with an interest in the question already. Do not expect the book to create this interest. But if you come with this interest, you'll almost surely enjoy reading it. Wednesday, August 15, 2012 "Rapid streamlined peer-review" and its results Contains 0% Quantum Gravity. "Scientific Reports" is a new open access journal from the Nature Publishing Group, which advertises its "rapid peer review and publication of research... with the support of an external Editorial Board and a streamlined peer-review system." In this journal I recently found this article: "Testing quantum mechanics in non-Minkowski space-time with high power lasers and 4th generation light sources", B. J. B. Crowley et al., Scientific Reports 2, Article number 491. Note the small volume number, all fresh and innocent. It's a quite interesting article that calculates the cross-section of photons scattering off electrons that are collectively accelerated by a high intensity laser. The possibility to maybe test Unruh radiation in a similar fashion has lately drawn some attention, see eg this paper. But this is explicitly not the setup that the authors of the present paper are after, as they write themselves in the text. What is remarkable about this paper is the amount of misleading and wrong statements about exactly what it is they are testing and what not. In the title it says they are testing "quantum mechanics in non-Minkowski space-time." What might that mean, I was wondering?
Initially I thought it was another test of space-time non-commutativity, which is why I read the paper in the first place. The first sentence of the abstract reads: "A common misperception of quantum gravity is that it requires accessing energies up to the Planck scale of 10^19 GeV, which is unattainable for any conceivable particle collider." Two sentences later, the authors no longer speak of quantum gravity but of "a semiclassical extension of quantum mechanics ... under the assumption of weak gravity." So what's non-Minkowski then? And where's quantum gravity? What they do in fact in the paper is calculate the effect of the acceleration on the electrons and argue that via the equivalence principle this should be equivalent to testing the influence of gravity. (At least locally, though there's not much elaboration on this point in the paper.) Now, strictly speaking we of course never make any experiment in Minkowski space - after all, we sit in a gravitational field. In the same sense we have countless tests of the semi-classical limit of Einstein's field equations. So I read on, still wondering: what is it that they test? In the first paragraph the reader then learns that the Newton-Schrödinger equation (which we discussed here) is necessary "to obtain a consistent description of experimental findings", with a reference to Carlip's paper and a paper by Penrose on state reduction. Clearly a misunderstanding, or maybe they didn't actually read the papers they cite. They also don't actually use the Schrödinger-Newton equation however - as I said, there isn't actually a gravitational field in their setup. "We do not concern ourselves with the quantized nature of the gravitational field itself." Fine, no need to quantize what's not there. Then on page two the reader learns: "Our goal is to design an experiment where it may be possible to test some aspects of general relativity..." Okay, so now they're testing neither quantum mechanics nor quantum gravity, nor the Schrödinger-Newton equation, nor semi-classical gravity, but general relativity? Though, since there's no curvature involved, it would be more like testing the equivalence principle, no? But let's move on. We come across the following sentence: "[T]he most prominent manifestation of quantum gravity is that black holes radiate energy at the universal temperature - the Hawking temperature." Leaving aside that one can debate how "prominent" an effect black hole evaporation is, it's also manifestly wrong. Black hole evaporation is an effect of quantum field theory in curved spacetime. It's not a quantum gravitational effect; that's the exact reason why it's been dissected for decades. The authors then go on to talk about Unruh radiation and make an estimate showing that they are not testing this regime. Then follows the actual calculation, which, as I said, is in principle interesting. But at the end of the calculation we are informed that this "provid[es], for the first time, a direct way to determine the validity of the models of quantum mechanics in curved space-time, and the specific details of the coupling between classical and quantized fields." Except that there isn't actually any curved space-time in this experiment, unless they mean the gravitational field of the Earth. And the coupling to this has been tested, for example in this experiment (and in some follow-up experiments to it), which the authors don't seem to be aware of or at least don't cite.
Again, at the very best I think they're proposing to test the equivalence principle. In the closing paragraph they then completely discard the important qualifier that the space-time is not actually curved and that it's at best an indirect test, by claiming that, on the contrary, "[T]he scientific case described in this letter is very compelling and our estimates indicate that a direct test of the semiclassical theory of quantum mechanics in curved space-time will become possible." Emphasis mine. So, let's see what we have. We started with a test of quantum mechanics in non-Minkowski space, came across some irrelevant mentioning of quantum gravity, a misplaced referral to the Schrödinger-Newton equation, testing general relativity in the lab, further irrelevant and also wrong comments about quantum gravity, to direct tests of quantum mechanics in curved space-time. All by looking at a bunch of electrons accelerated in a laser beam. Misleading doesn't even begin to capture it. I can't say I'm very convinced by the quality standard of this new journal. Sunday, August 12, 2012 What is transformative research and why do we need it? Why do we need it? How can we support potentially transformative research? So what can be done? Thursday, August 09, 2012 Book review: "Thinking, fast and slow" by Daniel Kahneman Thinking, Fast and Slow By Daniel Kahneman Farrar, Straus and Giroux (October 25, 2011) I am always on the lookout for ways to improve my scientific thinking. That's why I have an interest in the areas of sociology concerned with decision making in groups and how the individual is influenced by this. And this is also why I have an interest in cognitive biases - intuitive judgments that we make without even noticing; judgments which are just fine most of the time but can be scientifically fallacious. Daniel Kahneman's book "Thinking, fast and slow" is an excellent introduction to the topic. Kahneman, winner of the Nobel Prize in Economics in 2002, focuses mostly on his own work, but that covers a lot of ground. He starts with distinguishing between two different modes in which we make decisions, a fast and intuitive one, and a slow, more deliberate one. Then he explains how fast intuitions lead us astray in certain circumstances. The human brain does not make very accurate statistical computations without deliberate effort. But often we don't make such an effort. Instead, we use shortcuts. We substitute questions, extrapolate from available memories, and try to construct plausible and coherent stories. We tend to underestimate uncertainty, are influenced by the way questions are framed, and our intuition is skewed by irrelevant details. Kahneman quotes and summarizes a large number of studies that have been performed, in most cases with sample questions. He offers explanations for the results when available, and also points out where the limits of present understanding are. In the later parts of the book he elaborates on the relevance of these findings about the way humans make decisions for economics. While I had previously come across a big part of the studies that he summarizes in the early chapters, the relation to economics had not been very clear to me, and I found this part enlightening. I now understand my problems trying to tell economists that humans do have inconsistent preferences. The book introduces a lot of terminology, and at the end of each chapter the reader finds a few examples for how to use them in everyday situations.
“He likes the project, so he thinks its costs are low and its benefits are high. Nice example of the affect heuristic.” “We are making an additional investment because we do not want to admit failure. This is an instance of the sunk-cost fallacy.” Initially, I found these examples somewhat awkward. But awkward or not, they serve very well for the purpose of putting the terminology in context. The book is well written, reads smoothly, is well organized, and thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman's two most influential papers, which contain somewhat more detail than the summary in the text. He narrates along the story of his own research projects and how they came into being, which I found a little tiresome after he elaborated on the third dramatic insight that he had about his own cognitive bias. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come by that way. I have found this book very useful in my effort to understand myself and the world around me. I have only two complaints. One is that despite all the talk about the relevance of proper statistics, Kahneman does not mention the statistical significance of any of the results that he talks about. Now, this is all research which started two or three decades ago, so I have little doubt that the effects he talks about are indeed meanwhile well established, and, hey, he got a Nobel Prize after all. Yet, if it wasn't for that, I'd have to consider the possibility that some of these effects will vanish as statistical artifacts. Second, he does not at any time actually explain to the reader the basics of probability theory and Bayesian inference, though he uses them repeatedly. This, unfortunately, limits the usefulness of the book dramatically if you don't already know how to compute probabilities. It is particularly bad when he gives a terribly vague explanation of correlation. Really, the book would have been so much better if it had at least an appendix with some of the relevant definitions and equations. That having been said, if you know a little about statistics you will probably find, like I did, that you've learned to avoid at least some of the cognitive biases that deal with explicit ratios and percentages, and different ways to frame these questions. I've also found that when it comes to risks and losses my tolerance apparently does not agree with that of the majority of participants in the studies he quotes. Not sure why that is. Either way, whether or not you are subject to any specific bias that Kahneman writes about, the frequency with which they appear makes them relevant to understanding the way human society works, and they also offer a way to improve our decision making. In summary, it's a well-written and thoroughly useful book that is interesting for everybody with an interest in human decision-making and its shortcomings. I'd give this book four out of five stars. Below are some passages that I marked that gave me something to think about. This will give you a flavor of what the book is about. “A reliable way of making people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth.” “[T]he confidence that people experience is determined by the coherence of the story they manage to construct from available information.
It is the consistency of the information that matters for a good story, not its completeness.” “It is useful to remember […] that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is cost-less is wrong.” “A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.” “The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.” “When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.” “We tend to make decisions as problems arise, even when we are specifically instructed to consider them jointly. We have neither the inclination nor the mental resources to enforce consistency on our preferences, and our preferences are not magically set to be coherent, as they are in the rational-agent model.” “The sunk-cost fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. I have often observed young scientists struggling to salvage a doomed project when they would be better advised to drop it and start a new one.” Tuesday, August 07, 2012 Why does the baby cry? Fact sheet. Gloria at 2 months, crying. Two weeks after delivery, when the husband went back to work and my hemoglobin level had recovered enough to let me think about anything besides breathing, I seemed to be spending a lot of time on The One Question: Why does the baby cry? We had been drowned in baby books that all had something helpful to say. Or so I believe, not having read them. But what really is the evolutionary origin of all that crying to begin with? That's what I was wondering. Is there a reason to begin with? You don't need a degree to know that a baby cries if she's unhappy. After a few weeks I had developed a trouble-shooting procedure roughly like this: Does she have a visible reason to be unhappy? Does she stop crying if I pick her up? New diaper? Clothes comfortable? Too warm? Too cold? Is she bored? Is it possible to distract her? Hungry? When I had reached the end of my list I'd start singing. The singing almost always helped. After that, there's the stroller and white noise and earplugs. Yes, the baby cries when she's unhappy, no doubt about that. But both Lara and Gloria would sometimes cry for no apparent reason, or at least no reason that Stefan and I were able to figure out. The crying is distressing for the parents and costs the baby energy. So why, if it's such an inefficient communication channel, does the baby cry so much? If the baby is trying to tell us something, why haven't hundreds of thousands of years of evolution been sufficient to teach caregivers what it is that she wants? I came up with the following hypotheses: A) She doesn't cry for any reason, it's just what babies do. I wasn't very convinced of this because it doesn't actually explain anything. B) She cries so I don't misplace or forget about her. I wasn't very convinced of this either because after two months or so, my brain had classified the crying as normal background noise.
Also, babies seem to cry so much that it overshoots the target: It doesn’t only remind the caregivers, it frustrates them. C) It’s a stress-test. If the family can’t cope well, it’s of advantage for the future reproductive success of the child if the family breaks up sooner rather than later. D) It’s an adaptation delay. The baby is evolutionarily trained to expect something other than what it gets in modern western societies. If I just treated the baby like my ancestors did, she wouldn’t cry so much. So I went and looked at what the scientific literature has to say. I found a good review by Joseph Soltis from the year 2004 which you can download here. The below is my summary of these 48 pages. First, let us clarify what we’re talking about. The crying of human infants changes after about 3 months because the baby learns to make more complex sounds and also becomes more interactive. In the following we’ll only consider the first three months, which are most likely to be nature rather than nurture. Here are some facts about the first three months of baby’s crying that seem to be established pretty well. All references can be found in Soltis’ paper. • Crying increases until about 6 weeks after birth, followed by a gradual decrease in crying until 3 or 4 months, after which it remains relatively stable. Crying is more frequent in the later afternoon and early evening hours. These crying patterns have been found in studies of very different cultures, from the Netherlands, from South African hunter-gatherers, from the UK, Manila, Denmark, and North America. • Chimpanzees too have a peak in crying frequency at approximately 6 weeks of life, and a substantial decline in crying frequency by 12 weeks. • The cries of healthy, non-stressed infants last on average 0.5-1.5 seconds with a fundamental pitch in the range of 200-600 Hz. The melody is either falling or rising/falling (as opposed to rising, falling/rising or flat). • Serious illness, both genetic and acquired, is often accompanied by abnormal crying. The most common cry characteristic indicating serious pathology is an unusually high pitched cry, in one case study above 2000 Hz, and in many other studies exceeding 1500 Hz. (That’s higher than most sopranos can sing.) Examples are: bacterial meningitis 750-1000 Hz, Krabbe’s disease up to 1120 Hz, hypoglycemia up to 1600 Hz. Other abnormal cry patterns that have been found in illness are biphonation (the simultaneous production of two fundamental frequencies), unusually low pitch, and deviations from the normal cry melodies. • Various studies have been conducted to find out how well adults are able to tell the reason for a baby’s cry by playing them previously recorded cries. These studies show mothers are a little bit better than random chance when given a predefined selection of choices (eg pain, anger, other, in one study), but by and large mothers as well as other adults are pretty bad at figuring out the reason for a baby’s cry. Without being given categories, participants tend to attribute all cries to hunger. • It has been reported in several papers that parents described a baby’s crying as the most proximate cause triggering abuse and infanticide. It has also been shown that especially the high pitched baby cries produce a response of the autonomic nervous system, measurable for example by the heart rate or skin conductance (the response is higher than for smiling babies). It has also been shown that abusers exhibit higher autonomic responses to high-pitched cries than non-abusers. 
• Excessive infant crying is the most common clinical complaint of mothers with infants under three months of age. • Excessive infant crying that begins and ends without warning is called “colic.” It is often attributed to organic disorders, but if the baby has no other symptoms it is estimated that only 5-10% of “colic” cases go back to an organic disorder, the most common one being lactose intolerance. If the baby has other symptoms (flexed legs, spasm, bloating, diarrhea), the ratio of organic disorders goes up to 45%. The rest cry for unknown reasons. Colic usually improves by 4 months, or so they tell you. (Lara’s didn’t improve until she was 6 months. Gloria never had any.) • Colic is correlated with postpartum depression, which is in turn robustly associated with reduced maternal care. • Records and media reports kept by the National Center on Shaken Baby Syndrome implicate crying as the most common trigger. • In a survey among US mothers, more infant crying was associated with lower levels of perceived infant health, more worry about the baby’s health, and less positive emotion towards the infant. • Some crying bouts are demonstrably unsoothable by typical caregiving responses in the first three months. Well, somebody has to do these studies. • In studies of nurses judging infant pain, the audible cry was mostly redundant to facial activity in the judgment of pain. Now let us look at the hypotheses researchers have put forward and how well they are supported by the facts. Again, let me mention that everybody agrees the baby cries when in distress; the question is whether that’s the entire reason. 1. Honest signal of need. The baby cries if and only if she needs or wants something, and she cries to alert the caregivers of that need. This hypothesis is not well supported by the facts. Baby’s cries are demonstrably inefficient at bringing the baby the care it allegedly needs, because caregivers don’t know what she wants and in many cases there doesn’t seem to be anything they can do about it. This is the scientific equivalent of my hypothesis D, which I found not so convincing. 2. Signal of vigor. This hypothesis says that the baby cries to show she’s healthy. The more the baby cries (in the “healthy” pitch and melody range), the stronger she is and the more the mother should care, because it’s a good investment of her attention to raise offspring that’s likely to reproduce successfully. Unfortunately, there’s no evidence linking a high amount of crying to good health of the child. On the contrary, as mentioned above, parents perceive children as more sickly if they cry more, which is exactly the opposite of what the baby allegedly “wants” to signal. Also, lots of crying is apparently maladaptive according to the evidence listed above, because it can cause violence against the child. It’s also unclear why a not-so-vigorous child that isn’t seriously sick and too weak to cry should alert the caregivers to his lack of vigor and thereby invite neglect. It doesn’t seem to make much sense. This is the scientific equivalent of my hypothesis B which I didn’t find very convincing either. 3. Graded signal of distress. The baby cries if she’s in distress, and the more distress the more she cries. This hypothesis is, at least as far as pain is concerned, supported by evidence. Pretty much everybody seems to agree on that. As mentioned above however, while distress leads to crying, this leaves open the question why the baby is in distress to begin with and why it cries if caregivers can’t do anything about it. 
Thus, while this hypothesis is the least controversial one, it’s also the one with the smallest explanatory value. 4. Manipulation: The baby cries so mommy feeds her as often as possible. Breastfeeding stimulates the production of the hormone prolactin; prolactin inhibits estrogen production, which often (though not always) keeps the estrogen level below the threshold necessary for the menstrual cycle to set in. This is called lactational amenorrhea. In other words, the more the baby gets mommy to feed her, the smaller the probability that a younger sibling will compete for resources, thus improving the baby’s own well-being. The problem with this hypothesis is that it would predict the crying to increase when the mother’s body has recovered, some months after birth, and is in shape to carry another child. Instead however, at this time the babies cry less rather than more. (It also seems to say that having siblings is a disadvantage to one’s own reproductive success, which is quite a bold statement in my opinion.) 5. Thermoregulatory assistance. An infant’s thermoregulation is not very well developed, which is why you have to be so careful to wrap them warm when it’s cold and to keep them in the shade when it’s hot. According to this hypothesis the baby cries to make herself warm and also to alert the mother that she needs assistance with thermoregulation. It’s an interesting hypothesis that I hadn’t heard of before and it doesn’t seem to have been much studied. I would expect however that in this case the amount of crying would depend on the external temperature, and I haven’t come across any evidence for that. 6. Inadequacy of central arousal. The infant’s brain needs a certain level of arousal for proper development. Baby starts crying if not enough is going on, to upset herself and her parents. If there’s any factual evidence speaking for this I don’t know of it. It seems to be a very young hypothesis. I’m not sure how this is compatible with my finding that Lara, after excessive crying, would usually fall asleep, frequently in the middle of a cry, and that excitement (people, travel, noise) was a cause of crying too. 7. Underdeveloped circadian rhythm. The infant’s sleep-wake cycle is very different from an adult’s. Young babies basically don’t differentiate night from day. It’s only at around two to three months that they start sleeping through the night and develop a daily rhythm. According to this hypothesis it’s the underdeveloped circadian rhythm that causes the baby distress, probably because certain brain areas are not well synched with other daily variations. This makes a certain amount of sense because it offers a possible explanation for the daily return of crying bouts in the late afternoon, and also for why they fade when the babies sleep through the night. This too is a very young hypothesis that is waiting for good evidence. 8. Behavioral state. The baby’s mind knows three states: sleep, awake, and crying. It’s a very minimalistic hypothesis, but I’m not sure it explains anything. This is the scientific equivalent of my hypothesis A, the baby just cries. Apparently nobody ever considered my hypothesis C, that the baby cries to move herself into an optimally stable social environment, which would have developmental payoffs. It’s probably a very difficult case to make. The theoretical physicist in me is admittedly most attracted to one of the neat and tidy explanations in which the crying is a side-effect of a physical development. 
So if your baby is crying and you don’t know why, don’t worry. Even scientists who have spent their whole career on this question don’t actually know why the baby cries. Sunday, August 05, 2012 Erdös and amphetamines: check Some weeks ago I wrote a review of Jonah Lehrer's book "Imagine," in which I complained about missing references. Now that it turns out Lehrer fabricated quotes and facts on various occasions (see eg here and here), I recalled that I meant to look up a reference on an interesting story he told, that the famous mathematician Paul Erdös kept up his productivity by taking benzedrine. Benzedrine belongs to the amphetamines, also known as speed. Lehrer did not quote any source for this story. So I did look it up, and it turns out it's true. In Paul Hoffman's biography of Erdös one finds: Erdös first did mathematics at the age of three, but for the last twenty-five years of his life, since the death of his mother, he put in nineteen-hour days, keeping himself fortified with 10 to 20 milligrams of Benzedrine or Ritalin, strong espresso, and caffeine tablets. "A mathematician," Erdös was fond of saying, "is a machine for turning coffee into theorems." When friends urged him to slow down, he always had the same response: "There'll be plenty of time to rest in the grave." (You can read chapter 1 from the book, which contains this paragraph, here). Benzedrine was available on prescription in the USA during this time. Erdös lived to the age of 83. During his lifetime, he wrote or co-authored 1,475 academic papers. Lehrer also relates the following story in his book: Ron Graham, a friend and fellow mathematician, once bet Erdos five hundred dollars that he couldn't abstain from amphetamines for thirty days. Erdos won the wager but complained that the progress of mathematicians had been set back by a month: "Before, when I looked at a piece of blank paper, my mind was filled with ideas," he complained. "Now all I see is a blank piece of paper." (Omitted umlauts are Lehrer's, not mine.) Lehrer does not mention that Erdös was originally prescribed benzedrine to treat depression after his mother's death. I'm not sure exactly what the origin of this story is. It is mentioned in a slightly different wording in this PDF by Joshua Hill: Erdős's friends worried about his drug use, and in 1979 Graham bet Erdős $500 that he couldn't stop taking amphetamines for a month. Erdős accepted, and went cold turkey for a complete month. Erdős's comment at the end of the month was "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month." He then immediately started taking amphetamines again. Hill's article is not quoted by Lehrer, and there's no reference in Hill's article. It also seems to go back to Paul Hoffman's book (same chapter). (Note added: I revised the above paragraph, because I hadn't originally seen it in Hoffman's book.) Partly related: Calculate your Erdős number here, mine is 4. Friday, August 03, 2012 Lara and Gloria are presently very difficult. They have learned to climb onto the chairs and upwards from there; I constantly have to pick them off the furniture. Yesterday, I turned my back on them for a second, and when I looked again Lara was sitting on the table, happily pulling a string of Kleenex out of the box, while Gloria was moving away the chair Lara had used to climb up. 
During the last month, the girls have added a few more words to their vocabulary. The one that's most obvious to understand is "lallelalle," which is supposed to mean "empty" and is usually a message to me to refill the apple juice. Gloria has also taken a liking to the word "Haar" (hair), and she's been saying "Goya" for a while, which I believe means "Gloria". Or maybe yogurt. They both can identify most body parts if you name them. Saying "feet" will make them grab their feet, "nose" will have them point at their nose, and so on. If Gloria wants to make a joke, she'll go and grab her sister's nose instead. Gloria also announces that she needs a new diaper by patting her behind, alas after the fact. I meanwhile am stuck in proposal writing again. The organization for the conference in October and the program in November is going nicely, and I'm very much looking forward to both events. My recent paper was accepted for publication in Foundations of Physics, and I've wrapped up another project that had been in my drawer for a while. Besides this, I've spent some time reading up on the history of Nordita, which is quite interesting actually; maybe I'll have a post on this at some point. I finally said good bye to my BlackBerry and now have an iPhone, which works so amazingly smoothly that I'm deeply impressed. Below is a little video of the girls that I took the other day. YouTube is offering a fix for shaky videos, which is why you might see the borders moving around. I hope your summer is going nicely and that you have some time to relax! Wednesday, August 01, 2012 Letter of recommendation 2.0
Monday, April 30, 2012 Spring came late to Germany, but it seems it finally has arrived. The 2012 Riesling has the first leaves and the wheat is a foot high. Lara and Gloria are now 16 months old, almost old enough that we should start counting their age in fractions of years. This month's news is Lara's first molar, and Gloria's first word: I have been busy writing a proposal for the Swedish Research Council, which is luckily submitted now, and I also had a paper accepted for publication. Ironically, of all the papers that I wrote in the last years, it's the one that is the least original and cost me the least amount of time, yet it's the only one that smoothly went through peer review. Besides this, I'm spending my time with the organization of a workshop, a conference, and a four-week long program. I'm also battling a recurring ant infestation of our apartment, which is complicated by my hesitation to distribute toxins where the children play. Friday, April 27, 2012 The Nerdly Painter's Blog In expecto weekendum, I want to share with you the link to Regina Valluzzi's blog Nerdly Painter. Regina has a BS in Materials Science from MIT and a PhD in Polymer Science from the University of Massachusetts Amherst, and she does the most wonderful science-themed paintings I've seen. A teaser below. Go check out her blog and have a good start into the weekend! Wednesday, April 25, 2012 The Cosmic Ray Composition Problem A recent arXiv paper provides an update on the cosmic ray composition problem: First the basics: We're talking about the ultra-high energetic end of the cosmic ray spectrum, with total energies of about 10⁶ TeV. That's the energy of the incident particles in the Earth rest frame, not the center-of-mass energy of their collision with air molecules (ie mostly nucleons), which is "only" of the order 10 TeV, and thus somewhat larger than what the LHC delivers. After the primary collision, the incoming particles produce a cascade of secondary particles, known as a "cosmic ray shower," which can be detected on the ground. These showers are then reconstructed from the data with suitable software so that, ideally, the physics of the initial high energy collision can be extracted. For some more details on cosmic ray showers, please read this earlier post. Cosmic ray shower, artist's impression. Source: ASPERA The Pierre Auger Cosmic Ray Observatory is a currently running experiment that measures cosmic ray showers on the ground. One relevant quantity about the cosmic rays is the "penetration depth," that is, the distance the primary particle travels through the atmosphere until it makes the first collision. The penetration depth can be reconstructed if the shower on the ground can be measured sufficiently precisely, and is relatively new data. The penetration depth depends on the probability of the primary particle to interact, and with that on the nature of the particle. While we have never actually tested collisions at the center-of-mass energies of the most energetic cosmic rays, we think we have a pretty good understanding of what's going on by virtue of the standard model of particle physics. All the knowledge that we have, based on measurements at lower energies, is incorporated into the numerical models. Since the collisions involve nucleons rather than elementary particles, this goes together with an extrapolation of the parton distribution functions by the DGLAP equation. 
This sounds complicated, but since QCD is asymptotically free, it should actually get easier to understand at high energies. Shaham and Piran in their paper argue that this extrapolation isn't working as expected, which might be a signal of new physics. The reason is that the penetration depth data shows that at high energies the distribution of the incident particles' interaction depths peaks at a shorter depth and is also more strongly peaked than one expects for protons. Now it might be that at higher energies the cosmic rays are dominated by other primary particles, heavier ones, that are more likely to interact, thus moving the peak of the distribution to a shorter depth. However, if one adds a contribution from other constituents (heavier ions: He, Fe...), this also smears out the distribution over the depth, and thus doesn't fit the width of the observed penetration depth distribution. This can be seen very well from the figure below (Fig 2 from Shaham and Piran's paper) which shows the data from the Pierre Auger Collaboration, and the expectation for a composition of protons and Fe nuclei. You can see that adding a second component does have the desired effect of moving the average value to a shorter depth. But it also increases the width. (And, if the individual peaks can be resolved, produces a double-peak structure.) Fig 2 from arXiv:1204.1488. Shown is the number of events in the energy bin 1 to 1.25 × 10⁶ TeV as a function of the penetration depth. The red dots are the data from the Pierre Auger Collaboration (arXiv:1107.4804), the solid blue line is the expectation for a combination of protons and Fe nuclei. The authors thus argue that there is no composition of the ultra high energetic primary cosmic ray particles that fits the data well. Shaham and Piran think that this mismatch should be taken seriously. While different simulations yield slightly different results, the results are comparable and none of the codes fits the data. If it's not the simulation, the mismatch comes about either from the data or the physics. "There are three possible solutions to this puzzling situation. First, the observational data might be incorrect, or it is somehow dominated by poor statistics: these results are based on about 1500 events at the lowest energy bin and about 50 at the highest one. A mistake in the shower simulations is unlikely, as different simulations give comparable results. However, the simulations depend on the extrapolations of the proton cross sections from the measured energies to the TeV range of the UHECR collisions. It is possible that this extrapolation breaks down. In particular a larger cross section than the one extrapolated from low energies can explain the shorter penetration depth. This may indicates new physics that set in at energies of several dozen TeV." The authors are very careful not to jump to conclusions, and I won't either. To be convinced there is new physics to find here, I would first like to see a quantification of how bad the best fit from the models actually is. Unfortunately, there's no chi-square/dof in the paper that would allow such a quantification, and as illustrative as the figure above is, it's only one energy bin and might be a misleading visualization. I am also not at all sure that the different simulations are actually independent from each other. Since scientific communities exchange information rapidly and efficiently, there exists a risk of systematic bias even if several models are considered. 
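To see the width argument from above at work, here is a little toy sketch in Python (all numbers are invented for illustration; they are not the values used by Auger or by the shower simulations): mixing in a second, heavier component pulls the mean of the depth distribution to a shorter depth, but it also broadens the distribution, because the mixture inherits the scatter of both components plus the separation between their peaks.

```python
import numpy as np

# Toy model (illustrative numbers only): the penetration-depth distribution
# as a mixture of a proton component and an iron component. Heavier primaries
# interact earlier, so their component peaks at a smaller depth and is narrower.
rng = np.random.default_rng(1)

n_events = 100_000
proton_fraction = 0.6

xmax_p = rng.normal(loc=780.0, scale=60.0, size=int(n_events * proton_fraction))
xmax_fe = rng.normal(loc=700.0, scale=25.0, size=n_events - len(xmax_p))
mixture = np.concatenate([xmax_p, xmax_fe])

# The mixture mean moves to a shorter depth, as wanted...
print(f"proton-only: mean {xmax_p.mean():.0f}, width {xmax_p.std():.0f}")
print(f"mixture:     mean {mixture.mean():.0f}, width {mixture.std():.0f}")
# ...but the mixture width exceeds even the pure-proton width, although the
# iron component alone is much narrower. That is the tension with the data,
# which wants a shorter mean *and* a narrow distribution at the same time.
```

The units here would be g/cm², the usual measure of atmospheric depth, but the qualitative behavior (shorter mean, larger width) is generic for any mixture whose components peak at different depths.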
Possibly there's just some cross-section missing or wrong. Finally, there's nothing in the paper about how the penetration depth data is obtained to begin with. Since that's not a primary observable, there must be some modeling involved too, though I agree that this isn't a likely source of error. With these words of caution ahead, it is possible that we are looking here at the first evidence for physics beyond the standard model. Monday, April 23, 2012 Can we probe Planck-scale physics with quantum optics? You might have read about this some weeks ago on Chad Orzel's blog or at Ars Technica: Nature published a paper by Pikovski et al on the possibility of testing Planck scale physics with quantum optics. The paper is on the arXiv under arXiv:1111.1979 [quant-ph]. I left a comment at Chad's blog explaining that it is implausible that the proposed experiment will test any Planck scale effects. Since I am generally supportive of everybody who cares about quantum gravity phenomenology, I'd have left it at this, and been happy that Planck scale physics made it into Nature. But then I saw that Physics Today picked it up, and before this spreads further, here's an extended explanation of my skepticism. Igor Pikovski et al have proposed a test for Planck scale physics using recent advances in quantum optics. The framework they use is a modification of quantum mechanics, expressed by a deformation of the canonical commutation relation (typically of the form [x,p] = iℏ(1 + β p²), with β set by the Planck scale), that takes into account that the Planck length plays the role of a minimal length. This is one of the most promising routes to quantum gravity phenomenology, and I was excited to read the article. In their article, the authors claim that it is feasible to "probe the possible effects of quantum gravity in table-top quantum optics experiment" and that their proposal reaches a "hitherto unprecedented sensitivity in measuring Planck-scale deformations." The reason for this increased sensitivity to Planck-scale effects is, in the authors' own words, that "the deformations are enhanced in massive quantum systems." Unfortunately, this claim is not backed up by the literature the authors refer to. The underlying reason is that the article fails to address the question of Lorentz-invariance. The deformation used is not invariant under normal Lorentz-transformations. There are two ways to deal with that: either breaking Lorentz-invariance or deforming it. If it is broken, there exists a multitude of very strong constraints that would have to be taken into account and are not mentioned in the article. Presumably then the authors implicitly assume that Lorentz-symmetry is suitably deformed in order to keep the commutation relations invariant - and in order to test something actually new. This can in fact be done, but comes at a price. Now the momenta transform non-linearly. Consequently, a linear sum of momenta is no longer Lorentz-invariant. In the appendix however, the authors have used the normal sum of momenta to define the center-of-mass momentum. This is inconsistent. To maintain Lorentz-invariance, the modified sum must be used. This issue cannot be ignored for the following reason. If a suitably Lorentz-invariant sum is used, it contains higher-order terms. The relevance of these terms does indeed increase with the mass. This also means that the modification of the Lorentz-transformations becomes more relevant with the mass. 
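To see schematically why the sum matters, consider the generic leading-order form such a deformed composition law takes (this is an illustration of the type of modification discussed in the deformed special relativity literature, with α some dimensionless coefficient of order one; not necessarily the specific deformation the authors would need):

p ⊕ q = p + q + α (p q)/M_Pl + …

Summing N comparable momenta p then produces, on top of the linear term N p, a correction of order N² p²/M_Pl, so the relative size of the correction grows like N p/M_Pl: the more constituents, the larger the deviation. With the rescaling of M_Pl to N M_Pl discussed below, the relative correction drops back to roughly p/M_Pl per particle, which is what tames the effect for composite systems.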
Since this is a consequence of just summing up momenta, and has nothing in particular to do with the nature of the object that is being studied, the increasing relevance of corrections prevents one from reproducing a macroscopic limit that is in agreement with our knowledge of Special Relativity. This behavior of the sum, whose use, we recall, is necessary for Lorentz-invariance, is thus highly troublesome. This is known in the literature as the "soccer ball problem." It is not mentioned in the article. If the soccer-ball problem persists, the theory is in conflict with observation already. While several suggestions have been made for how this problem can be addressed in the theory, no agreement has been reached to date. A plausible and useful ad-hoc suggestion that has been made by Magueijo and Smolin is that the relevant mass scale, the Planck mass, for N particles is rescaled to N times the Planck mass. I.e., the scale where effects become large moves away when the number of particles increases. Now, that this ad-hoc solution is correct is not clear. What is clear however is that, if the theory makes sense at all, the effect must become less relevant for systems with many constituents. A suppression with the number of constituents is a natural expectation. If one takes into account that for sums of momenta the relevant scale is not the Planck mass, but N times the Planck mass, the effect the authors consider is suppressed by roughly a factor 10¹⁰. This means the existing bounds (for single particles) cannot be significantly improved in this way. This is the expectation that one can have from our best current understanding of the theory. This is not to say that the experiment should not be done. It is always good to test new parameter regions. And, who knows, all I just said could turn out to be wrong. But it does mean that based on our current knowledge, it is extremely unlikely that anything new is to be found there. And vice versa, if nothing new is found, this cannot be used to rule out a minimal length modification of quantum mechanics. (This is not the first time, btw, that somebody tried to exploit the fact that the deviations get larger with mass by using composite systems, thereby promoting a bug to a feature. In my recent review, I have a subsection dedicated to this.) Sunday, April 22, 2012 Experimental Search for Quantum Gravity 2012 It is my great pleasure to let you know that there will be a third conference on Experimental Search for Quantum Gravity, October 22 to 25 this year, at Perimeter Institute. (A summary of the ESQG 2007 is here, and a summary from 2010 is here.) Even better is that this time it wasn't my initiative but Astrid Eichhorn's, who is also to be credited for the theme "The hard facts." The third of the organizers is Lee Smolin, who has been of great help also with the last meeting. But most importantly, the website of the ESQG 2012 is here. We have an open registration with a moderate fee of CAN$ 115, which is mostly to cover catering expenses. There is a limit to the number of people we can accommodate, so if you are interested in attending, I recommend you register early. When the time comes, I'll tell you some more details about the meeting. Thursday, April 19, 2012 Schrödinger meets Newton In January, we discussed semi-classical gravity: Classical general relativity coupled to the expectation value of quantum fields. 
This theory is widely considered to be only an approximation to the still looked-for fundamental theory of quantum gravity, most importantly because the measurement process messes with energy conservation if one were to take it seriously, see the earlier post for details. However, one can take the point of view that whatever the theorists think is plausible or not should still be experimentally tested. Maybe the semi-classical theory does in fact correctly describe the way a quantum wave-function creates a gravitational field; maybe gravity really is classical and the semi-classical limit exact, and we just don't understand the measurement process. So what effects would such a funny coupling between the classical and the quantum theory have? Luckily, to find out it isn't really necessary to work with full general relativity; one can instead work with Newtonian gravity. That simplifies the issue dramatically. In this limit, the equation of interest is known as the Schrödinger-Newton equation. It is the Schrödinger-equation with a potential term, and the potential term is the gravitational field of a mass distributed according to the probability density of the wave-function. This looks like this:

iℏ ∂ψ(r,t)/∂t = [ −ℏ²/(2m) ∇² − G m² ∫ |ψ(r′,t)|²/|r − r′| d³r′ ] ψ(r,t)

Inserting a potential that depends on the expectation value of the wave-function makes the Schrödinger-equation non-linear and changes its properties. The gravitational interaction is always attractive and thus tends to contract pressureless matter distributions. One expects this effect to show up here by contracting the wave-packet. Now the usual non-relativistic Schrödinger equation results in a dispersion for massive particles, so that an initially focused wave-function spreads with time. The gravitational self-coupling in the Schrödinger-Newton equation acts against this spread. Which one wins, the spread from the dispersion or the gravitational attraction, depends on the initial values. However, the gravitational interaction is very weak, and so is the effect. For typical systems in which we study quantum effects, either the mass is not large enough for a collapse, or the typical time for it to take place is too long. Or so you are led to think if you make some analytical estimates. The details are left to a numerical study though, because the non-linearity of the Schrödinger-Newton equation spoils the attempt to find analytical solutions. And so, in 2006 Carlip and Salzmann surprised the world by claiming that according to their numerical results, the contraction caused by the Schrödinger-Newton equation might be possible to observe in molecule interferometry, many orders of magnitude off the analytical estimate. It took five years until a check of their numerical results came out, and then two papers were published almost simultaneously: • Schrödinger-Newton "collapse" of the wave function J. R. van Meter arXiv:1105.1579 [quant-ph] • Gravitationally induced inhibitions of dispersion according to the Schrödinger-Newton Equation Domenico Giulini and André Großardt arXiv:1105.1921 [gr-qc] They showed independently that Carlip and Salzmann's earlier numerical study was flawed and that the accurate numerical result fits the analytical estimate very well. Thus, the good news is one understands what's going on. The bad news is, it's about 5 orders of magnitude off today's experimental possibilities. But that's in an area of physics where progress is presently rapid, so it's not hopeless! It is interesting what this equation does, so let me summarize the findings from the new numerical investigation. 
These studies, I should add, have been done by looking at the spread of a spherically symmetric Gaussian wave-packet. The most interesting features are: • For masses smaller than some critical value, m ≲ (ℏ²/(G σ))^(1/3), where σ is the width of the initial wave-packet, the entire wave-packet expands indefinitely. • For masses larger than that critical value, the wave-packet fragments and a fraction of the probability propagates outwards to infinity, while the rest remains localized in a finite region. • Of the cases that eventually collapse, the lighter ones expand initially and then contract, while the heavier ones contract immediately. • The remnant wave function approaches a stationary state, about which it performs damped oscillations. That the Schrödinger-Newton equation leads to a continuous collapse might lead one to think it could play a role in the collapse of the wave-function, an idea that was suggested already in 1984 by Lajos Diosi. However, this interpretation is questionable because it became clear later that the gravitational collapse one finds here isn't suitable to be interpreted as a wave-function collapse to an eigenstate. For example, in this 2002 paper, it was found that two bumps of probability density, separated by some distance, will fall towards each other and meet in the middle, rather than focus on one of the two initial positions as one would expect for a wave-function collapse. Monday, April 16, 2012 The hunt for the first exoplanet The little prince Today, extrasolar planets, or exoplanets for short, are all over the news. Hundreds are known, and they are cataloged in The Extrasolar Planets Encyclopaedia, accessible for everyone who is interested. Some of these extrasolar planets orbit a star in what is believed to be a habitable zone, fertile ground for the evolution of life. Planetary systems much like ours have turned out to be much more common results of stellar formation than had been expected. But the scientific road to this discovery has been bumpy. Once one knows that the stars on the night sky are suns like our own, it doesn't take a big leap of imagination to think that they might be accompanied by planets. Observational evidence for exoplanets was looked for already in the 19th century, but the field had a bad start. Beginning in the 1950s, several candidates for exoplanets made it into the popular press, yet they turned out to be data flukes. At that time, the experimental method used relied on detecting minuscule changes in the motion of the star caused by a heavy planet of Jupiter type. If you recall the two-body problem from 1st semester: It's not that one body orbits the other; they both orbit around their common center-of-mass, it's just that, if one body is much heavier than the other, it might almost look like the lighter one is orbiting the heavier one. But if a sufficiently heavy planet orbits a star, one might in principle find out by watching the star very closely, because it wobbles around the center-of-mass. In the 50s, watching the star closely meant watching its distance to other stellar objects. The precision which could be achieved this way simply wasn't sufficient to reliably tell the presence of a planet. In the early 80s, Gordon Walker and his postdoc Bruce Campbell from British Columbia, Canada, pioneered a new technique that improved the precision by which the motion of the star could be tracked by two orders of magnitude. 
Their new technique relied on measuring the star's absorption lines, whose frequency depends on the motion of the star relative to us because of the Doppler effect. To make that method work, Walker and Campbell had to find a way to precisely compare spectral images taken at different times, so they'd know how much the spectrum had shifted. They found an ingenious solution: They used the very regular and well-known molecular absorption lines of hydrogen fluoride gas. The comb-like absorption lines of hydrogen fluoride served as a ruler relative to which they could measure the star's spectrum, allowing them to detect even the smallest changes. Then, together with astronomer Stephenson Yang, they started looking at candidate stars which might be accompanied by Jupiter-like planets. To detect the motion of the star due to the planet, they would have to record the system for several completed orbits. Our planet Jupiter needs about 12 years to orbit the sun, so they were in for a long-term project. Unfortunately, they had a hard time finding support for their research. In his recollection “The First High-Precision Radial Velocity Search for Extra-Solar Planets” (arXiv:0812.3169), Gordon Walker recounts that it was difficult to get time for their project at observatories: “Since extra-solar planets were expected to resemble Jupiter in both mass and orbit, we were awarded only three or four two-night observing runs each year.” And though it is difficult to understand today, back then many of Walker's astronomer colleagues thought the search for exoplanets a waste of time. Walker writes: “It is quite hard nowadays to realise the atmosphere of skepticism and indifference in the 1980s to proposed searches for extra-solar planets. Some people felt that such an undertaking was not even a legitimate part of astronomy. It was against such a background that we began our precise radial velocity survey of certain bright solar-type stars in 1980 at the Canada France Hawaii 3.6-m Telescope.” After years of data taking, they had identified several promising candidates, but were too cautious to claim a discovery. At the 1987 meeting of the American Astronomical Society in Vancouver, Campbell announced their preliminary results. The press happily reported yet another discovery of an exoplanet, but the astronomers regarded even Walker and Campbell's cautious interpretation of the data with great skepticism. In his article “Lost world: How Canada missed its moment of glory,” Jacob Berkowitz describes the reaction of Walker and Campbell's colleagues: “[Campbell]'s professional colleagues weren't as impressed [as the press]. One astronomer told The New York Times he wouldn't call anything a planet until he could walk on it. No one even attempted to confirm the results.” Walker's gifted postdoc Bruce Campbell suffered most from the slow-going project that lacked appreciation and had difficulties getting continued funding. In 1991, after more than a decade of data taking, they still had no discovery to show for it. Campbell meanwhile had reached age 42, and was still sitting on a position that was untenured, not even tenure-track. Campbell's frustration built up to the point where he quit his job. When he left, he erased all the analyzed data in his university account. Luckily, his (both tenured) collaborators Walker and Yang could recover the data. Campbell made a radical career change and became a personal tax consultant. 
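It is worth pausing to appreciate just how small the signal is that the team spent a decade chasing. A back-of-the-envelope estimate (my own numbers, not taken from Walker's recollection) of the reflex motion a Jupiter-like planet imposes on a Sun-like star:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30    # one solar mass, kg
m_planet = 1.898e27  # one Jupiter mass, kg
a = 7.785e11         # Jupiter's orbital radius, m

# Keplerian orbital speed of the planet (circular orbit approximation).
v_planet = math.sqrt(G * M_star / a)

# Momentum balance around the common center-of-mass: the star moves
# slower than the planet by the ratio of the masses.
v_star = v_planet * m_planet / M_star

print(f"planet orbital speed: {v_planet:.0f} m/s")  # roughly 13,000 m/s
print(f"stellar reflex speed: {v_star:.1f} m/s")    # roughly 12-13 m/s
```

A wobble of a dozen meters per second, read off a stellar spectrum, sustained over a twelve-year orbit: that is the needle Walker, Campbell and Yang were looking for, and it explains why a ruler as fine as the hydrogen fluoride lines was needed in the first place.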
But in late 1991, Walker and Yang were finally almost certain to have found sufficient evidence of an exoplanet around the star gamma Cephei, whose spectrum showed a consistent 2.5 year wobble. In a fateful coincidence, just when Walker thought they had pinned it down, one of his colleagues, Jaymie Matthews, came by his office, looked at the data and pointed out that the wobble in the data coincided with what appeared to be periods of heightened activity on the star's surface. Walker looked at the data with new eyes and, mistakenly, came to believe that they had all along been watching an oscillating star rather than a periodic motion of the star's position. Shortly after that, in early 1992, Nature reported the first confirmed discovery of an exoplanet by Wolszczan and Frail, based in the USA. Yet the planet they found orbits a millisecond pulsar (a neutron star), so for many the discovery doesn't score highly, because the star's collapse would have wiped out all life in that planetary system long ago. In 1995 then, astronomers Mayor and Queloz of the University of Geneva announced the first definitive observational evidence for an exoplanet orbiting a normal star. The planet has an orbital period of only a few days, so no decade-long recording was necessary. It wasn't until 2003 that the planet that Walker, Campbell and Yang had been after was finally confirmed. There are three messages to take away from this story. First, Berkowitz in his article points out that Canada failed to have faith in Walker and Campbell's research at a time when just a little more support would have made them the first to discover an exoplanet. Funding for long-term projects is difficult to obtain, and it's even more difficult if the project doesn't produce results before it's really done. That can be an unfortunate hurdle for discoveries. Second, it is in hindsight difficult to understand why Walker and Campbell's colleagues were so unsupportive. Nobody ever really doubted that exoplanets exist, and with the precision of measurements in astronomy steadily increasing, sooner or later somebody would be able to find statistically significant evidence. It seems that a few initial false claims had a very unfortunate backlash that exceeded the reasonable. Third, in the forest of complaints about lacking funding for basic research, especially for long-term projects, every tree is a personal tragedy. Saturday, April 14, 2012 Book review: “How to Teach Relativity to Your Dog” by Chad Orzel How to Teach Relativity to Your Dog By Chad Orzel Basic Books (February 28, 2012) Let me start with three disclaimers: First, I didn’t buy the book, I got a free copy from the publisher. Second, this is the second of Chad Orzel’s dog physics books and I didn’t read the first. Third, I’m not a dog person. Chad Orzel from Uncertain Principles is a professor of physics at Union College and the best known fact about him is that he talks to his dog, Emmy. Emmy is the type of dog large enough to sniff your genitals without clawing into your thighs, which I think counts in her favor. That Chad talks to his dog is of course not the interesting part. I mean, I talk to my plants, but who cares? (How to teach hydrodynamics to your ficus.) But Chad imagines his dog talks back, and so the book contains conversations between Emmy and Chad about physics. 
In this book, Chad covers the most important aspects of special and general relativity: time dilation and length contraction, space-time diagrams, relativistic four-momentum, the equivalence principle, space-time curvature, the expansion of the universe and big bang theory. Emmy and Chad however go beyond that by introducing the reader also to the essentials of black holes, high energy particle collisions, the standard model of particle physics and Feynman diagrams. They even add a few words on grand unification and quantum gravity. The physics explanations are very well done, and there are many references to recent observations and experiments, so the reader is not left with the impression that all this is last century’s stuff. The book contains many helpful figures and even a few equations. It also comes with a glossary and a guide to further reading. Emmy’s role in the book is to engage Chad in a conversation. These dialogues are very well suited to introduce unfamiliar subjects because they offer a natural way to ask and answer questions, and Chad uses them masterfully. Besides Emmy the dog, the reader also meets Nero the cat, and there are a lot of squirrels involved too. The book is written very well, in unique do..., oops, Orzel-style, with a light sense of humor. It is difficult for me to judge this book. I must have read dozens of popular science introductions to special and general relativity, but most of them 20 years ago. Chad explains very well, but then all the dog stuff takes up a lot of space (the book has 300 pages), and if you are, like me, not really into dogs, the novelty wears off pretty fast and what’s left are lots of squirrels. I did however learn something from this book, for example that dogs eat cheese, which was news to me. I also learned that Emmy is part German shepherd and thus knows the word “Gedankenexperiment,” though Stefan complains that she doesn’t know the difference between genitive and dative. In summary, Chad Orzel’s book “How to Teach Relativity to Your Dog” is a flawless popular science book that gets across a lot of physics in an entertaining way. If you always wanted to know what special and general relativity are all about and why they matter, this is a good starting point. I’d give this book 5 out of 5 tail wags. Thursday, April 12, 2012 Some physics-themed ngram trends Tuesday, April 10, 2012 Be careful what you wish for Michael Nielsen in his book “Reinventing Discovery” relates the following anecdote from the history of science. In the year 1610, Galileo discovered that the planet Saturn, the most distant then known planet, had a peculiar shape. Galileo’s telescope was not good enough to resolve Saturn’s rings, but he saw two bumps on either side of the main disk. To make sure this discovery would be credited to him, while still leaving him time to do more observations, Galileo followed a procedure common at the time: He sent the announcement of the discovery to his colleagues in the form of an anagram. This way, Galileo could avoid revealing his discovery, but would still be able to later claim credit by solving the anagram, which meant “Altissimum planetam tergeminum observavi,” Latin for “I observed the highest of the planets to be three-formed.” Among Galileo’s colleagues who received the anagram was Johannes Kepler. Kepler had at this time developed a “theory” according to which the number of moons per planet must follow a certain pattern. 
Since Earth has one moon and four of Jupiter’s moons were known, Kepler concluded that Mars, the planet between Earth and Jupiter, must have two moons. He worked hard to decipher Galileo’s anagram and came up with “Salve umbistineum geminatum Martia proles,” Latin for “Be greeted, double knob, children of Mars,” though one letter remained unused. Kepler interpreted this as meaning Galileo had seen the two moons of Mars, and thereby confirmed Kepler’s theory. Psychologists call this effort which the human mind makes to brighten the facts “motivated cognition,” more commonly known as “wishful thinking.” Strictly speaking, the literature distinguishes the two in that wishful thinking is about the outcome of a future event, while motivated cognition is concerned with partly unknown facts. Wishful thinking is an overestimate of the probability that a future event has a desirable outcome, for example that the dice will all show six. Motivated cognition is an overly optimistic judgment of a situation with unknowns, for example that you’ll find a free spot in a garage whose automatic counter says “occupied,” or that you’ll find the keys under the streetlight. There have been many small-scale psychology experiments showing that most people are prone to overestimate a lucky outcome (see eg here for a summary), even if they know the odds, which is why motivated cognition is known as a “cognitive bias.” It’s an evolutionarily developed way to look at the world that however doesn’t lead one to an accurate picture of reality. Another well-established cognitive bias is the overconfidence bias, which comes in various expressions, the most striking one being “illusory superiority”. To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson which found that 93% of US American drivers rate themselves as better than average. The best known bias is maybe confirmation bias, which leads one to unconsciously pay more attention to information confirming already held beliefs than to information contradicting them. And a bias that got a lot of attention after the 2008 financial crisis is “loss aversion,” characterized by the perception of a loss being more relevant than a comparable gain, which is why people are willing to tolerate high risks just to avoid a loss. It is important to keep in mind that these cognitive biases serve a psychologically beneficial purpose. They allow us to maintain hope in difficult situations and a positive self-image. That we have these cognitive biases doesn’t mean there’s something wrong with our brain. On the contrary, they’re helpful to its normal operation. However, scientific research seeks to unravel the truth, which isn’t the brain’s normal mode of operation. Therefore scientists learn elaborate techniques to triple-check each and every conclusion. This is why we have measures for statistical significance, control experiments and double-blind trials. Despite that, I suspect that cognitive biases still influence scientific research and hinder our truth-seeking efforts, because we can’t peer review scientists’ motivations, and we’re all alone inside our heads. And so the researcher who tries to save his model by continuously adding new features might misjudge the odds of being successful due to loss aversion. 
The researcher who meticulously keeps track of advances of the theory he works on himself, but only focuses on the problems of rival approaches, might be subject to confirmation bias, skewing his own and other people’s evaluation of progress and promise. The researcher who believes that his prediction is always just on the edge of being observed is a candidate for motivated cognition. And above all that, there’s the cognitive meta-bias, the bias blind spot: I can’t possibly be biased. Scott Lilienfeld in his SciAm article “Fudge Factor” argued that scientists are particularly prone to confirmation bias. As a scientist, I regard my brain as the toolbox for my daily work, and so I am trying to learn what can be done about its shortcomings. It is to some extent possible to work on a known bias by rationalizing it: by consciously seeking out the information that might challenge one's beliefs, asking a colleague for a second opinion on whether a model is worth investing more time in, daring to admit to being wrong. And despite all that, not to forget the hopes and dreams. Mars, btw, does to our best current knowledge indeed have two moons. Sunday, April 08, 2012 Happy Easter! Stefan honors the Easter tradition by coloring eggs every year. The equipment for this procedure is stored in a cardboard shoe-box labeled "Ostern" (Easter). The shoe-box dates back to the 1950s and once contained a pair of shoes produced according to the newest orthopedic research. I had never paid much attention to the shoe-box, but as Stefan pointed out to me this year, back then the perfect fit was sought by x-raying the foot inside the shoe. The lid of the box contains an advertisement for this procedure, which was apparently quite common for a while. Click to enlarge. Well, they don't x-ray your feet in the shoe stores anymore, but Easter still requires coloring the eggs. And here they are: Happy Easter everybody! Friday, April 06, 2012 Book Review: "The Quest for the Cure" by B.R. Stockwell The Quest for the Cure: The Science and Stories Behind the Next Generation of Medicines By Brent R. Stockwell Columbia University Press (June 1, 2011) As a particle physicist, I am always amazed when I read about recent advances in biochemistry. As far as I am concerned, the human body is made of ups and downs and electrons, kept together by photons and gluons - and that's pretty much it. But in biochemistry, they have all these educated sounding words. They have enzymes and aminoacids, they have proteases, peptides and kinases. They have a lot of proteins, and molecules with fancy names used to drug them. And these things do stuff. Like break up and fold and bind together. All these fancy sounding things and their interactions are what makes your body work; they decide over your health and your demise. With all that foreign terminology however, I've found it difficult to impossible to read any paper on the topic. In most cases, I don't even understand the title. If I make an effort, I have to look up every second word. I do just fine with the popular science accounts, but these always leave me wondering: just how do they know this molecule does this, and how do they know this protein breaks there, fits there, and that causes cancer and that blocks some cell-function? What are the techniques they use and how do they work? When I came across Stockwell's book "The Quest for the Cure" I thought it would help me solve some of these mysteries. Stockwell himself is a professor of biology and chemistry at Columbia University. 
He's a guy with many well-cited papers. He knows words like oligonucleotides and is happy to tell you how to pronounce them: oh-lig-oh-NOOK-lee-oh-tide. Phosphodiesterase: FOS-foh-dai-ESS-ter-ays. Nicotinonitrile: NIH-koh-tin-oh-NIH-trayl. Erythropoietin: eh-REETH-roh-POIY-oh-ten. As a non-native speaker I want to complain that this pronunciation help isn't of much use for a non-phonetic language; I can think of at least three ways to pronounce the syllable "lig." But then that's not what I bought the book for anyway. The starting point of "The Quest for the Cure" is a graph showing the drop in drug approvals since 1995. Stockwell sets out to first explain what is the origin of this trend and then what can be done about it. In a nutshell, the issue is that many diseases are caused by proteins which are today considered "undruggable," which means they are folded in a way that small molecules, the kind suitable for creating drugs, can't bind to the proteins' surfaces. Unfortunately, it's only a small number of proteins that can be targeted by presently known drugs: "Here is the surprising fact: All of the 20,000 or so drug products that ever have been approved by the U.S. Food and Drug Administration interact with just 2% of the proteins found in human cells." And fewer than 15% are considered druggable at all. Stockwell covers a lot of ground in his book, from the early days of genetics and chemistry to today's frontier of research. The first part of the book, in which he lays out the problem of the undruggable proteins, is very accessible and well-written. Evidently, a lot of thought went into it. It comes with stories of researchers and patients who were treated with new drugs, and how our understanding of diseases has improved. In the first chapters, every word is meticulously explained or technical terms are avoided to the level that "taken orally" has been replaced by "taken by mouth." Unfortunately, the style deteriorates somewhat thereafter. To give you an impression, it starts reading more like this: "Although sorafenib was discovered and developed as an inhibitor of RAF, because of the similarity of many kinases, it also inhibits several other kinases, including the platelet-derived growth factor, the vascular endothelial growth factor (VEGF) receptors 2 and 3, and the c-KIT receptor." Now the book contains a glossary, but it's incomplete (eg it contains neither VEGF nor c-KIT). With the large amount of technical vocabulary, at some point it doesn't matter anymore if a word was introduced, because if it's not something you deal with every day it's difficult to keep in mind the names of all sorts of drugs and molecules. It gets worse if you put down the book for a day or two. This doesn't contribute to the readability of the book and is somewhat annoying when you realize that much of the terminology is never used again and one doesn't really know why it was necessary to begin with. The second part of the book deals with the possibilities to overcome the problem of the undruggable molecules. In that part of the book, the stories of researchers curing patients are replaced with stories of the pharmaceutical industry, the start-up of companies and the ups and downs of their stock prices. Stockwell's explanations left me wanting exactly on the points that I would have been interested in. He writes for example a few pages about nuclear magnetic resonance and that it's routinely used to obtain high resolution 3-d pictures of small proteins. 
One does not, however, learn how this is actually done, other than that it requires "complicated magnetic manipulations" and "extremely sophisticated NMR methods." He spends a paragraph and an image on light-directed synthesis of peptides that is vague at best, and one learns that peptides can be "stapled" together, which improves their stability, yet one has no clue how this is done. Now the book is extremely well referenced, and I could probably go and read the respective papers in Science. But then I would have hoped that Stockwell's book would save me exactly this effort. On the upside, Stockwell does an amazingly good job communicating the relevance of basic research and the scientific method, and in my opinion this makes up for the above shortcomings. He tells stories of unexpected breakthroughs that came about by little more than coincidence, he writes about the relevance of negative results and control experiments, and how scientific research works: "There is a popular notion about new ideas in science springing forth from a great mind fully formed in a dazzling eureka moment. In my experience this is not accurate. There are certainly sudden insights and ideas that appear to you from time to time. Many times, of course, a little further thought makes you realize it is really an absolutely terrible idea... But even when you have an exciting new idea, it begins as a raw, unprocessed idea. Some digging around in the literature will allow you to see what has been done before, and whether this idea is novel and likely to work. If the idea survives this stage, it is still full of problems and flaws, in both the content and the style of presenting it. However, the real processing comes from discussing the idea, informally at first... Then, as it is presented in seminars, each audience gives a series of comments, suggestions, and questions that help mold the idea into a better, sharper, and more robust proposal. Finally, there is the ultimate process of submission for publication, review and revision, and finally acceptance... The scientific process is a social process, where you refine your ideas through repeated discussions and presentations." He also writes in moderate doses about his own research and experience with the pharmaceutical industry. The proposals Stockwell makes for how to deal with the undruggable proteins have a solid basis in today's research. He isn't offering dreams or miracle cures, but points out hopeful recent developments, for example how it might be possible to use larger molecules. The problem with large molecules is that they tend to be less stable and don't enter cells readily, but he quotes research that shows possibilities to overcome this problem. He also explains the concept of a "privileged structure," structures that, with slight alterations, have been found to bind to several proteins. Using such privileged structures might allow one to sort through a vast parameter space of possible molecules with a higher success rate. He also talks about using naturally occurring structures and the difficulties with that. He ends his book by emphasizing the need for more research on this important problem of the undruggable proteins. In summary: "The Quest for the Cure" is a well-written book, but it contains too many technical expressions, and in many places scientific explanations are vague or lacking. It comes with some figures which are very helpful, but there could have been more.
You don't need to read the blurb to figure out that the author isn't a science writer but a researcher. I guess he's done his best, but I also think his editor should have drastically sorted out the vocabulary or at least have insisted on a more complete glossary. Stockwell makes up for this overdose of biochemistry lingo by communicating very well the relevance of basic research and the power of the scientific method. I'd give this book four out of five stars, because I appreciate that Stockwell has taken the time to write it to begin with.

Wednesday, April 04, 2012
On the importance of being wrong
Some years ago, I attended a seminar by a young postdoc who spoke about an extension of the standard model of particle physics. Known as “physics beyond the standard model,” this is a research area where theory is presently way ahead of experiment. In the hope of hitting something by shooting in the dark, theorists add stuff that we haven’t seen to the stuff we know, and then explain why we haven’t seen the additional stuff – but might see it with some experiment which is about to deliver results. That is, the theorists tell experimentalists where to look. Due to the lack of observational evidence, the main guide in this research area is mathematical consistency combined with intuition. This type of research is absolutely necessary to make progress in the present situation, but it’s also very risky. Most of the models considered today will turn out to be wrong. The content of the seminar wasn’t very memorable. The reason I still recall it is that, after the last slide had flashed by, somebody asked what the motivation was to consider this extension of the standard model, to which the speaker replied “There is none, except that it can be done.” This is a remarkably honest answer, especially since it came from a young researcher who still had ahead of him the torturous road to tenure. You don’t have to look far in the blogosphere or on Amazon to find unsolicited advice for researchers on how to sell themselves. There now exist coaching services for scientists, and some people make money writing books about “Marketing for Scientists.” None of them recommends that when you’ve come to the conclusion that a theory you looked at wasn’t as interesting as you might have thought, you go and actually say that. Heaven forbid: You’re supposed to be excited about the interesting results. You were right all along that the result would be important. And there are lots of reasons given for why this is the one and only right thing to do. You have gained great insights from your research that are relevant for the future of mankind, at least, if not for all mankinds in all multiverses. It’s advice well meant. It’s advice for how to reach your presumed personal goal of landing a permanent position in academia, taking into account the present mindset of your older peers. It is not advice for how to best benefit scientific research in the long run. In fact, unfortunately, the two goals can be in conflict. Of course any researcher should first and foremost work on something interesting, well motivated, and something that will deliver exciting results! But most often it doesn’t work as you wish it should. To help move science forward, the conclusion that the road you’ve been on doesn’t seem too promising should be published, to prevent others from following you into a dead end, or at least to tell them where the walls are. Say it, and start something new. It’s also important for your personal development.
If you advertise your unexciting research as the greatest thing ever, you might eventually come to believe it and waste your whole life on it. The reason nobody advises you to say your research project (which might not even have been your own choice) is unexciting is that it’s difficult if not impossible to publish a theoretical paper that examines an approach just to come to the conclusion that it’s not a particularly convincing description of nature. The problem with publishing negative results might be familiar to you from medicine, but it exists in theoretical physics as well. Even if you get it published, and even if it’s useful in saving others the time and work that you have invested, it will not create a research area and it’s unlikely to become well-cited. If that’s all you think matters, then as far as your career is concerned it would indeed be a waste of your time. So they are arguably right about their career advice. But as a scientist your task is to advance our understanding of nature, even if that means concluding you’ve wasted your time – and telling others about it. If you make everybody believe in the excitement of an implausible model, you risk getting stuck on a topic you don’t believe in. And, if you’re really successful, you get others stuck on it too. Congratulations. This unexciting seminar speaker some years ago, and my own yawn, made me realize that we don’t sufficiently value those who say: “I tried this and it was a mistake. I thought it was exciting, but I was wrong.” Basic research is a gamble. Failure is normal and being wrong is important.

Monday, April 02, 2012
In the past month, Lara and Gloria have learned to learn. They try to copy and repeat everything we do. Lara surprised me by grabbing a brush and pulling it through her hair, and Gloria, still short on hair, tries to put on her shoes. They haven't yet learned to eat with a spoon, but they've tried to feed us. They both understand simple sentences. If I ask where the second shoe is, they'll go and get it. If I tell them lunch is ready, they'll both come running and try to push the high chairs towards the table. If we tell them we'll go for a walk, they run to the door. If we so much as mention cookies, they'll point at the bag and insist on having one. Lara is still the more reserved one of the two. Faced with something new, she'll first watch from a distance. Gloria has no such hesitations. Last week, I childproofed the balcony. Lara, who was up first, saw the open door and froze. She stood motionless, staring at the balcony for a full 10 minutes. Then Gloria woke up, came running while yelling "Da, da" - and stumbled over the door sill, landing on her belly. Lara then followed her, very carefully. Now that spring is coming and the girls are walking well, we've been to the playground several times. Initially Lara and Gloria just sat there, staring at the other children. But by now they have both made some contact with other children, though not without looking at me every other minute to see if I approve. Gloria, as you can guess, is the more social one. She'll walk around with her big red bucket and offer it to others, smiling brightly. She's 15 months and has at least 3 admirers already, all older boys who give her toys, help her to walk, or even carry her around. (The boys too look at me every other minute to see if I approve.) Lara and I, we watch our little social butterfly, and build sand castles. From my perspective, the playground is a new arena too.
Weekdays, the adult population is exclusively female and comes in two layers of generations, either the mothers or the grandmothers. They talk about their children and pretty much nothing but their children, unless you want to count pregnancies separately. After some initial mistakes, I now bring a book, paper, or a magazine with me to hide behind. Another piece of news from the past month is that I finally finished the review on the minimal length in quantum gravity that I've been working on since last year. It's now on the arXiv. The first 10 pages should be understandable for pretty much everybody, and the first half should be accessible also for undergraduates. So if you were wondering what I'm doing these days besides running after my daughters, have a look at my review.

Sunday, April 01, 2012
Computer Scientists develop Software for Virtual Member of Congress
A group of computer scientists from Rutgers University has published software intended for crowd-sourcing the ideal candidate. "We were asking ourselves: Why do we waste so much time with candidates who disagree with themselves, aren't able to recall their party's program, and whose intellectual output is inferior even to Shit Siri Says?" recalls Arthur McTrevor, who led the project. "Today, we have software that can perform better." McTrevor and his colleagues then started coding what they refer to as the "unopinionated artificial intelligence" of the virtual representative, the main information processing unit. The unopinionated intelligence is a virtual skeleton which comes alive by crowd-sourcing opinions from a selected group of people, for example party members. Members feed the software with opinions, which are then aggregated and reformulated to minimize objectionable statements. The result: the perfect candidate. The virtual candidate also has a sophisticated speech assembly program, a pleasant-looking face, and a fabricated private life. Visual and auditory appearance can be customized. The virtual candidate has a complete and infallible command of the constitution and all published statistical data, and can reproduce quotations from memorable speeches and influential books in the blink of an eye. "80 microseconds, actually," said McTrevor. The software moreover automatically creates and feeds its own Facebook account and Twitter feed. The group from Rutgers tested the virtual representative in a trial run whose success is reported in a recent issue of Nature. In their publication, the authors point out that the virtual representative is not a referendum that aggregates the opinions of the general electorate. Rather, it serves to help a selected group find and focus its identity, which can then be presented for election. In an email conversation, McTrevor was quick to point out that the virtual candidate is made in the USA, with its patent dated 2012. The candidate will thus be eligible to run for Congress at the "age" of 25, in 2037.
Bloch's Theorem is one of the foundations of solid-state physics. It states the following: Given a periodic potential V(r), the solutions to the time-independent Schrödinger equation are of the form Ψ(r) = e^(ik·r) u(r), where u(r) has the same periodicity as V(r). Since the position of atoms in perfect crystals is periodic (neglecting thermal vibrations--see phonon), the potential in a crystal is periodic as well. Therefore the electron wavefunctions in a crystal obey Bloch's Theorem and are sometimes called Bloch functions. The vectors k, called Bloch wavevectors, are of great importance. The vectors k are said to belong to the reciprocal lattice space, or k-space, of a crystal.
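To make the theorem concrete, here is a minimal numerical sketch (my own illustration, not part of the entry above; the potential, the units ħ = m = 1 and the cutoff N are all arbitrary choices for the sketch). For a 1D potential V(x) = 2 V0 cos(2πx/a), the periodic part u(x) can be expanded in plane waves e^(iGx) with reciprocal lattice vectors G = 2πn/a; diagonalizing the resulting Hamiltonian for each Bloch wavevector k then gives the energy bands.

    import numpy as np

    a = 1.0          # lattice constant
    V0 = 2.0         # strength of the periodic potential (arbitrary units)
    N = 5            # keep plane waves with G = 2*pi*n/a for n = -N..N
    G = 2 * np.pi * np.arange(-N, N + 1) / a

    def bands(k, n_bands=3):
        # Kinetic part is diagonal in the plane-wave basis: (k+G)^2 / 2.
        H = np.diag(0.5 * (k + G) ** 2)
        # The cosine potential couples plane waves whose G's differ by 2*pi/a.
        for i in range(len(G) - 1):
            H[i, i + 1] = H[i + 1, i] = V0
        return np.linalg.eigvalsh(H)[:n_bands]

    ks = np.linspace(-np.pi / a, np.pi / a, 101)   # first Brillouin zone
    E = np.array([bands(k) for k in ks])
    print("band gap at zone edge:", E[0, 1] - E[0, 0])

In the weak-potential limit the first gap at the zone edge comes out close to 2 V0, the textbook nearly-free-electron result; the eigenvectors are the Fourier coefficients of the periodic function u(x) in the Bloch form above.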
Quantum Mechanics Answers

If the same force is applied to a ping-pong ball and to a tennis ball that are initially at rest, which ball will move faster? (Assume that there is no friction between the ball and the table.) Explain.

If σx, σy, and σz are the three components of the Pauli spin matrix σ, show that [σx, σy] = 2iσz and [σy, σz] = 2iσx.

If quantum physics formulas are not applicable at subatomic levels, why do we still use them for deriving relations at the subatomic level? For example, if the equations of motion are not applicable at the subatomic level, why do we use them to explain the path of an electron moving perpendicular to an electric field?

A boy stands at the centre of a turntable with his two arms stretched. The turntable is set rotating with an angular speed of 40 r.p.m. How much is the angular speed of the boy if he folds his hands back and thereby reduces his moment of inertia to 2/5 times the initial value? Assume the turntable rotates without friction.

What is meant by a well-behaved function? Illustrate with the help of a suitable diagram.

Consider the potential V(x) = V0. Write down the solutions to the 1D time-independent Schrödinger equation when E > V0 and when E < V0.

A particle has the wave function ψ(r) = N e^(−ar), where N is a normalization factor and a is a known real parameter. Calculate the probability of finding the particle in the region r > Δr.

How can we calculate the value of the normalization constant when the electron wave function is given with the range

In an ideal 3:17 step-up transformer, the primary power is 34 kW and the secondary current is 30 A. The primary voltage is: A. 6.25 V B. 200 V C. 1133.3 V D. 6422.2 V E. 51 V

In a region of space a particle of mass m has the wavefunction Ψ(x) = N x e^(−αx) for x > 0 and Ψ(x) = 0 for x < 0, where α is a positive constant. Calculate: i) the normalization constant N; ii) the potential energy of the particle if the total energy of the particle is zero.
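As an illustration of how the normalization questions above are typically handled, here is a minimal SymPy sketch for the last wavefunction, Ψ(x) = N x e^(−αx) for x > 0 and 0 for x < 0: set the integral of |Ψ|² over (0, ∞) equal to 1 and solve for N. (This is my own sketch, not part of the page above.)

    import sympy as sp

    x, alpha, N = sp.symbols('x alpha N', positive=True)
    psi = N * x * sp.exp(-alpha * x)          # Psi = 0 for x < 0 by assumption

    norm = sp.integrate(psi**2, (x, 0, sp.oo))  # evaluates to N**2 / (4*alpha**3)
    Nval = sp.solve(sp.Eq(norm, 1), N)[0]       # -> 2*alpha**(3/2)
    print(Nval)

The integral evaluates to N²/(4α³), so the normalization constant is N = 2α^(3/2).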
Unsolved Mysteries Of Science: Quantum Mechanics And The Paradoxes Of Our Physical Reality

Quantum mechanics is considered one of the greatest theories of physics ever produced by man, alongside general relativity, because all its predictions have been confirmed with great precision and it accurately describes events in the atomic world. Despite its success, it poses mysteries that science cannot resolve to this day, especially concerning the nature of our physical reality, and these stem from how we interpret the theory from a physical perspective. At the heart of quantum mechanics is its probabilistic nature, which suggests that the atomic/quantum world is inherently nondeterministic - unpredictable. But we know that the classical/macroscopic world is deterministic - predictable - and this presents a puzzling question: what is the nature of reality as a whole? Is it deterministic or nondeterministic? Some scientists and philosophers, including early founders of quantum physics such as Albert Einstein and Erwin Schrödinger, doubted that quantum mechanics was a true theory of the quantum world despite its successful predictions; they believed that the true theory of the atomic world was yet to arrive and that, when it arrived, it would be a deterministic one, so that it corresponds with the classical world. However, to this day, no new theory has been able to completely eliminate the nondeterministic nature of the quantum world, though research is still ongoing. If we assume the quantum world is inherently nondeterministic, another puzzling question arises: how did a deterministic reality arise from a nondeterministic one? Since the macroscopic world arose from the quantum world, this question is closely related to the question of how a particle in multiple states transitions to a particle in a single state, as seen in our previous article. There are other aspects of quantum mechanics that raise puzzling questions; an example is the wave function - an abstract mathematical entity used to represent particle states and calculate quantum probabilities. Electromagnetic wave functions have a corresponding physical entity, the electromagnetic wave (light), and likewise every other physical wave, but it is not certain whether the wave function in quantum mechanics has a corresponding physical wave. It is for now a philosophical debate: some say it has, while others say it does not. These puzzling questions have led to what is now called "the interpretations of quantum mechanics," which originally began during the early stages of the development of quantum mechanics with the debates between Albert Einstein and Niels Bohr, both among the early founders of quantum mechanics; the former (Einstein) opposed quantum mechanics because of its probabilistic nature while the latter (Bohr) supported it. The interpretations of quantum mechanics are explanations/theories developed to solve some of the mysteries and also give proper meaning to what quantum mechanics actually says. Some of these interpretations are discussed below.

Copenhagen interpretation

This interpretation is one of the oldest and the most widely accepted in the scientific community, because it directly (without further addition of concepts) accounts for why quantum mechanics is probabilistic - it strongly supports the nondeterministic nature of the quantum world.
In the classical world, equations can be used to predict the future outcome of an event; measurement/observation is only done to confirm the prediction, as per the standard of science. But in the quantum world, equations aren't enough to predict the future outcome of an event, because the equations of quantum mechanics predict that particles can be in multiple states at the same time when only a single state at a time is expected. According to the Copenhagen interpretation, a particle is initially in multiple states in a universe, and when a measurement/observation is made, the particle "randomly" takes a single state and the other states collapse/vanish. By random, it is meant that any state out of the many states can come into existence, which means that theoretically we cannot tell with certainty which state is going to come into existence; only observation can decide. We could try a statistical/probabilistic approach, only that our prediction would be less than 100 percent certain; it is only in classical physics that predictions are very close to or equal to 100 percent certainty. Also according to the interpretation, the wave function does not have a corresponding physical wave; it is merely a mathematical tool for representing states and calculating probabilities. The double-slit experiment seems to support the Copenhagen interpretation, but from a logical perspective this interpretation seems absurd, as can be seen from the Schrödinger's cat experiment - a thought experiment that places a classical entity (a cat) in a quantum event. According to the Copenhagen interpretation, the reality we observe does not exist when we are not observing it, and this contradicts the currently accepted account of the origin of the universe - the universe came into existence before us. If the universe came into existence before us, then what performed the observation? This interpretation also raises other puzzling questions: what really is an observer? Can an observer be unconscious, like a measuring instrument or other nonliving things?

Many worlds interpretation

This interpretation is the second most accepted interpretation; it was proposed in 1957 by Hugh Everett. According to this interpretation, each unique state among the many states of the particle corresponds to the state of the particle in a different universe. Therefore, when we talk about the particle being in multiple states at the same time, we are really talking about a multiverse - not multiple states at the same time in one universe. This eliminates the problematic idea of collapse to a random state as seen in the Copenhagen interpretation, but not totally. The many worlds interpretation suggests that quantum events are actually deterministic, but the theory it is trying to interpret - quantum mechanics - suggests otherwise: nondeterminism. The question now is, how do we reconcile what the two say? To resolve this issue, another concept known as quantum decoherence was introduced. Decoherence brings back the idea of observation causing collapse to a random state, which in turn gives rise to nondeterminism; but according to the many worlds interpretation this collapse is a false belief appearing as a true act, an appearance produced by quantum decoherence, whereas it is the Copenhagen interpretation that treats the collapse to a random state as a true act. We should note that the double-slit experiment seems to support the many worlds interpretation as well.
Pilot wave interpretation

This interpretation was first proposed by Louis de Broglie in 1927 and later extended by David Bohm. Unlike the previous two, it is not only an interpretation; it is also another formalism of quantum mechanics - its equations are different from those of standard quantum mechanics. According to this formalism, the wave function is associated with a real wave, just as the electromagnetic wave function is associated with the electromagnetic wave. This quantum wave guides the motion of the particles, and the state of the particles can be predicted using what is called a "guiding equation," which is the nonlocal part of the standard wave equation - the Schrödinger equation (a minimal sketch of such a guiding equation is given after this article). This formalism suggests the quantum world is deterministic, but there is a problem: for the guiding equation to predict the next state of the particle, its initial state must be known. Unfortunately, the initial state is not always known, and this brings nondeterminism back into play; in this case one begins to discuss conditional wave functions and how they apparently lead to observation causing collapse to a random state. We should note that this formalism predicts everything standard quantum mechanics predicts, and it also seems to be supported by the same double-slit experiment, especially when it comes to the concept of wave-particle duality observed there.

The interpretations discussed above are the oldest and most popular ones; there are many others, and as physicist David Mermin puts it, "New interpretations appear every year. None ever disappear." What is so surprising is that the same experiments seem to support most of the interpretations, which means no interpretation can rule out the others; it is now a matter of choice for us - you can pick whatever interpretation suits you, as long as it is self-consistent. However, this presents another puzzling question: is reality objective or subjective? Experiment, which is supposed to be based on objectivity, is now supporting different views - subjectivity - and a subjective reality can only exist if consciousness exists. Another way quantum mechanics suggests consciousness could be essential to the existence of reality can be seen from the different interpretations themselves: they all take into account, directly or indirectly, observation causing collapse to a random state in order to account for nondeterminism, and observation is mostly done by us, the supposed conscious entity, whether directly or indirectly. Whether reality depends on consciousness or not cannot be answered now by science; in fact, the question of whether reality is objective or subjective has been around long before modern science or quantum mechanics came into existence, and it has been and still is a philosophical topic. In the quantum world, the existence of reality seems to be strongly dependent on consciousness, but in the classical world it is otherwise; and at the same time, consciousness is yet to be understood. Whatever the answers to these weird questions might be, we will have to wait to find out - that is, if answers to such questions exist; the universe seems to be inherently paradoxical.

For further reading:
Interpretations of quantum mechanics
De Broglie–Bohm theory
Bohr–Einstein debates
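As promised above, here is a minimal 1D sketch of a de Broglie-Bohm guiding equation (my own toy illustration, not taken from the article; units ħ = m = 1). The wavefunction is a superposition of two free plane waves with unequal amplitudes, so |ψ| never vanishes and the guidance velocity dx/dt = Im(ψ′/ψ) is well defined everywhere; each trajectory is then fixed deterministically by its initial position, which plays the role of the "hidden variable."

    import numpy as np

    k1, k2, c = 1.0, 2.0, 0.5   # wavenumbers and relative amplitude (arbitrary)

    def psi(x, t):
        # Superposition of two free plane waves, each with phase k*x - k^2*t/2.
        return (np.exp(1j * (k1 * x - 0.5 * k1**2 * t))
                + c * np.exp(1j * (k2 * x - 0.5 * k2**2 * t)))

    def dpsi(x, t):
        # Spatial derivative of psi.
        return (1j * k1 * np.exp(1j * (k1 * x - 0.5 * k1**2 * t))
                + 1j * c * k2 * np.exp(1j * (k2 * x - 0.5 * k2**2 * t)))

    # Euler-integrate a few trajectories from different initial positions.
    dt, steps = 0.01, 1000
    x = np.linspace(-2.0, 2.0, 5)
    for n in range(steps):
        t = n * dt
        x = x + np.imag(dpsi(x, t) / psi(x, t)) * dt   # the guiding equation
    print(x)   # final positions, fixed entirely by the initial ones

Run with different initial positions, the same equation produces different but perfectly predictable outcomes; in this picture the apparent randomness of measurement results is traced back to our ignorance of the initial positions.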
[ MEMPAVOPHYS ] VL Physical Principles of Mechatronics
Workload: 4.5 ECTS
Education level: M1 - Master's programme, 1st year
Study areas: Physics
Responsible person: Thomas Klar
Hours per week: 3
Coordinating university: Johannes Kepler University Linz
Detailed information
Original study plan: Master's programme Mechatronics 2021W
Objectives: Lecture on the physical fundamentals of mechatronics, with selected topics from the fields of optics, thermodynamics, quantum mechanics and solid-state physics, such as wave optics, interference, polarization, the kinetic theory of gases, quantum phenomena, the Schrödinger equation, the Bloch theorem and band structure, and insulator/metal/semiconductor conductivity.
Subject: The educational content of the lecture breaks down as follows:
1. Optics: electromagnetic waves in vacuum; light in matter (the speed of light, absorption, scattering); radiation optics; interference and diffraction
2. Elements of thermodynamics: fundamentals; kinetic theory of gases
3. Elements of quantum mechanics: heat radiation, Planck's constant, the photon as a particle; wave-particle dualism; uncertainty; stimulated emission and the laser; basic concepts of quantum mechanics; atomic orbitals; the hydrogen molecule ion
4. Introduction to solid state physics: structure of solids; the Bloch theorem and electron energy bands; metals; semiconductors
Criteria for evaluation: written test; re-examination oral (optional)
Methods: lecture and lecture experiments
Language: German
Changing subject? No
Further information: none
On-site course. Maximum number of participants: -. Assignment procedure: assignment according to sequence.
Scientists achieve reliable quantum teleportation for first time
A mass of optic equipment rigged by the research team at Delft to guide photons between the entangled particles. Hanson lab / Delft University of Technology
Albert Einstein once told a friend that quantum mechanics doesn't hold water in his scientific world view because "physics should represent a reality in time and space, free from spooky actions at a distance." That spooky action at a distance is entanglement, a quantum phenomenon in which two particles, separated by any amount of distance, can instantaneously affect one another as if part of a unified system. Now, scientists have successfully hijacked that quantum weirdness -- doing so reliably for the first time -- to produce what many sci-fi fans have long dreamt up: teleportation. No, not beaming humans aboard the USS Enterprise, but the teleportation of data. Physicists at the Kavli Institute of Nanoscience, part of the Delft University of Technology in the Netherlands, report that they sent quantum data concerning the spin state of an electron to another electron about 10 feet away. Quantum teleportation has been recorded in the past, but the results in this study have an unprecedented replication rate of 100 percent at the current distance, the team said. Thanks to the strange properties of entanglement, this allows for that data -- only quantum data, not classical information like messages or even simple bits -- to be teleported seemingly faster than the speed of light. The news was reported first by The New York Times on Thursday, following the publication of a paper in the journal Science. Proving Einstein wrong about the purview and completeness of quantum mechanics is not just an academic boasting contest. Proving the existence of entanglement and teleportation -- and getting experiments to work efficiently, in larger systems and at greater distances -- holds the key to translating quantum mechanics to practical applications, like quantum computing. For instance, quantum computers could utilize that speed to unlock a whole new generation of unprecedented computing power. Quantum teleportation is not teleportation in the sense one might think. It involves achieving a certain set of parameters that then allow properties of one quantum system to get tangled up with another so that observations are reflected simultaneously, thereby "teleporting" the information from one place to another. To do this, researchers at Delft first had to create qubits out of classical bits, in this case electrons trapped in diamonds at extremely low temperatures that allow their quantum properties, like spin, to be observed. A qubit is a unit of quantum data that can hold multiple values simultaneously thanks to an equally integral quantum phenomenon called superposition, a term fans of the field will associate with the Schrödinger equation, as well as with Heisenberg's uncertainty principle, and with the idea that something exists in all possible states until it is observed. It's the same way quantum computing may one day surpass the speeds of classical computing, by allowing calculations to spread bit values between 0, 1 or any probabilistic value between the two numbers -- in other words, a superposition of both figures. With qubits separated by a distance of three meters, the researchers were able to observe and record the spin of one electron and see that reflected in the other qubit instantly.
It's an admittedly wonky conception of data teleportation that requires a little head scratching before it begins to clear up. Still, its effects could be far reaching. The researchers are attempting to increase that distance to more than a kilometer, which would be ample leeway to test whether entanglement is a consistent phenomenon and whether the information is traveling faster than the speed of light. Such experiments would more definitively knock down Einstein's disqualification of entanglement for its violation of classical mechanics. "There is a big race going on between five or six groups to prove Einstein wrong," Ronald Hanson, a physicist leading the research at Delft, told The New York Times. "There is one very big fish." Update at 10:08 p.m. PT: Added photos from Delft University and the research team's explanatory YouTube video.
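To spell out the logic behind such experiments, here is a toy statevector simulation of textbook one-qubit teleportation (my own NumPy sketch of the standard protocol, not of the Delft diamond-spin setup). Qubit 0 holds the state to be teleported, and qubits 1 and 2 share a Bell pair; after Alice's measurement, two classical bits tell Bob which correction restores the original state -- which is also why no usable information actually arrives faster than light.

    import numpy as np

    rng = np.random.default_rng(1)

    # State to teleport: |psi> = a|0> + b|1>, random and normalized.
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)

    # Qubit 0 = psi; qubits 1 and 2 share the Bell pair (|00> + |11>)/sqrt(2).
    bell = np.array([1, 0, 0, 1], complex) / np.sqrt(2)
    state = np.kron(psi, bell).reshape(2, 2, 2)      # indices (q0, q1, q2)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]], complex)
    Z = np.array([[1, 0], [0, -1]], complex)

    # Alice: CNOT(q0 -> q1), then Hadamard on q0.
    state[1] = state[1][::-1, :].copy()              # CNOT: flip q1 where q0 = 1
    state = np.einsum('ij,jkl->ikl', H, state)       # H on q0

    # Alice measures q0 and q1 in the computational basis.
    probs = np.sum(np.abs(state) ** 2, axis=2)       # P(m0, m1)
    m0, m1 = divmod(int(rng.choice(4, p=probs.ravel())), 2)
    bob = state[m0, m1] / np.linalg.norm(state[m0, m1])

    # Bob: the two classical bits select the correction X^m1 then Z^m0.
    if m1: bob = X @ bob
    if m0: bob = Z @ bob
    print(np.allclose(bob, psi))                     # True: the state arrived

The final check prints True: Bob's qubit ends up in the original state, but only once the two classical measurement bits have been transmitted (at light speed or slower) and used.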
Quantum Logic in Historical and Philosophical Perspective

Quantum Logic (QL) was developed as an attempt to construct a propositional structure that would allow for describing the events of interest in Quantum Mechanics (QM). QL replaced the Boolean structure, which, although suitable for the discourse of classical physics, was inadequate for representing the atomic realm. The mathematical structure of the propositional language about classical systems is a power set, partially ordered by set inclusion, with a pair of operations that represent conjunction and disjunction. This algebra is consistent with the discourse about both classical and relativistic phenomena, but inconsistent in a theory that prohibits, for example, giving simultaneous truth values to the following propositions: “The system possesses this velocity” and “The system is in this place.” The proposal of the founding fathers of QL was to replace the Boolean structure of classical logic by a weaker structure which relaxed the distributive properties of conjunction and disjunction. During its development, QL started to refer not only to a logic, but also to the multiple lines of research that attempted to understand QM from a logical perspective. This article provides a map of these multiple approaches in order to introduce the very different strategies and problems discussed in the QL literature. When possible, unnecessary formulas are avoided in order to give an intuitive grasp of the concepts before deriving or introducing the associated mathematics. However, for those readers who wish to engage more profoundly with the subject of QL, the article provides an extensive bibliography.

Table of Contents
1. Logic and Physics
2. The Logical Structure of Quantum Mechanics
3. The Origin of Quantum Logic
4. Quantum Logic in Historical and Philosophical Perspective
   a. The Neo-Kantian Logical Path
   b. Quantum Logical Operationalism
   c. Is Quantum Logic Empirical?
   d. Modal Interpretations
   e. The Czech-Slovakian and Italian Schools
   f. The Brazilian School
5. Ongoing Developments and Debates
   a. New Quantum Structures
   b. Dynamical Logics, Category Theory and Quantum Computation
   c. Paraconsistency and Quantum Superpositions
   d. Contradiction and Modality in the Square of Opposition
   e. Quantum Probability
   f. Potentiality and Actuality
6. Final Remarks
7. References and Further Reading

1. Logic and Physics

QL relates the two seemingly different disciplines of physics and logic. These disciplines have been intimately related since their origin. It was Aristotle who created classical logic and used it in order to develop his own physical and metaphysical scheme, providing an answer to the problem of movement and knowledge set down by the Heraclitean and Eleatic schools of thought. Movement was then regarded by Aristotle in terms of his hylomorphic scheme, as the path from a potential (undetermined, contradictory and non-identical) realm to an actual (determined, non-contradictory and identical) realm of existence. The notion of entity was then characterized by three main logical and ontological principles: the Principle of Existence (PE), which allowed Aristotle to claim existence about that which is predicated; the Principle of Non-Contradiction (PNC), which permitted him to argue that that which exists possesses non-contradictory properties; and the Principle of Identity (PI), which allowed him to claim that the predicated existent is “the same,” or remains identical to itself, through time.
Aristotle’s architectonic determined the fate of both classical and medieval physics, as well as metaphysics. The transformation from medieval to modern science coincides with the abolition of the Aristotelian hylomorphic metaphysical scheme as the foundation of knowledge. However, the basic structure of his metaphysical scheme and his logic still remained the basis for correct reasoning. As noted by Karin Verelst and Bob Coecke: Dropping Aristotelian metaphysics, while at the same time continuing to use Aristotelian logic as an empty ‘reasoning apparatus’ implies therefore losing the possibility to account for change and motion in whatever description of the world that is based on it. The fact that Aristotelian logic transformed during the twentieth century into different formal, axiomatic logical systems used in today’s philosophy and science doesn’t really matter, because the fundamental principle, and therefore the fundamental ontology, remained the same ([40], p. xix). This ‘emptied’ logic actually contains an Eleatic ontology, that allows only for static descriptions of the world. [231, p. 173] It was Isaac Newton who was able to translate into a closed mathematical formalism both the ontological presuppositions present in Aristotelian (Eleatic) logic, and the materialistic ideal of ‘res extensa’ together with actuality as its mode of existence. The term ‘actual’ refers here to preexistence (within the transcendent representation) and not to the observation hic et nunc. Every physical system may be described exclusively by means of its actual properties. The change of the system may be accounted for by the change of its actual properties. Potential or possible properties are then only considered as the points to which the system might arrive in a future instant of time. As Dennis Dieks states: “In classical physics the most fundamental description of a physical system (a point in phase space) reflects only the actual, and nothing that is merely possible. It is true that sometimes states involving probabilities occur in classical physics: think of the probability distributions ρ in statistical mechanics. But the occurrence of possibilities in such cases merely reflects our ignorance about what is actual. The statistical states do not correspond to features of the actual system, but quantify our lack of knowledge of those actual features.” [98, p. 124-125] In QM however, the different structure of the physical properties of the system determines a change of nature regarding the meaning of possibility and potentiality. Indeed, QM has been related to modality since 1926 when Max Born interpreted the quantum wave function Ψ in terms of a density of probability. However, it was clear from the very beginning that this new quantum possibility was something completely different from that considered in classical theories. [The] concept of the probability wave [in quantum mechanics] was something entirely new in theoretical physics since Newton. Probability in mathematics or in statistical mechanics means a statement about our degree of knowledge of the actual situation. In throwing dice we do not know the fine details of the motion of our hands which determine the fall of the dice and therefore we say that the probability for throwing a special number is just one in six. The probability wave function, however, meant more than that; it meant a tendency for something. [152, p. 
42] According to Werner Heisenberg, the concept of the probability wave “was a quantitative version of the old concept of ‘potentia’ in Aristotelian philosophy. It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.” [152, p. 42] Indeed, contrary to classical possibility, which only refers to our incomplete knowledge of an actual state of affairs, quantum possibilities interact with each other. This fact, completely foreign to classical theories, is exploited by present technological developments in quantum information processing, for example quantum computation, quantum cryptography, and quantum teleportation. However, apart from this very fundamental question regarding the realm of existence which the logical structure of QM forces us to consider, there are many other aspects which have been a subject of discussion in the literature since the origin of QM. As a matter of fact, the interpretation of Planck’s quantum postulate, the superposition principle, the non-commutativity of observables or the identity of quantum particles—just to mention a few—pose important problems which help us to coherently consider what QM is talking about. QL has been an important tool for discussing all these fascinating subjects.

2. The Logical Structure of Quantum Mechanics

In logical terms, Newtonian mechanics may be described through “the logic of an omniscient mind in a deterministic universe” [54], because in such a universe any assertion is semantically decided. That is, either proposition p or its negation ¬p is true (the excluded middle principle), the assertions p and ¬p cannot be simultaneously true (PNC), meanings are sharp and unambiguous, and the meaning of a compound expression is determined by the meanings of its parts. From a mathematical perspective, both the syntactic and the semantic aspects of classical propositional logic can be described completely in terms of Boolean algebra. However, the structure of QM does not fit these features. The main reason for this is that in physical theories the information about the state of affairs is encoded in what is called “the physical state.” Both in classical mechanics and in QM there are states of maximal knowledge, but the logical implications that may be grasped from each situation are not the same. While in classical mechanics maximal information about a situation implies logical completeness, meaning that every assertion about the situation represented by the state is either true or false, in QM a state cannot decide the truth or falsity of all propositions about events. This is because there are states related with both a property and its negation, called “superposition states.” In classical physics every system can be described by specifying its actual properties. Mathematically, this happens by representing the state of a system of mass m by a point (p, q) in its corresponding phase space Γ of positions q and momenta p. Newton’s law tells us how this point moves along the path determined by the initial conditions. Physical magnitudes are represented by real functions over Γ. These functions commute with each other and can be interpreted as all possessing definite values at any time, independently of physical observations. Physical events are represented by subsets of Γ. The power set of Γ, endowed with the set-theoretical operations of intersection (∩), union (∪) and set-complement, gives rise to a Boolean algebra.
Interpreting these operations as the logical connectives, they represent and (∧), or (∨) and not (¬). The link between the algebraic structure of classical mechanics and classical logic is obvious. When dealing with many degrees of freedom, a statistical description is useful. The logical-algebraic structure associated with classical mechanics admits the definition of a probability measure over it, with its elements considered as events. The resulting probability is a classical Kolmogorovian probability. According to John von Neumann’s axiomatization of QM, the mathematical representation of a physical system is a complex separable Hilbert space H, and a pure state is represented by a ray in H. Differently from the classical scheme, physical magnitudes are represented by self-adjoint operators on H that, in general, do not commute under multiplication. The values that any magnitude may take are the eigenvalues of the corresponding operator, each one of which comes with its associated eigenstate. The non-commutativity of operators has problematic interpretational consequences, for it is then difficult to affirm that the quantum magnitudes thus represented are simultaneously pre-existent to observation. The evolution of the state is given by the Schrödinger equation which, due to its linearity, implies the formal existence of quantum superpositions of states. The fact that states may be linearly combined forbids the use of mere subsets as representatives of propositions; they are instead well represented by closed subspaces of H. Historically, the first approach to an idea of QL is in Chapter 3 of von Neumann’s book on the mathematical formulation of QM [234], where he relates linear operators, namely the projections on state space H, with the representatives of “experimental propositions” affiliated with the system: “[…] the relation between the properties of a physical system on the one hand, and the projections on the other, makes possible a sort of logical calculus with these.” In fact, closed subspaces are in one-to-one correspondence with the projectors over them: “If we introduce, along with the projections E, the closed linear manifold R belonging to them (E = P_R), then the closed linear manifolds correspond equally to the properties of S [S is the system].” [234, p. 250] The set of closed subspaces of H, ordered by inclusion and equipped with adequate definitions of algebraic operations, gives rise to a lattice [180], namely a partially ordered set (L, ∨, ∧) in which every pair of elements has a supremum called join (∨) and an infimum called meet (∧) that satisfy:

1. the commutative laws for the meet and join operations: x ∨ y = y ∨ x, x ∧ y = y ∧ x
2. the absorption laws: x ∨ (x ∧ y) = x, x ∧ (x ∨ y) = x
3. the associative laws: x ∨ (y ∨ z) = (x ∨ y) ∨ z, x ∧ (y ∧ z) = (x ∧ y) ∧ z

The lattice may have a maximum (or top) 1, which is the identity for the ∧ operation, and a minimum (or bottom) 0, the identity for the ∨ operation. A lattice (L, ∨, ∧, 1, 0) is said to be modular when, for all elements x, y and z: if x ≤ z, then x ∨ (y ∧ z) = (x ∨ y) ∧ z. An orthocomplement x⊥ of the element x is defined in such a way that the following are satisfied:

1. the complement law: x ∨ x⊥ = 1 and x ∧ x⊥ = 0
2. the involution law: x⊥⊥ = x
3. the order-reversing law: if x ≤ y then y⊥ ≤ x⊥

A lattice equipped with an orthocomplementation is called an ortholattice. The lattice of subspaces of H, denoted by L(H), is called the Hilbert lattice associated to H and motivates the standard QL [41]. This is the proposal of Garrett Birkhoff and J.
von Neumann for the algebraic structure that organizes the propositions of the language of QM. This is a quite different structure than the classical one. In fact, as mentioned above, in classical logic the propositions organize themselves in the power set with the operations ∧, ∨ and ¬ representing the classical language connectives and, or and not. This structure constitutes a Boolean algebra that satisfies the distributive laws of and and or:

x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)

Closed subspaces of a Hilbert space H form an algebra called a Hilbert lattice, denoted L(H). In any Hilbert lattice the meet operation ∧ corresponds to the set-theoretical intersection between subspaces, and the join operation ∨ corresponds to the smallest closed subspace of H containing the set-theoretical union of the subspaces. In this way, the ordering relation ≤ associated to the lattice corresponds to the set-theoretical inclusion of subspaces. Note that L(H) is a bounded lattice where H is the maximum, denoted by 1, and the empty subspace is the minimum, denoted by 0. This lattice, equipped with the relation of orthogonal complement, can be described as an ortholattice [162].

3. The Origin of Quantum Logic

The official birth of QL came with the 1936 seminal paper “The logic of quantum mechanics,” where Birkhoff and von Neumann made the proposal of a non-classical logic for the theory, arguing that the problem of whether the Hilbert space formalism displayed a logical structure could prove useful to the understanding of QM. In the introduction to the paper they make the point: One of the aspects of quantum theory which has attracted the most general attention is the novelty of the logical notions it presupposes. It asserts that even a complete mathematical description of a physical system S does not in general enable one to predict with certainty the result of an experiment on S, and that in particular one can never predict with certainty both the position and the momentum of S (Heisenberg’s uncertainty principle). It further asserts that most pairs of observations cannot be made on S simultaneously (Principle of Non-commutativity of Observations). […] The object of the present paper is to discover what logical structure one may hope to find in physical theories which, like quantum mechanics, do not conform to classical logic. [41] As said above, the propositional structure that gave rise to QL was the ortholattice ⟨L(H), ∧, ∨, ⊥, 1, 0⟩. The different characters proposed for the representatives of the logical connectives completely change the meaning of these connectives. A relevant feature of ∨ is that, differently from the case in classical semantics, a quantum disjunction may be true even if neither of its members is true. This reflects, for example, the case in which we are dealing with a state such as that of a spin 1/2 system which is in a linear combination of the states up and down. Both propositions, “the state is up” and “the state is down,” may have no definite truth value (the excluded middle principle is violated), but the disjunction “the state is up or the state is down” is a tautology. The distinguishing character of the structure is the failure of the distributive law, a law that holds in classical logic.
This means that if x, y and z are propositions,

x ∧ (y ∨ z) ≠ (x ∧ y) ∨ (x ∧ z)

Birkhoff and von Neumann remarked on this fact in their paper: “[…] whereas logicians have usually assumed that properties of negation were the ones least able to withstand a critical analysis, the study of mechanics points to the distributive identities as the weakest link in the algebra of logic.” And they concluded that “the propositional calculus of quantum mechanics has the same structure as an abstract projective geometry.” However, L(H) satisfies a kind of weak distributivity. In the case of a finite-dimensional Hilbert space H, the ortholattice L(H) is modular, that is, it satisfies the following condition, known as the modular law:

if x ≤ y, then x ∨ (y ∧ z) = y ∧ (x ∨ z)

The modular law is equivalent to the identity (x ∧ y) ∨ (y ∧ z) = y ∧ ((x ∧ y) ∨ z). In the case of an infinite-dimensional Hilbert space the modular law is not satisfied. In 1937, Kodi Husimi [156] showed that a weaker law, the so-called orthomodular law, is satisfied in the ortholattice L(H). The orthomodular law says:

if x ≤ y, then x ∨ (x⊥ ∧ y) = y

and it is equivalent to the identity x ∨ y = ((x ∨ y) ∧ y⊥) ∨ y [180]. This is an important point for the purpose of defining a probability measure that could be interpreted in terms of relative frequencies (see for example [210, Ch. 7]). But, when taking the lattice elements as events, this is not possible. Josef Maria Jauch remarked that: Birkhoff and von Neumann […] have tried to justify modularity by pointing out that on finite modular lattices one can define a dimension function […] Such a function has the characteristic properties of a probability measure, and d(a) would represent the a priori probability for finding the system with property a when nothing is specified as to its preparation. It is known that there are systems for which such a finite a priori probability does not exist. [159, p. 83] Since the lattice of subspaces (or projection operators) L(H) was not in general a modular one—precluding a nice definition of probability (see for example [210, Ch. 7] and [212])—von Neumann abandoned the Hilbert space structure for the formulation of QM and turned to the study of rings of operators, which in turn gave rise to von Neumann algebras [235]. Before discussing the historical development it has to be said that the name “quantum logic” is somewhat misleading. As Dalla Chiara et al. remark: “by standard quantum logic one usually means the complete orthomodular lattice based on the closed subspaces in a Hilbert space. Needless to observe, such a terminology that identifies a logic with a particular example of an algebraic structure turns out to be somewhat misleading from the strict logical point of view.” [78] Different forms of QL may be constructed by building algebraic or Kripkean semantics over the algebraic structure of the Hilbert space (see for example [81]).

4. Quantum Logic in Historical and Philosophical Perspective

QL has been a field of debate in philosophy as well as in quantum physics. Within QL many different philosophical approaches and lines of research have been developed, discussed and addressed. From neo-Kantianism to empiricism and Aristotelian realism, quantum logical research has opened the door to one of the most interesting debates in both physics and philosophy of physics in the second half of the last century.
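The failure of distributivity in L(H) can be checked concretely in the smallest example. The following sketch (my own illustration, not from the sources cited above) works in C², takes x = span{|0⟩} (spin up along z) and y = span{|+⟩}, z = span{|−⟩} (spin up/down along x), represents each subspace by its projector, computes joins as spans and meets as intersections (via von Neumann's alternating-projection limit), and confirms that x ∧ (y ∨ z) differs from (x ∧ y) ∨ (x ∧ z):

    import numpy as np

    ket0 = np.array([[1.0], [0.0]])
    ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)
    ketm = np.array([[1.0], [-1.0]]) / np.sqrt(2)

    def proj(*vecs):
        # Projector onto the span of the given column vectors.
        A = np.hstack(vecs)
        return A @ np.linalg.pinv(A)

    def join(P, Q):
        # Smallest subspace containing both ranges: span of all columns.
        return proj(np.hstack([P, Q]))

    def meet(P, Q):
        # Intersection of ranges: for projectors, (PQ)^n converges to the
        # projector onto range(P) ∩ range(Q) (von Neumann).
        M = P @ Q
        for _ in range(100):
            M = M @ M
        return np.round(M.real, 8)

    x, y, z = proj(ket0), proj(ketp), proj(ketm)

    lhs = meet(x, join(y, z))            # x ∧ (y ∨ z) = x, since y ∨ z = C^2
    rhs = join(meet(x, y), meet(x, z))   # (x ∧ y) ∨ (x ∧ z) = 0
    print(np.allclose(lhs, rhs))         # False: distributivity fails

Here y ∨ z is the whole space, so the left-hand side is x itself, while x meets each of y and z only in the zero subspace; this is exactly the spin-1/2 situation described above, where the disjunction is a tautology although neither disjunct is true.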
However, even though there are many different perspectives regarding QL, one might characterize its most general interpretational characteristic in terms of a strategically subversive attitude towards classical logic and the very foundations of metaphysical understanding. In this respect, in order to clarify the vast map of interpretations of QM and to discover the physical meaning of the theory, one can consider the strategies that different interpreters have taken. While the first group started from a set of (classical) metaphysical presuppositions and intended to change the formalism in order to fit QM into their desired metaphysical picture [for example, Bohmian mechanics, Ghirardi-Rimini-Weber theory], a second group concentrated their efforts—taking as a standpoint the orthodox formalism—on trying to understand the symmetries and characteristics of the formalism in order to derive a suitable interpretation of the theory. Much more open to an original metaphysical development that would allow us to understand what the world is like according to QM, QL—apart from some minor exceptions—is clearly part of the latter group.

a. The Neo-Kantian Logical Path

As recalled by Heisenberg in Physics and Philosophy [152], the concern about objectivity and the use of ordinary language for quantum concepts was an important focus of discussion during the development of the theory: The most difficult problem, however, concerning the use of language arises in quantum theory. Here we have at first no simple guide for correlating the mathematical symbols with concepts of ordinary language; and the only thing we know from the start is the fact that our common concepts cannot be applied to the structure of the atoms. […] The analysis can now be carried further in two entirely different ways. We can either ask which language concerning the atoms has actually developed among physicists in the thirty years that have elapsed since the formulation of QM. Or we can describe the attempts for defining a precise scientific language that corresponds to the mathematical scheme. In answer to the first question one may say that the concept of complementarity introduced by Bohr into the interpretation of quantum theory has encouraged physicists to use an ambiguous language rather than an unambiguous language. [152, p. 153] Criticizing this use, Heisenberg argues that “it seems rather doubtful whether an expectation [referring to the use of classical concepts] should be called objective.” A different approach, initiated by Birkhoff and von Neumann and continued by Carl Friedrich von Weizsäcker in the fifties, would be “to define a different precise language which follows definite logical patterns in conformity with the mathematical scheme.” Carl Friedrich von Weizsäcker, as well as Hans Reichenbach [208], did so by modifying the principle of excluded middle. As this principle is used in everyday conversation, von Weizsäcker proposed to distinguish different levels of language: one level referring to objects, a second level to statements about objects, a third level to statements about statements about objects, and so on. The modification of classical logic has to refer, first of all, to the level of objects. As the state of a system allows us to predict with some probability the different properties it could possess, von Weizsäcker introduced the concept of “degrees of truth.” For each pair of properties, the question about its truth is not decided. But ‘not decided’ is by no means equivalent to ‘not known’.
This kind of many valued logic may be extended to the successive levels of language. As Heisenberg remarks, it is not clear at first sight which kind of ontology would underpin these modified logical patterns; the main concern was the project of finding a logical system associated with the algebraic structure of the theory. Von Weizsäcker advanced this approach from the idea of reconstructing physics in terms of yes-no alternatives, called ur-alternatives (from the German prefix ‘Ur’: original), and of establishing a connection between quantum structures and the structure of space-time. These ur-alternatives are considered the fundamental objects in physics from which, in principle, any physical object can be built. Thus a turn is made from a notion related to information to the notion of physical object: objects are reduced or even “made out of” information [178]. Later on, Holger Lyre would also argue in favor of this possibility: In quantum theory in particular, this view has a lot of plausibility. Quantum objects are represented in terms of their Hilbert state spaces, their quantum states correspond to empirically decidable alternatives. Any quantum object may further be decomposed or embedded into the tensor product of two objects, nowadays called quantum bits or qubits. Urs, therefore, are in fact nothing but qubits. [178] In the seventies, Peter Mittelstaedt, a student of Heisenberg and von Weizsäcker, continued QL research framed within the neo-Kantian tradition [48]. Contrary to classical physics, where all propositions about a system can be predicated together, quantum properties may be assigned values only in a contextual manner [167], thus forbidding an interpretation in terms of substance. According to this view, the category of substance can only be applied to compatible observables; that is, in the case in which the state of the system is such that these observables may be assigned definite values. Classical logic, in turn, allows truth values for all propositions and thus is not adequate for propositions about a quantum system, where the empirical content of propositions is relevant when applying the rules of logic. With the assumption that the laws of logic ought to be universally valid, Mittelstaedt turned to search for a different foundation of logic that could allow proofs to be independent of the empirical content of statements. First, he called attention to the fact that commensurability between any two propositions is implicit in classical logic. Then, starting from elementary propositions that assert that a system has a certain property, which can be valued by testing the property in an experiment, the concept of dialog-game was introduced. Several kinds of compound propositions may be defined by specifying the dialog-game. By adding a commensurability relation to the Hilbert lattice before constructing a formal propositional logic, Mittelstaedt was able to complete a calculus that is a model of L. By means of this concept of commensurability, the dialog-game gives a complete frame for argumentation [185, Ch. 4]. He then introduced modalities and probability as metalinguistic concepts [186, 187, 188], as well as establishing that only by employing adequate notions of ‘temporal identity’ and ‘transworld identity’ might a Kripke-like semantics be formulated in QL [189, 190].
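Lyre's identification of urs with qubits, quoted above, can be made concrete in a few lines. The following sketch (in Python with numpy; the names are ours) encodes an ur as a unit vector in C², an undecided ur as a superposition, and a composite object as a vector in the tensor product of two such spaces.

```python
import numpy as np

# Sketch of the ur/qubit identification: an "ur" is a yes-no alternative,
# i.e. a unit vector in C^2, and larger objects are embedded in tensor
# products of such two-dimensional spaces.

yes = np.array([1.0, 0.0], dtype=complex)   # the alternative "yes"
no  = np.array([0.0, 1.0], dtype=complex)   # the alternative "no"

ur = (yes + no) / np.sqrt(2)   # an undecided ur: a superposition

# A composite of two urs lives in C^2 (x) C^2 = C^4 (Kronecker product).
pair = np.kron(ur, yes)

print(pair.shape)                             # (4,) -- a two-qubit object
print(np.isclose(np.linalg.norm(pair), 1.0))  # normalization is preserved
```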
During the eighties and nineties, in line with the neo-Kantian QL line of research, the French philosopher Michel Bitbol analyzed the different alternatives of the language of physical properties and their role in objectivity. Although he admitted that Kant’s reasoning had to be greatly altered to become applicable to QM, he nevertheless outlined a derivation of QL from transcendental arguments [43]. First, contextuality is pointed to as the main characteristic that has to be focused on when applying the program. In the classical case: […] a phenomenon is usually (or even always) relative to a certain context which defines the range of possible phenomena to which it belongs. […] As long as the contexts can be combined, or at least as long as the phenomena can be made indifferent to the order and chronology of use of the contexts, nothing prevents one from merging the distinct ranges of possible phenomena relative to each context into a single range of possible conjunctions of phenomena. This being done, one may consider that the new range of possible compound phenomena is relative to a single ubiquitous context which is not even worth mentioning. [43] In classical physics, the rules of classical logic hold in every context, but they also hold when merging the contexts. This is not the case in QM. Although Boolean algebra and the corresponding laws of classical logic may be used to deal with propositions about qualities in each context, when considering them all together the structure is that of L(H). To show explicitly how the different languages link together, classical languages using classical connectives are implemented in each context; then a meta-language is constructed using a relation of implication, that is, one language implies another if and only if every sentence in the first is also a sentence in the second. This implication is broader than the mere ‘union’ of both languages because it contains not only the propositions of each contextual language, their conjunctions and disjunctions, but also new ones. The combination of contexts has more consequences than the ones that occur when they are used separately. This construction is shown to be nothing but an orthocomplemented non-distributive lattice [42, Annexe I]. Thus, Bitbol [43] concludes that “the specific structure of QL is unavoidable when unification of contextual languages at a meta-linguistic level is demanded. In this sense, one can say that QL has been derived by means of a transcendental argument: it is a condition of possibility of a meta-language able to unify context-dependent experimental languages.” For a complete review of the neo-Kantian line of research within QM we refer to [163]. b. Quantum Logical Operationalism The Birkhoff-von Neumann paper initiated the search for an axiomatic theory in which the physically unjustified Hilbert space structure would be derived from a set of physically motivated axioms, giving particular importance to the concept of experimental propositions. Following this line of thought, George Mackey published in 1963 a monograph [179] in which he recovered von Neumann’s idea of “projections as propositions” [234, p. 247]. As projections have only two eigenvalues, 0 and 1, one may think of the proposition associated with a projection as the answer “yes” or “no” to the corresponding question. Thus, Mackey referred to the propositions affiliated with a physical system as questions [179, p.
64] and, under a reasonable axiomatization, Mackey showed that the questions form an orthomodular lattice. In this frame, the question of “which measures on questions are to be regarded as states?” [179, p. 85] was answered by Mackey’s student Andrew Gleason: A measure on the closed subspaces means a function µ which assigns to every closed subspace a nonnegative real number, such that if {Ai} is a countable collection of mutually orthogonal subspaces having closed linear span B, then µ(B) = Σµ(Ai). It is easy to see that such a measure can be obtained by selecting a vector v and, for each closed subspace A, taking µ(A) as the square of the norm of the projection of v on A. Positive linear combinations of such measures lead to more examples and, passing to the limit, one finds that, for every positive semi-definite self-adjoint operator T of the trace class, µ(A) = tr(T P_A), where P_A denotes the orthogonal projection on A, defines a measure on the closed subspaces. It is the purpose of this paper to show that, in any separable Hilbert space of dimension at least three, whether real or complex, every measure on the closed subspaces is derived in this fashion. [144] In some sense, Mackey’s program is a reconstruction of QM as a non-classical probability calculus. Mackey’s investigations on the foundations of QM renewed interest in the somewhat forgotten subject of QL, and also in its connection with the study of orthomodular lattices. Varadarajan’s and Jauch’s books [230, 159] follow from this. For example, some mathematical aspects of the notion of probability involved by the density operator have been studied by Veeravalli Varadarajan [229]. But it was the representation theorem of Constantin Piron [194] which clarified the field. The theorem states that if L is a complete orthocomplemented atomic lattice which is weakly modular and satisfies the covering law, then each irreducible component of the lattice L can be represented as the lattice of all biorthogonal subspaces of a vector space V over a division ring K. Solèr’s theorem then proves that an infinite-dimensional orthomodular space over a division ring is necessarily a Hilbert space over the real numbers, the complex numbers or the quaternions [219]. In the sixties, Jauch and Piron [194, 159] also aimed at reconstructing the formalism of QM from first principles, with special interest in the relation between concepts and real physical operations that can be performed in the laboratory. For example, states are defined as “the result of a series of physical manipulations on the system which constitute the preparation of the state.” And it is emphasized that “[t]wo states are identical if the relevant conditions in the preparation of the state are identical. (The distinction between the system and its states cannot be maintained under all circumstances with the precision implied by this definition. The reason is that systems which we regard under normal circumstances as different may be considered as two different states of the same system. An example is a positronium and a system of two photons.)” [159, p. 92] The same prescriptions follow for propositions: “the composed proposition a ∧ b denotes the measurement of a and b.” [159, Sect. 5.3] Due to the prescription that every notion should be defined in terms of operations, this line of research is called operationalism.
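The easy direction of Gleason's theorem quoted above, namely that every operator T of the kind described defines an additive measure µ(A) = tr(T P_A), can be checked numerically; the hard part of the theorem is the converse. A minimal sketch, with an arbitrary 3×3 density matrix standing in for T (the theorem requires dimension at least three):

```python
import numpy as np

# mu(A) = tr(T P_A) for a positive trace-one operator T defines an additive
# measure on closed subspaces; Gleason's theorem says every measure (in
# dimension >= 3) arises this way. Here T is a random 3x3 density matrix.

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
T = M @ M.conj().T                 # positive semi-definite
T = T / np.trace(T).real           # normalized to trace one

def proj(v):                       # projector onto the ray spanned by v
    return np.outer(v, v.conj())

def mu(P):                         # the Gleason measure of the subspace of P
    return np.trace(T @ P).real

e1, e2, e3 = np.eye(3, dtype=complex)

# Additivity on orthogonal subspaces: span{e1, e2} has projector P1 + P2.
print(np.isclose(mu(proj(e1) + proj(e2)), mu(proj(e1)) + mu(proj(e2))))  # True
print(np.isclose(mu(np.eye(3)), 1.0))    # the whole space has measure one
```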
Operational QL involves the fact that the yes-no answers to the elementary questions, or the “experimental propositions” of Birkhoff and von Neumann, may be regarded as the propositions of a non-classical logic. Moreover, its purpose is to attempt to give an independent motivation for the general program of understanding QM [58]. According to [12], the main operationalist lines of research are the following: the Geneva school, led by Jauch and Piron [159, 195, 197] and continued by Piron’s student Diederik Aerts [7, 9, 10, 11] in Brussels; the Amherst approach, which in the words of David Foulis and Charles Randall should be called “empirical logic” [122, 123, 124, 127]; and finally the Marburg approach, directed by Günter Ludwig [176, 177]. One of the main results of the operational line of research is due to Aerts in 1981. Orthodox QL faces a deep problem in treating composite systems. In fact, when considering two classical systems, it is meaningful to organize the whole set of propositions about them in the corresponding Boolean lattice, built up as the Cartesian product of the individual lattices. Informally one may say that each factor lattice corresponds to the properties of each physical system. But the quantum case is completely different. When two or more systems are considered together, the state space of their pure states is taken to be the tensor product of their Hilbert spaces. Given the Hilbert state spaces H1 and H2 as representatives of two systems, the pure states of the compound system are given by rays in the tensor product space H = H1 ⊗ H2. But it is not true, as a naive classical analogy would suggest, that any pure state of the compound system factorizes after the interaction into pure states of the subsystems, each evolving with its own Hamiltonian operator. It was shown, in a non-separability theorem by Aerts [7], that when one tries to repeat the classical procedure of taking the tensor product of the lattices of the properties of two systems in order to obtain the lattice of the properties of the composite, the procedure fails [5, 6, 8, 57, 125, 126]. Attempts to vary the conditions that define the product of lattices have been made, but in all cases the result is that the Hilbert lattice factorizes only when one of the factors is a Boolean lattice, or when the systems have never interacted. Using the operationalist approach, two Belgian students of Aerts, Bob Coecke and Sonja Smets, outlined a research program on dynamic QL [62, 60, 218] (see Section 5.2). c. Is Quantum Logic Empirical? During the late sixties and the beginning of the seventies, a radical philosophical view was initiated by David Finkelstein [120, 121] and Hilary Putnam [202, 203], arguing that logic is in a certain sense empirical. According to Putnam’s famous paper [202]: “Logic is as empirical as geometry. We live in a world with a non-classical logic.” For Putnam in that specific period, the elements of L(H) represent categorical properties that an object does or does not possess, independently of whether or not we look. Inasmuch as this picture of physical properties is confirmed by the empirical success of QM, this view means we must accept that the way in which physical properties actually hang together is not Boolean. Since logic is, for Putnam, very much the study of how physical properties actually hang together, he concludes that classical logic is simply mistaken: the distributive law is not universally valid.
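Putnam's conclusion can be verified directly in the lattice of subspaces of C². In the sketch below (our own encoding: subspaces are given by matrices of generators, and meets and joins are computed via projectors), p and q are the spans of the two basis vectors and r is the span of their superposition; the distributive law visibly fails:

```python
import numpy as np

# Failure of distributivity in the subspace lattice of C^2: for the atoms
# p = span(e1), q = span(e2) and r = span(e1 + e2),
# r ^ (p v q) = r while (r ^ p) v (r ^ q) = 0.

def join(A, B):
    """Span of the union of two sets of generators (columns)."""
    u, s, _ = np.linalg.svd(np.hstack([A, B]))
    return u[:, : np.sum(s > 1e-12)]           # orthonormal basis of the join

def meet(A, B):
    """Intersection of two subspaces: eigenvalue-2 eigenspace of P_A + P_B."""
    P = lambda V: V @ V.conj().T if V.size else np.zeros((2, 2))
    w, v = np.linalg.eigh(P(A) + P(B))
    return v[:, np.isclose(w, 2.0)]            # vectors lying in both

dim = lambda V: V.shape[1]

p = np.array([[1.0], [0.0]])
q = np.array([[0.0], [1.0]])
r = np.array([[1.0], [1.0]]) / np.sqrt(2)

print(dim(meet(r, join(p, q))))                # 1: r ^ (p v q) = r
print(dim(join(meet(r, p), meet(r, q))))       # 0: (r ^ p) v (r ^ q) = 0
```

d.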
Modal Interpretations The study of the modal character of QM was explicitly formalized in the seventies and eighties by a group of physicists and philosophers of science. Bas van Fraassen was the first to formally include the reasoning of modal logic in QM. He presented a modal interpretation (MI) of QL in terms of its semantical analysis [224, 225, 226, 227], the purpose of which was to clarify which properties, among those of the complete set structured in the lattice of subspaces of Hilbert space, pertain to the system. Van Fraassen’s position remains close to the tradition introduced by Niels Bohr and his interpretation of QM. Indeed, the relation of van Fraassen’s interpretation to the orthodox view can be seen as a consequence of maintaining a “conservative” position regarding the values of definite properties [228, p. 280]. In 1985, Simon Kochen presented his own modal version [166] at one of the famous conferences on the foundations of QM organized by Kalervo Laurikainen in Finland. This interpretation of QM also has a direct link to the discussions between the founding fathers of the theory. Von Weizsäcker and Thomas Görnitz referred specifically to it in a paper entitled “Remarks on S. Kochen’s Interpretation of Quantum Mechanics”: We consider it an illuminating clarification of the mathematical structure of the theory, especially apt to describe the measuring process. We would, however, feel that it means not an alternative but a continuation to the Copenhagen interpretation (Bohr and, to some extent, Heisenberg). [236, p. 357] Dennis Dieks’ interpretation can be considered as a continuation and a formal account of Bohr’s ideas on complementarity and measurement. Taking as a standpoint the work done by van Fraassen, Dieks went further in relation to the metaphysical presuppositions involved, making explicit the idea that MIs [94, 95, 96, 97] could also be considered from a realist stance as describing systems with properties. If considered from this perspective, MIs face the problem of finding an objective reading of the accepted mathematical formalism of the theory, a reading “in terms of properties possessed by physical systems, independently of consciousness and measurements (in the sense of human interventions).” [97] Thus the main problem they must face is the determination of the set of definite valued properties possessed by a physical system, avoiding the constraints imposed by the Kochen-Specker (KS) theorem [167] (for a discussion see [220]). Of course, the way in which MIs attack the problem rests on the distinction between the realms of possibility and actuality. […] a state, which is in the scope of quantum mechanics, gives us only probabilities for actual occurrence of events which are outside that scope. They can’t be entirely outside the scope, since the events are surely described if they are assigned probabilities; but at least they are not the same things as the states which assign the probability. In other words, the state delimits what can and cannot occur, and how likely it is—it delimits possibility, impossibility, and probability of occurrence—but does not say what actually occurs. [228, p. 279] So, van Fraassen distinguishes propositions about events and propositions about states.
Propositions about events are value-attributing propositions <A,σ>; they say that ‘observable A has a certain value belonging to a set σ.’ Propositions about states are of the form ‘the system is in a state of this or that type (in a pure state, in some mixture of pure states, in a state such that…).’ A state-attribution proposition [A,σ] gives a probability of the value-attribution proposition: it states that A will have a value in σ with a certain probability. Value-states are specified by stating which observables have values and what these values are. Dynamic-states state how the system will develop. This is endowed with the following interpretation: The interpretation says that, if a system X has dynamic state ρ at t, then the state-attributions [A,σ] which are true are those for which Tr(ρP_σ^A) = 1 [which have probability equal to one]. [P_σ^A is the projector onto the corresponding subspace.] About the value-attributions, it says that they cannot be deduced from the dynamic state, but are constrained in three ways: 1. If [A,σ] is true then so is the value-attribution <A,σ>: observable A has value in σ. 2. All the true value-attributions should together have Born probability 1. 3. The set of true value-attributions is maximal with respect to feature (2.) [228, p. 281] This interpretation informs the consideration of possibility in the realm of QL [228, chapter 9]. In fact, the probabilities are of events, each describable as ‘an observable having a certain value’, corresponding to value states. If w is a physical situation in which system X exists, then X has both a dynamic state ϕ and a value state λ, that is, w = <ϕ,λ>. A value state λ is a map assigning to each observable A a non-empty Borel set σ, such that it assigns {1} to 1_σ(A), where 1_σ is the characteristic function of the set σ of values. So, if the observable 1_σ(A) has value 1, then it is impossible that A has a value outside σ. The proposition <A,σ> = {w : λ(w)(A) ⊆ σ} assigns values to physical magnitudes. This is a value-attribution proposition and is read as ‘A (actually) has value in σ’. V is called the set of value attributions: V = {<A,σ> : A an observable and σ a Borel set}. The logic operations among value-attribution propositions are defined as: ¬<A,σ> = <A, ℝ∖σ>, <A,σ> ∨ <A,θ> = <A, σ∪θ>, <A,σ> ∧ <A,θ> = <A, σ∩θ> and ∧{<A,σi> : i ∈ ℕ} = <A, ∩{σi : i ∈ ℕ}>. With all this, V is the union of a family of Boolean sigma-algebras <A> with common unit and zero equal to <A, S(A)> and <A, ∅> respectively (S(A) being the spectrum of A). The Law of Excluded Middle is satisfied: every situation w belongs to q ∨ ¬q; but not the Law of Bivalence: a situation w may belong neither to q nor to ¬q. A dynamic state ϕ is a function from V into [0, 1] whose restriction to each Boolean sigma-algebra <A> is a probability measure. The relation between dynamic and value states is the following: ϕ and λ are a dynamic state and a value state respectively, only if there exist possible situations w and w′ such that ϕ = ϕ(w), λ = λ(w′). Here, ϕ is an eigenstate of A, with corresponding eigenvalue a, exactly if ϕ(<A,{a}>) = 1. The state-attribution proposition [A,σ] is defined as: [A,σ] = {w : ϕ(w)(<A,σ>) = 1} and means ‘A must have value in σ’. P denotes the set of state-attribution propositions: P = {[A,σ] : A an observable, σ a Borel set}. Partial order between them is given by [A,σ] ⊆ [A′,σ′] only if, for all dynamic states ϕ, ϕ(<A,σ>) ≤ ϕ(<A′,σ′>), and the logic operations are (well) defined as: ¬[A,σ] = [A, ℝ∖σ], [A,σ] ⊎ [A,θ] = [A, σ∪θ] and [A,σ] ∩ [A,θ] = [A, σ∩θ].
With all this, <P, ⊆, ¬> is an orthoposet. The orthoposet is formed by ‘pasting together’ a family of Boolean algebras whose operations coincide in overlapping areas. It may be enriched to approach the lattice of subspaces of Hilbert space. One may recognize a modal relation between both kinds of propositions. For example, one starts by denying the collapse in the measurement process and recognizing that the observable has one of the possible eigenvalues. Then it may be asked what may be inferred with respect to those values when one knows the dynamic state. The answer van Fraassen gives is that, in the case that ϕ(w) is an eigenstate of the observable A with eigenvalue a, then A actually does have value a. This means that in this case the measurement ‘reveals’ the value the observable already had. He generalizes this idea and postulates that [A,σ] implies <A,σ>. With this assumption and the rejection of an ignorance interpretation of the uncertainty principle, he is able to prove that [A,σ] = <A,σ>. The necessity operator is defined by □Q = {w : for all w′, if wRw′ then w′ ∈ Q}, where Q is any proposition and R is the relative possibility relation: w′ is possible relative to w exactly if, for all Q in V, if w is in Q then w′ is in Q. So, [A,σ] may be read as ‘necessarily, <A,σ>’. This says that the dynamic state assigns 1 to <A,σ> if and only if the value state that accompanies any relatively possible dynamic state makes <A,σ> true. Instead of the transitive possibility relation R, one may use an equivalence relation to define □′, the negated necessity operator. In this case, van Fraassen maintains that the map [A,σ] ↦ <A,σ> is an isomorphism of the posets <P, ⊆> and <V, ⊆> and, when orthocomplementation is defined, it becomes an isomorphism between the orthoposets. Thus, the logic of V is that of P, that is, QL. Endowed with these tools, van Fraassen gives an interpretation of the probabilities of the measurement outcomes which is in agreement with the Born rule. The MI proposed by Kochen and Dieks (K-D, for short) proposes to use the so-called biorthogonal decomposition theorem (also called the Schmidt theorem) in order to describe the correlations between the quantum system and the apparatus in the measurement process. From a realistic perspective, an interpretational issue which MIs need to take into account is the assignment of definite values to properties. But if we try to interpret eigenvalues which pertain to different sets of observables as the actual (pre-existent) values of the physical properties of a system, we are faced with all kinds of no-go theorems that preclude this possibility. Regarding the specific scheme of the MI, Bacciagaluppi and Clifton were able to derive KS-type contradictions in the K-D interpretation, which showed that one cannot extend the set of definite valued properties to non-disjoint sub-systems [26, 56]. In order to escape KS-type contradictions, Jeffrey Bub’s modal version recalls David Bohm’s interpretation and proposes to take some observable, R, as always possessing a definite value. In this way one can avoid KS contradictions and maintain a consistent discourse about statements which pertain to the sublattice determined by the preferred observable R.
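The KS-type obstructions just mentioned can be made concrete with the standard Mermin-Peres “magic square”; this is our illustrative choice, not the specific construction used in [26, 56]. Nine two-qubit observables are arranged so that the three in each row and in each column commute; every row multiplies to +I, while the columns multiply to +I, +I and -I, parities that no context-independent assignment of values ±1 can reproduce:

```python
import numpy as np

# Mermin-Peres magic square: a compact, state-independent KS-type argument.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

square = [
    [kron(Z, I2), kron(I2, Z), kron(Z, Z)],
    [kron(I2, X), kron(X, I2), kron(X, X)],
    [kron(Z, X), kron(X, Z), kron(Y, Y)],
]

for i in range(3):
    row = square[i][0] @ square[i][1] @ square[i][2]
    col = square[0][i] @ square[1][i] @ square[2][i]
    row_sign = "+I" if np.allclose(row, np.eye(4)) else "-I"
    col_sign = "+I" if np.allclose(col, np.eye(4)) else "-I"
    print(f"row {i}: {row_sign}   col {i}: {col_sign}")
# rows: +I, +I, +I; columns: +I, +I, -I. A context-independent assignment
# of +1/-1 values would force the product of all nine values to be both
# +1 (row-wise) and -1 (column-wise) -- the KS contradiction in miniature.
```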
As with van Fraassen’s and Vermaas and Dieks’ interpretations, Bub’s proposal distinguishes between dynamical states and property or value states, in his case with the purpose of interpreting the wave function as defining a Kolmogorovian probability measure over a restricted sub-algebra of the lattice L(H) of projection operators (corresponding to yes-no experiments) over the state space. It is this distinction between property states and dynamical states which, according to Bub, provides the modal character of the interpretation: The idea behind a ‘modal’ interpretation of quantum mechanics is that quantum states, unlike classical states, constrain possibilities rather than actualities—which leaves open the question of whether one can introduce property states […] that attribute values to (some) observables of the theory, or equivalently, truth values to the corresponding propositions. [47, p. 173] In precise terms, as L(H) does not admit a global family of compatible valuations, and thus not all propositions about the system are determinately true or false, probabilities defined by the (pure) state cannot be interpreted epistemically [47, p. 119]. But, if one chooses, for a given state |e>, a preferred observable R, these properties can be taken as determinate since the propositions associated with R, that is, with the projectors in which R decomposes, generate a Boolean algebra. Bub constructs the maximal sublattices D(|e>, R) ⊆ L(H) to which truth values can be assigned via a 2-valued homomorphism and demonstrates a uniqueness theorem that allows the construction of the preferred observable. In Bub’s proposal, a property state is a maximal specification of the properties of the system at a particular time, defined by a Boolean homomorphism from the determinate sublattice to the Boolean algebra of two elements. On the other hand, a dynamical state is an atom of L(H) that evolves unitarily in time following the Schrödinger equation. So, dynamical states do not coincide with property states. Given a dynamical state represented by the atom |e> ∈ L(H), one constructs the sublattice D(|e>, R) with Kolmogorovian probabilities defined over alternative subsets of properties in the sublattice. They are the properties of the system, and the probabilities defined by |e> evolve (via the evolution of |e>) in time. If the preferred observable is the identity operator I, the atoms in D(|e>, I) may be pictured as a ‘fan’ of projectors generated by the ‘handle’ |e> [46, p. 751], or an ‘umbrella’ with the state |e> again as the handle and the rays in (|e>)⊥ as the spines. When the observable R ≠ I, there is a set of handles {|e_ri>, i = 1…k} given by the nonzero projections of |e> onto the eigenspaces of R, and the spines are represented by all the rays in the orthogonal complement of the subspace generated by the handles. When dim(H) > 2, there are k 2-valued homomorphisms, each of which maps one of the handles onto 1 and the remaining atoms onto 0. The determinate sublattice, which changes with the dynamics of the system, is a partial Boolean algebra, that is, the union of a family of Boolean algebras pasted together in such a way that the maximum and minimum elements of each one, and eventually other elements, are identified and, for every n-tuple of pair-wise compatible elements, there exists a Boolean algebra in the family containing the n elements.
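A small numerical sketch of the ‘handles’ just described: given a dynamical state |e> and a preferred observable R, the handles are the normalized nonzero projections of |e> onto the eigenspaces of R, and they carry Born weights that sum to one. The concrete |e> and R below are illustrative choices of ours, not examples taken from Bub’s texts.

```python
import numpy as np

# Handles of the determinate sublattice D(|e>, R): nonzero projections of
# |e> onto the eigenspaces of the preferred observable R, with Born weights.

R = np.diag([1.0, 1.0, 2.0])                  # degenerate eigenvalue 1, simple 2
e = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)    # the dynamical state |e>

handles, weights = [], []
for val in np.unique(np.diag(R)):
    basis = np.eye(3)[:, np.isclose(np.diag(R), val)]   # eigenspace basis
    p = basis @ basis.T @ e                              # project |e> onto it
    if np.linalg.norm(p) > 1e-12:
        handles.append(p / np.linalg.norm(p))            # a normalized handle
        weights.append(np.linalg.norm(p) ** 2)           # its Born weight

print(len(handles))                   # k = 2 handles for this |e> and R
print(np.isclose(sum(weights), 1.0))  # True: Born probabilities over handles
```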
The possibility of constructing a probability space, with respect to which the Born probabilities generated by |e> can be thought of as measures over subsets of property states, depends on the existence of sufficiently many property states defined as 2-valued homomorphisms over D(|e>, R). This is guaranteed by a uniqueness theorem that characterizes D(|e>, R) [47, p. 126]. Thus constructed, the structure avoids KS-type theorems. Then, given a system S and a measuring apparatus M, […] if some quantity R of M is designated as always determinate, and M interacts with S via an interaction that sets up a correlation between the values of R and the values of some quantity A of S, then A becomes determinate in the interaction. Moreover, the quantum state can be interpreted as assigning probabilities to the different possible ways in which the set of determinate quantities can have values, where one particular set of values represents the actual but unknown values of these quantities. [46, p. 750] The problem with this interpretation is that, in the case of an isolated system, there is no single element in the formalism of QM that allows us to choose one observable R rather than another. This is why the move seems flagrantly ad hoc. Were we dealing with an apparatus, there would be a preferred observable, namely the pointer position; but the quantum wave function contains in itself mutually incompatible representations (choices of apparatuses), each of which provides non-trivial information about the state of affairs. The Bohmian proposal of Bub has been extended by Guido Bacciagaluppi and Michael Dickson in their atomic version of the MI [27]. The present authors have also contributed to the understanding of modality in the context of orthodox QL [102, 103, 104, 105]. Several conclusions can be drawn from our investigation. We started our analysis with a question regarding the contextual aspect of possibility. As is well known, the KS theorem does not talk about probabilities, but rather about the constraints of the formalism on actual definite valued properties considered from multiple contexts. What we found via the analysis of possible families of valuations is that a theorem can be derived which we called, for obvious reasons, the Modal KS (MKS) theorem, and which proves that quantum possibility, contrary to classical possibility, is also contextually constrained [102]. This means that, regardless of its use in the literature, quantum possibility is not classical possibility. In a paper written in 2014 [88], we concentrated on the analysis of actualization within the orthodox frame and interpreted, following the structure, the logical realm of possibility in terms of ontological potentiality. e. The Czech-Slovakian and Italian Schools The study of the structure of tensor products [57, 199, 112, 113, 114] motivated a fruitful development of different algebraic structures that could represent quantum propositions, which in turn became a line of investigation by itself. Beginning with the proposal of test spaces by Foulis and Randall [122, 123, 124, 204, 205, 206, 207], which are related to orthoalgebras, the theory of structures such as orthomodular lattices, partial Boolean algebras, orthomodular posets, effect algebras, quantum MV-algebras and the like became widely discussed.
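The simplest of these weakened structures to exhibit concretely is the effect algebra of Hilbert space effects, described in the next paragraph: operators E with 0 ≤ E ≤ I, a partial sum E ⊕ F defined exactly when E + F ≤ I, and unsharp probabilities given by the Born rule tr(ρE). A minimal sketch (the matrices are illustrative):

```python
import numpy as np

# Effects: operators with 0 <= E <= I. The effect-algebra sum E (+) F is a
# partial operation, defined only when E + F <= I; probabilities are tr(rho E).

def is_effect(E, tol=1e-12):
    eig = np.linalg.eigvalsh(E)
    return eig.min() >= -tol and eig.max() <= 1 + tol

def oplus(E, F):
    """Partial sum of the effect algebra; undefined if E + F exceeds I."""
    if not is_effect(E + F):
        raise ValueError("E (+) F undefined: E + F exceeds the identity")
    return E + F

E = np.array([[0.7, 0.0], [0.0, 0.2]])    # an unsharp 'yes' (not a projector)
F = np.eye(2) - E                         # its orthosupplement E'
rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # a pure state as a density matrix

print(is_effect(E), is_effect(F))         # True True
print(np.trace(rho @ oplus(E, F)))        # 1.0: probabilities of E, E' sum to one
```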
The Czech school led by Pavel Pták, the Slovak school initiated by Anatolij Dvurečenskij and Sylvia Pulmannová, and the Italian school organized by Enrico Beltrametti and Maria Luisa Dalla Chiara and continued by Roberto Giuntini were pioneers in the subject; see for example [32, 33, 36, 37, 52, 51, 54, 78, 79, 81, 115, 113, 130, 128, 129, 145, 141, 142, 148, 147, 168, 162, 169, 198, 200, 237, 238]. The weakened structures allow consideration of unsharp propositions related, not to projections, but to the elements of the more general set of bounded linear operators—called effects—over which the probability measure given by the Born rule may be defined. And this in turn gave rise to the consideration of paraconsistent QL, partial QL and Łukasiewicz QL [79]. An important line of research in the subject of quantum structures is the application of QL methods to languages of information processing and, more specifically, to quantum computational logic (QCL) [53, 80, 101, 82, 135, 136, 138, 143, 149, 193, 192]. In this way several logical systems associated with quantum computation were developed. They provide a new form of quantum logic strongly connected with the fuzzy logic of continuous t-norms [151]. The groups in Florence, directed by Dalla Chiara, and Cagliari, directed by Giuntini, have also developed different languages for quantum computation. A sentence in QL may be interpreted as a closed subspace of H. Instead, the meaning of an elementary sentence in QCL is a quantum information quantity encoded in a collection of qubits—unit vectors belonging to the tensor product of two-dimensional complex Hilbert spaces—or qmixes—positive semi-definite Hermitian operators of trace one over Hilbert space. Conjunction and disjunction are not associated with the join and meet lattice operations. Instead, the number of conjunctions and disjunctions involved in a sentence determines the dimension of the space of its ‘meanings’, the dimension varying with the number and nature of the logical connectives; thus the ‘meaning’ of the sentence reflects the logical form of the sentence itself (for a complete discussion see [80]). f. The Brazilian School Newton da Costa and Décio Krause at Florianópolis have begun investigations on Non-Reflexive Logics (NRL) and Paraconsistent Logics (PL) related to several foundational issues regarding QM. On the one hand, NRL is, in a wide sense, a logic in which the relation of identity (or equality) is restricted, eliminated, or replaced, at least in part, by a weaker relation, or employed together with a new non-reflexive implication or equivalence relation. In classical logic, one of the basic principles is the Principle of Identity (PI), expressing the reflexive property of identity, whose usual formulation is x = x or ∀x (x = x), where x is a first-order variable. There are other versions in higher-order logic, in which higher-order variables appear. There are also propositional formulations of the principle: p → p (p implies p) or p ↔ p (p is equivalent to p), where p is a propositional variable. If propositional quantification is allowed, then we have other forms of the principle: ∀p (p → p) as well as ∀p (p ↔ p). Some of the above principles are not in general valid in non-reflexive logics. They are totally or partially eliminated, restricted, or not applied to the relation that is employed instead of identity. Several of these principles are the motivations for the development of non-reflexive logics.
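The quantum pressure on these principles, discussed next, comes from the indistinguishability of identical particles: permutation-invariant states deprive particle labels of any empirical content. A minimal sketch (the state vectors are the standard symmetrized and antisymmetrized combinations; the encoding is ours):

```python
import numpy as np

# Permutation invariance: SWAP exchanges the two tensor factors, and the
# (anti)symmetrized two-particle states are its eigenvectors, so no
# measurement distinguishes 'particle 1' from 'particle 2'.

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

sym  = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)   # bosonic
anti = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)   # fermionic

print(np.allclose(SWAP @ sym, sym))     # True: swapping labels changes nothing
print(np.allclose(SWAP @ anti, -anti))  # True: only an unobservable sign
# This label-insensitivity is what puts pressure on the classical
# Principle of Identity when applied to quantum 'objects'.
```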
The application of the PI is controversial in the quantum domain not only due to the so-called “indistinguishability of quantum particles” but, more deeply, when applying it to “something” that does not respect the classical definition of an object. In particular, the search for a set theory that could be adequate for QM goes as far back as the 1974 Congress of the American Mathematical Society, which was devoted to evaluating the status of Hilbert’s problems, posed in Paris in 1900. In the 1974 Congress, Manin proposed, as one of the new problems for the next century: […] we should consider possibilities of developing a totally new language to speak about infinity. […] I would like to point out that this [the concept of set] is rather an extrapolation of common-place physics, where we can distinguish things, count them, put them in order, etc. New quantum physics has shown us models of entities with quite different behaviour. Even ‘sets’ of photons in a looking-glass box, or electrons in a nickel piece are much less Cantorian than the ‘set’ of grains of sand. [181] The “new language to speak about infinity” is obviously a new ‘set’ theory, since set theory is usually known as “the theory of the (actual) infinite.” For a discussion about the necessity of a new set theory see for example [171, 134, 170, 77]. Within this context, the weakening of the concept of identity—substituted by that of indiscernibility—allows the development of non-reflexive logics in the sense described above [68, 73, 172, 75]. There are also different approaches to the logic related to quantum set theories. Gaisi Takeuti proposed a quantum set theory developed in a universe of sets valued in the lattice of projections [221, 222], and Satoko Titani formulated a lattice-valued logic corresponding to general complete lattices, developed in classical set theory based on classical logic [223]. On the other hand, PL are the logics of inconsistent but non-trivial theories. The origins of PL go back to the first systematic studies dealing with the possibility of rejecting the Principle of Non-Contradiction (PNC). PL was elaborated, independently, by Stanisław Jaśkowski in Poland and by Newton da Costa in Brazil, around the middle of the last century (on PL, see, for example, [72]). A theory T founded on the logic L, which contains a symbol for negation, is called inconsistent if it has among its theorems a sentence A and its negation ¬A; otherwise, it is said to be consistent. T is called trivial if any sentence of its language is also a theorem of T; otherwise, T is said to be non-trivial. In classical logic and in most usual logics, a theory is inconsistent if, and only if, it is trivial. L is paraconsistent when it can be the underlying logic of inconsistent but non-trivial theories. Clearly, no classical logic is paraconsistent. In the context of QM, da Costa and Krause have put forward [71] a PL in order to provide a suitable formal scheme to consider the notion of complementarity introduced in 1927 by Niels Bohr during his famous ‘Como Lecture’. The notion of complementarity was developed by Bohr in order to deal with the contradictory wave and corpuscular representations found in the double-slit experiment (see for example [174]).
According to Bohr: “We must, in general, be prepared to accept the fact that a complete elucidation of one and the same object may require diverse points of view which defy a unique description.” The proposal of da Costa and Krause has been further analyzed by Jean-Yves Béziau [39, 40], taking into account the Square of Opposition (see Section 5.4 below). 5. Ongoing Developments and Debates a. New Quantum Structures The importance of quantum structures as a field of research gave rise to its own association: the International Quantum Structures Association (IQSA). As Dvurečenskij relates in the Foreword to the Handbook of Quantum Logic and Quantum Structures: […] in the early nineties, a new organization called International Quantum Structures Association (IQSA) was founded. IQSA gathers experts on quantum logic and quantum structures from all over the world under its umbrella. It organizes regular biannual meetings: Castiglioncello 1992, Prague 1994, Berlin 1996, Liptovsky Mikulas 1998, Cesenatico 2001, Vienna 2002, Denver 2004, Malta 2006. In spring 2005, Dov Gabbay, Kurt Engesser, Daniel Lehmann and Jane Spurr had an excellent idea—to ask experts on quantum logic and quantum structures to write long chapters for the Handbook of Quantum Logic and Quantum Structures. [117, p. viii] In fact, in the subject of quantum structures, MV-algebras, effect algebras, pseudo-effect algebras and related structures are being developed in relation to their use in QM. See [55, 116, 131, 132, 133, 184, 201], just to cite a few examples. b. Dynamical Logics, Category Theory and Quantum Computation As mentioned above (see Section 4.2), Smets and Coecke initiated a line of research that considers the possibility of regarding QL in a dynamical manner. This research is connected to the tradition of computer science, which is interested in the semantic notion of process and thinks about the quantum realm in terms of change, instead of taking concepts like ‘particle’, ‘system’, ‘property’ and so on as fundamental. The standpoint of this approach is the observation that QL is essentially a dynamical logic, that it is about actions rather than propositions [30]. It is also connected to the interpretation of the ‘Sasaki hook’—namely, the quantum implication that is closest to the classical one—which may be understood in terms of a dynamic modality instead of in terms of deduction; in fact, it does not satisfy the deduction theorem [60]. Smets together with Alexandru Baltag have proposed two axiomatizations of the logic of quantum actions [218]. One of them takes the notion of action as fundamental and axiomatizes the underlying algebra, giving a quantale [22, 59]. The other takes the notion of state as fundamental and represents actions as relations between states. Contrary to orthomodular QL [78], these axiomatizations fulfill completeness with respect to infinite-dimensional Hilbert spaces and have applications in computational science [28]. In fact, the application to computational science, and more broadly to information processing, needs to manage composite systems, one of the profound difficulties that face orthodox QL. Also, the relation between category theory and QL is being explored from different perspectives.
On the one side, there is a line of investigation initiated by Chris Isham and continued by Andreas Döring with Chris Heunen, Klaas Landsman and Bas Spitters among others, whose main interest is to link the construction of a physical theory with the representation, in a topos [146], of the formal language attached to the theory [107, 108, 109, 110, 157, 111, 154, 50]. They make claims about the necessity of reviewing the basic suppositions that are taken for granted, for example, the nature of space-time, the use of real numbers as values of physical quantities and the meaning of probability. From a logical point of view, contrary to the intractable QL, any topos in which the physical theory is represented comes with an intrinsic intuitionistic logic that is obviously more tractable. Moreover, compound systems also find their place in the topos approach [111]. Classical theories are included in this new formalization, and for all of them the corresponding topos is that of sets, endowed with classical logic as a trivial intuitionistic one. Elias Zafiris and Vassilios Karakostas are also pursuing new research in categorical semantics [239]. On the other side, the line of investigation initiated at Oxford by Samson Abramsky and continued by Coecke among others has proposed an axiomatization which may be useful for managing the formal language of physical processes involved in new quantum technologies such as quantum computation and teleportation. Quantum computers exploit the existence of superpositions to drastically decrease the time and resources required to deal with certain problems such as triangle-finding, integer factorization or searching for an entry in an unordered list [164, 191]. Teleportation uses non-separability to safely transmit information from one place to another by means of an entangled state and a classical communication channel [45]. The categorical approach of the Oxford group uses monoidal categories [1, 2, 3, 4, 67, 165] and simple diagrams to view quantum processes and composite systems in a consistent manner. They apply these tools to research in the subject of computing semantics [63, 64, 65], in particular in the subject of linear logic [140], which is essential for computing science. Cristina and Amilcar Sernadas in Lisbon are also working on the connection between category theory and linear logic [182, 183, 49]. Research on computational semantics is being developed in connection with epistemic logics by members of the Italian group. For example, they model operators such as “to understand” or “to know” by irreversible quantum operations, thus allowing us to reflect on characteristic limitations in the process of acquiring information [34, 35]. The relation between quantum structures and epistemic logics is also being studied by a group in Amsterdam. They are applying a modal dynamic-epistemic QL for reasoning about quantum algorithms and, in general, for considering quantum systems as codifying actions of information production and processing [29, 30, 31]. The dynamics of concepts as studied by cognitive science is also being considered with the aid of quantum structures. In fact, D. Aerts and co-workers have applied the formalism of QM to modeling the combination of concepts, showing the indeterministic and holistic characters of this process [13, 16, 17, 20, 21]. This approach has technological applications in connection with quantum computation and robotics [18, 19]. c.
Paraconsistency and Quantum Superpositions As remarked by Coecke, the meaning of the superposition principle might be the key to understanding QM: Birkhoff and von Neumann crafted quantum logic in order to emphasize the notion of quantum superposition. In terms of states of a physical system and properties of that system, superposition means that the strongest property which is true for two distinct states is also true for states other than the two given ones. In order-theoretic terms this means, representing states by the atoms of a lattice of properties, that the join p ∨ q of two atoms p and q is also above other atoms. From this it easily follows that the distributive law breaks down: given atoms p, q with r < p ∨ q we have r ∧ (p ∨ q) = r while (r ∧ p) ∨ (r ∧ q) = 0 ∨ 0 = 0. Birkhoff and von Neumann, as well as many others, believed that understanding the deep structure of superposition is the key to obtaining a better understanding of quantum theory as a whole. [66] In line with this intuition, in [74], one of the authors of this paper together with N. da Costa argued in favor of the possibility of considering quantum superpositions in terms of a PL approach. It was claimed that, even though most interpretations of QM attempt to escape contradictions, there are many hints—coming mainly from present technical and experimental developments in QM—that indicate it could be worthwhile to engage in research of this kind. Arenhart and Krause [23, 24, 25] have raised several arguments against the paraconsistent approach to quantum superpositions, which have been further analyzed in [86]. Recently, some new proposals to consider quantum superpositions from a logical perspective have been put forward [76, 173]. d. Contradiction and Modality in the Square of Opposition In Aristotelian classical logic, categorical propositions are divided into Universal Affirmative, Universal Negative, Particular Affirmative and Particular Negative. The possible relations between two of the mentioned types of propositions are encoded in the square of opposition. The square of opposition has been considered, in relation to QL, as a useful tool to identify paraconsistent negations [38, 40]. The square also expresses the essential properties of the monadic first-order quantifiers ∃ and ∀ which, in an algebraic approach, can be represented within the frame of monadic Boolean algebras by considering quantifiers as modal operators acting on a Boolean algebra [150]. This representation is called the modal square of opposition. An extension of the square to the case in which the underlying structure is replaced by the algebra of QL has been provided in [137], and it may be useful to identify paraconsistent negations in the structure of QM (see also [89] for discussion). The square of opposition has also recently been considered in relation to the meaning of quantum superpositions and the interpretation of the terms that compose them (Section 5.3). On the one hand, according to [74], it has been argued that one might consider some of the terms that compose the superposition as contradictory. On the other hand, Arenhart and Krause [23, 24, 25] have defended the idea that, taking into account the square of opposition, contrariety is a more suitable notion to describe the physical meaning of superpositions (see also [85, 86, 87]). e.
Quantum Probability The subject of probability in QM appears in the early discussions and analyses provided by the founding fathers of the theory. On the one hand, there is the question of its interpretation, already stressed by Schrödinger in a letter to Einstein: “It seems to me that the concept of probability is terribly mishandled these days. Probability surely has as its substance a statement as to whether something is or is not the case—of an uncertain statement, to be sure. But nevertheless it has meaning only if one is indeed convinced that the something in question quite definitely is or is not the case. A probabilistic assertion presupposes the full reality of its subject.” [47, p. 115]. On the other hand, one faces the problem of its very definition: the Born rule was incorporated in the axiomatization of QM as a noncommutative measure over the lattice of events by von Neumann in the early thirties, but this measure needs a modular lattice to be well posed, while L(H) is merely an orthomodular one. As Miklos Rédei states: To see why von Neumann insisted on the modularity of quantum logic, one has to understand that he wanted quantum logic to be not only the propositional calculus of a quantum mechanical system but also wanted it to serve as the event structure in the sense of probability theory. In other words, what von Neumann aimed at was establishing the quantum analogue of the classical situation, where a Boolean algebra can be interpreted both as the Tarski-Lindenbaum algebra of a classical propositional logic and as the algebraic structure representing the random events of a classical probability theory, with probability being an additive normalized measure on the Boolean algebra. [212, p. 157] In fact, the difficulties with a rigorous definition of probability were well known to von Neumann [212]. When he was invited to the 1954 Congress of Mathematicians held in Amsterdam, dedicated to unsolved problems in mathematics—in a similar flavor to the 1900 Paris meeting at which Hilbert gave his famous lecture—von Neumann sketched his (never delivered) lecture on the role of continuous rings of operators for a better understanding of QM, QL and quantum probability [211]. The difficulties with the definition of a “good measure” over the Hilbert lattice made von Neumann abandon the orthodox formalism of QM in Hilbert space, to which he himself had contributed a great deal, and face the classification of the factors and their dimension functions, which led to the subject of von Neumann algebras. Nowadays, the definition of probability still faces various challenges and the subject is under debate. On the one hand, the type II₁ factor (the one whose projection lattice is a continuous geometry, and thus a modular lattice as required by the definition of the probability measure) is not an adequate structure to represent quantum events. On the other hand, there exist different candidates for defining conditional probability and there is no unique criterion for choosing among them [81, 209]. With respect to interpretation, the frequency interpretation is untenable for all non-commutative probabilities [213]. As Rédei remarks, “yet, a satisfactory interpretation of non-commutative measure as probability and the relation of this non-commutative (quantum) probability to (quantum) logic is still lacking.” [211] f. Potentiality and Actuality As we have discussed above, QL has been related to actuality since its origin.
The operationalist perspective of Birkhoff and von Neumann was implicitly related to the measurement problem (MP). In QM “a complete mathematical description of a physical system S does not in general enable one to predict with certainty the result of an experiment.” [41] As a matter of fact, QM describes the state mathematically in terms of a superposition; thus the question arises: why do we observe a single result (corresponding to a single eigenstate) instead of something related to a superposition of them? Although the MP accepts the fact that there is something very weird about quantum superpositions, leaving aside their problematic meaning, it focuses on the justification of the actualization process. Taking the single outcome as its standpoint, it asks how we get to the actual result from the multiplicity of possible states. The MP is thus an attempt to justify why, regardless of QM, we only observe actuality. The problem places the result at the origin: what needs to be justified is the already known answer. QL distinguishes in general between ‘actual’ properties and ‘possible’ or ‘potential’ ones, opening the door to a discussion of a realm of existence beyond actuality. The notion of potentiality was introduced into QM by Heisenberg, and later developed and related, through the operationalist approach to QL, by Piron in [196] and more recently by Aerts in [14, 15] (see also [217] for discussion). Within such interpretations the collapse is accepted, and potentialities are defined in terms of their “becoming actual.” A different notion of potentiality, which attempts to escape the limits of actuality, has also been developed in [83, 84]. According to this approach one should turn things upside down: we do not need to explain the actual via the potential but rather, we need to use the actual in order to develop the potential. From different perspectives, the development of the notion of potentiality in QM is related to an attempt to provide a realistic physical representation of the theory going beyond the discourse about mere “actual results.” Such proposals are in line with trying to understand what a quantum superposition is, the main theoretical tool that has opened the door to the most outstanding technological developments and experiments in early 21st-century physics. 6. Final Remarks Quantum logic has deeply influenced our understanding of the formal structure of QM. It has also played an important role within the foundational debates about the theory. In the early 21st century, the rise of a new technological era grounded on the processing of quantum information is posing original questions and challenges to all researchers close to the field. In this respect, the ongoing research in QL (Section 5) can prove to be an important guide in trying to advance our comprehension of the phenomena implied by these technologies. 7. References and Further Reading [1] Abramsky, S., 1996, “Retracing some paths in process algebra”, in Proceedings of CONCUR 96, Lecture Notes in Computer Science, vol. 1119, 1-17, Springer-Verlag, Berlin. [2] Abramsky, S. and Coecke, B., 2008, “Categorical Quantum Mechanics”, in Handbook of Quantum Logic and Quantum Structures, vol. II, K. Engesser, D. M. Gabbay and D. Lehmann (Eds.), Elsevier, Amsterdam. [3] Abramsky, S. and Coecke, B., 2004, “A Categorical Semantics of Quantum Protocols”, in LICS Proceedings, 415-425. [4] Abramsky, S. and Duncan, R.W., 2006, “A Categorical Quantum Logic”, Mathematical Structures in Computer Science, 16, 469-489. [5] Aerts, D.
and Daubechies, I., 1979, “A characterization of subsystems in physics”, Letters in Mathematical Physics, 3, 11-17. [6] Aerts, D. and Daubechies, I., 1979, “A mathematical condition for a sublattice of a propositional system to represent a physical subsystem, with a physical interpretation”, Letters in Mathematical Physics, 3, 19-27. [7] Aerts, D., 1981, The One and the Many: Towards a Unification of the Quantum and Classical Description of One and Many Physical Entities, Doctoral dissertation, Brussels Free University, Belgium. [8] Aerts, D., 1981, “Description of compound physical systems and logical interaction of physical systems”, in Current Issues in Quantum Logic, E.G. Beltrametti and B.C. van Fraassen (Eds.), pp. 381-405, Kluwer Academic Publishers, Dordrecht. [9] Aerts, D., 1982, “Description of many physical entities without the paradoxes encountered in quantum mechanics”, Foundations of Physics, 12, 1131-1170. [10] Aerts, D., 1983, “Classical theories and non-classical theories as a special case of a more general theory”, Journal of Mathematical Physics, 24, 2441-2453. [11] Aerts, D., 1984, “Construction of a structure which makes it possible to describe the joint system of a classical and a quantum system”, Reports in Mathematical Physics, 20, 421-428. [12] Aerts, D., 1999, “Foundations of quantum physics: a general realistic and operational approach”, International Journal of Theoretical Physics, 38, 289-358. [13] Aerts, D., 2009, “Quantum structure in cognition”, Journal of Mathematical Psychology, 53, 314-348. [14] Aerts, D., 2009, “Quantum particles as conceptual entities: a possible explanatory framework for quantum theory”, Foundations of Science, 14, 361-411. [15] Aerts, D., 2010, “A potentiality and conceptuality interpretation of quantum mechanics”, Philosophica, 83, 15-52. [16] Aerts, D., 2011, “Quantum interference and superposition in cognition. Development of a theory for the disjunction of concepts”, in Worldviews, Science and Us: Bridging Knowledge and its Implications for Our Perspectives of the World, D. Aerts, J. Broekaert, B. D’Hooghe and N. Note (Eds.), pp. 169-211, World Scientific, Singapore. [17] Aerts, D., Broekaert, J. and Gabora, L., 2011, “A case for applying an abstracted quantum formalism to cognition”, New Ideas in Psychology, 29, 136-146. [18] Aerts, D., Czachor, M. and Sozzo, S., 2011, “Quantum interaction approach in cognition, artificial intelligence and robotics”, in Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies, V. Privman and V. Ovchinnikov (Eds.), pp. 35-40. [19] Aerts, D. and Sozzo, S., 2011, “Quantum structures in cognition: why and how concepts are entangled”, in Quantum Interaction, Lecture Notes in Computer Science, 7052, 116-127. [20] Aerts, D., Gabora, L. and Sozzo, S., 2013, “Concepts and their dynamics: a quantum-theoretic modeling of human thought”, Topics in Cognitive Science, 5, 737-772. [21] Aerts, D. and Sozzo, S., 2014, “Quantum entanglement in concept combination”, International Journal of Theoretical Physics, 53, 3587-3603. [22] Amira, H., Coecke, B. and Stubbe, I., 1998, “How quantales emerge by introducing induction within the operational approach”, Helvetica Physica Acta, 71, 554-572. [23] Arenhart, J. R. and Krause, D., 2014, “Oppositions in Quantum Mechanics”, in New Dimensions of the Square of Opposition, J.-Y. Béziau and K. Gan-Krzywoszynska (Eds.), pp. 337-356, Philosophia Verlag, Munich. [24] Arenhart, J. R.
and Krause, D., 2014, “Contradiction, Quantum Mechanics, and the Square of Opposition”, Logique et Analyse. [25] Arenhart, J. R. and Krause, D., 2015, “Potentiality and Contradiction in Quantum Mechanics”, in The Road to Universal Logic (volume II), A. Koslow and A. Buchsbaum (Eds.), pp. 201-211, Springer, Berlin. [26] Bacciagaluppi, G., 1995, “A Kochen-Specker Theorem in the Modal Interpretation of Quantum Mechanics”, International Journal of Theoretical Physics, 34, 1205-1216. [27] Bacciagaluppi, G. and Dickson, W. M., 1997, “Dynamics for Density Operator Interpretations of Quantum Theory”, Preprint. (quant-ph/arXiv:9711048) [28] Baltag, A. and Smets, S., 2004, “The logic of quantum programs”, in Proceedings of the 2nd International Workshop on Quantum Programming Languages, P. Selinger (Ed.), pp. 39-56, TUCS General Publication. [29] Baltag, A. and Smets, S., 2010, “Correlated knowledge: an epistemic-logic view on quantum entanglement”, International Journal of Theoretical Physics, 49, 3005-3021. [30] Baltag, A. and Smets, S., 2012, “The dynamic turn in quantum logic”, Synthese, 186, 753-773. [31] Baltag, A., Bergfeld, J., Kishida, K., Sack, J., Smets, S. and Zhong, S., 2014, “PLQP and company: decidable logics for quantum algorithms”, International Journal of Theoretical Physics, 53, 3628-3647. [32] Beltrametti, E.G. and Cassinelli, J., 1981, The Logic of Quantum Mechanics, Addison-Wesley, Reading. [33] Beltrametti, E.G. and van Fraassen, B.C. (Eds.), 1981, Current Issues in Quantum Logic, Plenum, New York. [34] Beltrametti, E., Dalla Chiara, M.L., Giuntini, R., Leporini, R. and Sergioli, G., 2014, “A quantum computational semantics for epistemic logical operators. Part I: epistemic structures”, International Journal of Theoretical Physics, 53, 3279-3292. [35] Beltrametti, E., Dalla Chiara, M.L., Giuntini, R., Leporini, R. and Sergioli, G., 2014, “A quantum computational semantics for epistemic logical operators. Part II: semantics”, International Journal of Theoretical Physics, 53, 3293-3307. [36] Bennett, M.K. and Foulis, D.J., 1995, “Phi-symmetric effect algebras”, Foundations of Physics, 25, 1699-1722. [37] Bennett, M.K. and Foulis, D.J., 1997, “Interval algebras and unsharp quantum logics”, Advances in Mathematics, 19, 200-215. [38] Béziau, J.-Y., 2003, “New light on the square of opposition and its nameless corner”, Logical Investigations, 10, 218-232. [39] Béziau, J.-Y., 2012, “The Power of the Hexagon”, Logica Universalis, 6, 1-43. [40] Béziau, J.-Y., 2014, “Paraconsistent logic and contradictory viewpoints”, Revista Brasileira de Filosofia. [41] Birkhoff, G. and von Neumann, J., 1936, “The logic of quantum mechanics”, Annals of Mathematics, 37, 823-843. [42] Bitbol, M., 1996, Mécanique Quantique, Flammarion, Paris. [43] Bitbol, M., 1998, “Some steps towards a transcendental deduction of quantum mechanics”, Philosophia Naturalis, 35, 253-280. [44] Bohr, N., 1985, Collected Works, vol. 6, J. Kalckar (Ed.), North-Holland, Amsterdam. [45] Bouwmeester, D., Ekert, A.K. and Zeilinger, A., 2001, The Physics of Quantum Information: Quantum Cryptography, Quantum Teleportation, Quantum Computation, Springer, Berlin. [46] Bub, J., 1992, “Quantum Mechanics Without the Projection Postulate”, Foundations of Physics, 22, 737-754. [47] Bub, J., 1997, Interpreting the Quantum World, Cambridge University Press, Cambridge. [48] Busch, P., Pfarr, J., Ristig, M. and Stachow, E.-W., 2010, “Quantum-Matter-Spacetime: Peter Mittelstaedt’s Contributions to Physics and Its Foundations”, Foundations of Physics, 40, 1163-1170.
[49] Caleiro, C., Mateus, P., Sernadas, A. and Sernadas, C., 2006, “Quantum Institutions”, in Algebra, Meaning, and Computation: Essays Dedicated to Joseph A. Goguen on the Occasion of His 65th Birthday, K. Futatsugi, J.-P. Jouannaud and J. Meseguer (Eds.), pp. 50-64, Springer-Verlag, Berlin. [50] Caspers, M., Heunen, C., Landsman, N. and Spitters, B., 2009, “Intuitionistic quantum logic of an n-level system”, Foundations of Physics, 39, 731-759. [51] Cattaneo, G. and Nisticò, G., 1986, “Brouwer-Zadeh posets and three-valued Łukasiewicz posets”, Fuzzy Sets and Systems, 33, 165-190. [52] Cattaneo, G. and Laudisa, F., 1994, “Axiomatic unsharp quantum theory (from Mackey to Ludwig and Piron)”, Foundations of Physics, 24, 631-683. [53] Cattaneo, G., Dalla Chiara, M.L., Giuntini, R. and Leporini, R., 2004, “An unsharp logic from quantum computation”, International Journal of Theoretical Physics, 43, 1803-1817. [54] Cattaneo, G., Dalla Chiara, M.L., Giuntini, R. and Paoli, F., 2009, “Quantum Logic and Nonclassical Logics”, in Handbook of Quantum Logic and Quantum Structures, K. Engesser, D. Gabbay and D. Lehmann (Eds.), pp. 127-226, Elsevier, Amsterdam. [55] Chajda, I. and Kühr, J., 2012, “A generalization of effect algebras and ortholattices”, Mathematica Slovaca, 62, 1045-1062. [56] Clifton, R.K., 1996, “The Properties of Modal Interpretations of Quantum Mechanics”, British Journal for the Philosophy of Science, 47, 371-398. [57] Coecke, B., 2000, “Structural characterization of compoundness”, International Journal of Theoretical Physics, 39, 581-590. [58] Coecke, B., Moore, D.J. and Wilce, A., 2000, “Operational Quantum Logic: An Overview”, in Current Research in Operational Quantum Logic: Algebras, Categories, Languages, B. Coecke, D.J. Moore and A. Wilce (Eds.), pp. 1-36, Kluwer Academic Publishers, Dordrecht. [59] Coecke, B., Moore, D.J. and Stubbe, I., 2001, “Quantaloids describing causation and propagation of physical properties”, Foundations of Physics Letters, 14, 357-367. [60] Coecke, B. and Smets, S., 2004, “The Sasaki-hook is not a [static] implicative connective but induces a backward [in time] dynamic one that assigns causes”, International Journal of Theoretical Physics, 43, 1705-1736. [61] Coecke, B., Moore, D.J. and Wilce, A. (Eds.), 2000, Current Research in Operational Quantum Logic: Algebras, Categories, Languages, Kluwer Academic Publishers, Dordrecht. [62] Coecke, B., Moore, D.J. and Smets, S., 2004, “Logic of dynamics & dynamics of logic”, in Logic, Epistemology, and the Unity of Science, S. Rahman and J. Symons (Eds.), pp. 527-555, Kluwer Academic Publishers, Dordrecht. [63] Coecke, B., 2005, “Kindergarten Quantum Mechanics”, in Proceedings of QTRF-III, G. Adenier, A.Yu. Khrennikov and T.M. Nieuwenhuizen (Eds.), pp. 81-98, AIP Proceedings, New York. [64] Coecke, B., 2010, “Quantum Picturalism”, Contemporary Physics, 51, 59-83. [65] Coecke, B., Duncan, R., Kissinger, A. and Wang, Q., 2012, “Strong Complementarity and Non-locality in Categorical Quantum Mechanics”, in Proceedings of the 27th Annual IEEE Symposium on Logic in Computer Science LiCS 2012, pp. 245-254, IEEE Publisher. [66] Coecke, B., 2012, “The Logic of Quantum Mechanics – Take II”, Preprint. (quant-ph/arXiv:1204.3458) [67] Coecke, B., Heunen, C. and Kissinger, A., 2013, “Compositional Quantum Logic”, in Computation, Logic, Games, and Quantum Foundations, 21-36. [68] da Costa, N.C.A., 1997, Logique Classique et Non-Classique, Masson, Paris. [69] da Costa, N. C. A.
and French, S., 2003, Partial Truth: A Unitary Approach to Models and Scientific Reasoning, Oxford University Press, Oxford. [70] da Costa, N.C.A., Krause, D. and Bueno, O., 2006, “Paraconsistent Logics and Paraconsistency”, in Philosophy of Logic, D.M. Gabbay, P. Thagard and J. Woods (Eds.), pp. 655-781, Elsevier, Amsterdam. [71] da Costa, N.C.A. and Krause, D., 2006, “The Logic of Complementarity”, in The Age of Alternative Logics: Assessing Philosophy of Logic and Mathematics Today, J. van Benthem, G. Heinzmann, M. Rebuschi and H. Visser (Eds.), pp. 103-120, Springer, Berlin. [72] da Costa, N. C. A., Krause, D. and Bueno, O., 2007, “Paraconsistent Logics and Paraconsistency”, in Handbook of the Philosophy of Science (Philosophy of Logic), D. Jacquette (Ed.), pp. 791-911, Elsevier, Amsterdam. [73] da Costa, N.C.A. and Bueno, O., 2009, “Non-Reflexive Logics”, Revista Brasileira de Filosofia, 232, 181-196. [74] da Costa, N. and de Ronde, C., 2013, “The Paraconsistent Logic of Quantum Superpositions”, Foundations of Physics, 43, 845-858. [75] da Costa, N.C.A. and de Ronde, C., 2014, “Non-Reflexive Logical Foundation for Quantum Mechanics”, Foundations of Physics, 44, 1369-1380. [76] da Costa, N.C.A. and de Ronde, C., 2014, “The Paraconsistent Approach to Quantum Superpositions Reloaded”, Preprint. (quant-ph/arXiv:1507.02706) [77] Dalla Chiara, M.L., Giuntini, R. and Krause, D., 1998, “Quasi-set theories for microobjects: a comparison”, in Interpreting Bodies: Classical and Quantum Objects in Modern Physics, E. Castellani (Ed.), Princeton University Press, Princeton. [78] Dalla Chiara, M. and Giuntini, R., 2002, “Quantum Logics”, in Handbook of Philosophical Logic, Vol. 6, D. Gabbay and F. Guenthner (Eds.), Kluwer Academic Publishers, Dordrecht. [79] Dalla Chiara, M. and Giuntini, R., 2000, “Paraconsistent Ideas in Quantum Logic”, Synthese, 125, 55-68. [80] Dalla Chiara, M.L., Giuntini, R. and Leporini, R., 2003, “Quantum Computational Logic. A Survey”, Preprint. (quant-ph/arXiv:030529) [81] Dalla Chiara, M., Giuntini, R. and Greechie, R., 2004, Reasoning in Quantum Theory, Kluwer Academic Publishers, Dordrecht. [82] Dalla Chiara, M.L., Giuntini, R., Freytes, H., Ledda, A. and Sergioli, G., 2009, “The Algebraic Structure of an Approximately Universal System of Quantum Computational Gates”, Foundations of Physics, 39, 559-572. [83] de Ronde, C., 2011, The Contextual and Modal Character of Quantum Mechanics: A Formal and Philosophical Analysis in the Foundations of Physics, Doctoral dissertation, Utrecht University, Utrecht. [84] de Ronde, C., 2013, “Quantum Superpositions and Causality: On the Multiple Paths to the Measurement Result”, Preprint. (quant-ph/arXiv:1310.4534) [85] de Ronde, C., 2013, “Representing Quantum Superpositions: Powers, Potentia and Potential Effectuations”, Preprint. (quant-ph/arXiv:1312.7322) [86] de Ronde, C., 2015, “Modality, Potentiality and Contradiction in Quantum Mechanics”, in New Directions in Paraconsistent Logic, J.-Y. Béziau, M. Chakraborty and S. Dutta (Eds.), pp. 249-265, Springer, Berlin. [87] de Ronde, C., 2016, “Representational Realism, Closed Theories and the Quantum to Classical Limit”, in Quantum Structural Studies, R. E. Kastner, J. Jeknic-Dugic and G. Jaroszkiewicz (Eds.), World Scientific, Singapore. [88] de Ronde, C., Freytes, H. and Domenech, G., 2014, “Interpreting the Modal Kochen-Specker Theorem: Possibility and Many Worlds in Quantum Mechanics”, Studies in History and Philosophy of Modern Physics, 45, 11-18. [89] de Ronde, C., Freytes, H.
and Domenech, G., 2014, “Quantum Mechanics and the Interpretation of the Orthomodular Square of Opposition”, in New Dimensions of the Square of Opposition, J.-Y. Béziau and K. Gan-Krzywoszynska (Eds.), pp. 223-242, Philosophia Verlag, Munich. [90] DeWitt, B., 1973, “The Many-Universes Interpretation of Quantum Mechanics”, in Foundations of Quantum Mechanics, pp. 167-218, Academic Press, New York. [91] DeWitt, B. and Graham, N., 1973, The Many-Worlds Interpretation of Quantum Mechanics, Princeton University Press, Princeton. [92] Dickson, W. M., 2001, “Quantum logic is alive ∧ (it is true ∨ it is false)”, Proceedings of the Philosophy of Science Association 2001, 3, S274-S287. [93] Dickson, W. M., 1998, Quantum Chance and Nonlocality: Probability and Nonlocality in the Interpretations of Quantum Mechanics, Cambridge University Press, Cambridge. [94] Dickson, M. and Dieks, D., 2002, “Modal Interpretations of Quantum Mechanics”, The Stanford Encyclopedia of Philosophy (Winter 2002 Edition), E. N. Zalta (Ed.), URL: http://plato.stanford.edu/archives/win2002/entries/qm-modal/. [95] Dieks, D., 1988, “The Formalism of Quantum Theory: An Objective Description of Reality”, Annalen der Physik, 7, 174-190. [96] Dieks, D., 1989, “Quantum Mechanics Without the Projection Postulate and Its Realistic Interpretation”, Foundations of Physics, 19, 1397-1423. [97] Dieks, D., 2007, “Probability in the modal interpretation of quantum mechanics”, Studies in History and Philosophy of Modern Physics, 38, 292-310. [98] Dieks, D., 2010, “Quantum Mechanics, Chance and Modality”, Philosophica, 83, 117-137. [99] Dirac, P. A. M., 1974, The Principles of Quantum Mechanics, 4th Edition, Oxford University Press, London. [100] Domenech, G. and Freytes, H., 2005, “Contextual logic for quantum systems”, Journal of Mathematical Physics, 46, 012102. [101] Domenech, G. and Freytes, H., 2005, “Fuzzy propositional logic associated with quantum computational gates”, International Journal of Theoretical Physics, 45, 228-261. [102] Domenech, G., Freytes, H. and de Ronde, C., 2006, “Scopes and limits of modality in quantum mechanics”, Annalen der Physik, 15, 853-860. [103] Domenech, G., Freytes, H. and de Ronde, C., 2008, “A topological study of contextuality and modality in quantum mechanics”, International Journal of Theoretical Physics, 47, 168-174. [104] Domenech, G., Freytes, H. and de Ronde, C., 2009, “Modal-type orthomodular logic”, Mathematical Logic Quarterly, 3, 307-319. [105] Domenech, G., Freytes, H. and de Ronde, C., 2009, “Many worlds and modality in the interpretation of quantum mechanics: an algebraic approach”, Journal of Mathematical Physics, 50, 072108. [106] Domenech, G., Holik, F. and Massri, C., 2010, “A quantum logical and geometrical approach to the study of improper mixtures”, Journal of Mathematical Physics, 51, 052108. [107] Döring, A. and Isham, C. J., 2008, “A topos foundation for theories of physics: I. Formal languages for physics”, Journal of Mathematical Physics, 49, 053515. [108] Döring, A. and Isham, C. J., 2008, “A topos foundation for theories of physics: II. Daseinisation and the liberation of quantum theory”, Journal of Mathematical Physics, 49, 053516. [109] Döring, A. and Isham, C. J., 2008, “A topos foundation for theories of physics: III. The representation of physical quantities with arrows”, Journal of Mathematical Physics, 49, 053517. [110] Döring, A. and Isham, C. J., 2008, “A topos foundation for theories of physics: IV.
Categories of systems”, Journal of Mathematical Physics, 49, 053518. [111] Döring, A. and Isham, C. J., 2011, “‘What is a thing?’: Topos theory in the foundations of physics”, in New Structures in Physics, B. Coecke (Ed.), pp. 753-940, Springer, Berlin. [112] Dvurečenskij, A. and Pulmannová, S., 1994, “Difference posets, effects, and quantum measurements”, International Journal of Theoretical Physics, 33, 819-850. [113] Dvurečenskij, A. and Pulmannová, S., 1994, “Tensor products of D-posets and D-test spaces”, Reports in Mathematical Physics, 34, 251-275. [114] Dvurečenskij, A., 1995, “Tensor product of difference posets and effect algebras”, International Journal of Theoretical Physics, 34, 1337-1348. [115] Dvurečenskij, A. and Pulmannová, S., 2000, New Trends in Quantum Structures, Kluwer Academic Publishers, Dordrecht. [116] Dvurečenskij, A. and Xie, Y., 2014, “N-Perfect and Q-Perfect Pseudo Effect Algebras”, International Journal of Theoretical Physics, 53, 3380-3390. [117] Engesser, K., Gabbay, D.M. and Lehmann, D. (Eds.), 2009, Handbook of Quantum Logic and Quantum Structures, Elsevier, Amsterdam. [118] Everett, H., 1957, “‘Relative State’ Formulation of Quantum Mechanics”, Reviews of Modern Physics, 29, 454-462. [119] Everett, H., 1973, “The Theory of the Universal Wave Function”, in The Many-Worlds Interpretation of Quantum Mechanics, DeWitt and Graham (Eds.), Princeton University Press, Princeton. [120] Finkelstein, D., “Matter, space and logic”, in Boston Studies in the Philosophy of Science V, R.S. Cohen and M.W. Wartofsky (Eds.), D. Reidel, Dordrecht. [121] Finkelstein, D., “The physics of logic”, in Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain, R.G. Colodny (Ed.), University of Pittsburgh Press, Pittsburgh. [122] Foulis, D.J. and Randall, C.H., 1972, “Operational statistics, I. Basic concepts”, Journal of Mathematical Physics, 13, 1667-1675. [123] Foulis, D.J. and Randall, C.H., 1974, “Empirical logic and quantum mechanics”, Synthese, 29, 81-111. [124] Foulis, D.J. and Randall, C.H., 1978, “Manuals, morphisms and quantum mechanics”, in Mathematical Foundations of Quantum Theory, A. Marlow (Ed.), Academic Press, New York. [125] Foulis, D.J. and Randall, C.H., 1979, “Tensor products of quantum logics do not exist”, Notices of the American Mathematical Society, 26, 557. [126] Foulis, D.J. and Randall, C.H., 1981, “Empirical logic and tensor products”, in Interpretations and Foundations of Quantum Theory, H. Neumann (Ed.), B. I. Wissenschaft, Mannheim. [127] Foulis, D.J., Piron, C. and Randall, C.H., 1983, “Realism, Operationalism, and Quantum Mechanics”, Foundations of Physics, 13, 813-841. [128] Foulis, D.J. and Bennett, M.K., 1994, “Effect algebras and unsharp quantum logics”, Foundations of Physics, 24, 1331-1352. [129] Foulis, D.J., Bennett, M.K. and Greechie, R.J., 1996, “Test groups and effect algebras”, International Journal of Theoretical Physics, 35, 1117-1140. [130] Foulis, D.J., 2000, “Representations on unigroups”, in Current Research in Operational Quantum Logic: Algebras, Categories, Languages, B. Coecke, D.J. Moore and A. Wilce (Eds.), Kluwer Academic Publishers, Dordrecht. [131] Foulis, D.J., Pulmannová, S. and Vinceková, E., 2011, “Lattice pseudo-effect algebras as double residuated structures”, Soft Computing, 15, 2479-2488. [132] Foulis, D.J. and Pulmannová, S., 2013, “Dimension theory for generalized effect algebras”, Algebra Universalis, 69, 357-386. [133] Foulis, D.J.
and Pulmannová, S., 2014, “Symmetries in synaptic algebras”, Mathematica Slovaca, 64, 751-776. [134] French, S. and Krause, D., 2006, Identity in Physics: A Historical, Philosophical and Formal Analysis, Oxford University Press, Oxford. [135] Freytes, H. and Ledda, A., 2009, “Categories of semigroups in quantum computational structures”, Mathematica Slovaca, 59, 413-432. [136] Freytes, H., 2010, “Quantum computational structures: categorical equivalence for square roots QMV-algebras”, Studia Logica, 95, 63-80. [137] Freytes, H., de Ronde, C. and Domenech, G., 2012, “The Square of Opposition in Orthomodular Logic”, in Around and Beyond the Square of Opposition: Studies in Universal Logic, J.-Y. Béziau and D. Jacquette (Eds.), pp. 193-201, Springer, Basel. [138] Freytes, H. and Domenech, G., 2013, “Quantum computational logic with mixed states”, Mathematical Logic Quarterly, 59, 27-50. [139] Friedman, M. and Putnam, H., 1978, “Quantum Logic, Conditional Probability and Inference”, Dialectica, 32, 305-315. [140] Girard, J.-Y., 1987, “Linear logic”, Theoretical Computer Science, 50, 1-102. [141] Giuntini, R. and Greuling, H., 1989, “Toward a formal language for unsharp properties”, Foundations of Physics, 19, 931-945. [142] Giuntini, R., 1996, “Quantum MV-Algebras”, Studia Logica, 56, 393-417. [143] Giuntini, R., Freytes, H., Ledda, A. and Paoli, F., 2009, “A discriminator variety of Gödel algebras with operators arising from quantum computation”, Fuzzy Sets and Systems, 160, 1082-1098. [144] Gleason, A.M., 1957, “Measures on the closed subspaces of a Hilbert space”, Journal of Mathematics and Mechanics, 6, 885-893. [145] Goldblatt, R.I., 1974, “Semantic analysis of orthologic”, Journal of Philosophical Logic, 3, 19-35. [146] Goldblatt, R.I., 1984, Topoi: The Categorial Analysis of Logic, Elsevier, Amsterdam. [147] Greechie, R.J., Foulis, D.J. and Pulmannová, S., 1995, “The center of an effect algebra”, Order, 12, 91-106. [148] Gudder, S.P., 1997, “Effect test spaces”, International Journal of Theoretical Physics, 36, 2681-2705. [149] Gudder, S., 2002, “Quantum Computational Logic”, International Journal of Theoretical Physics, 42, 39-47. [150] Halmos, P., 1955, “Algebraic Logic I. Monadic Boolean algebras”, Compositio Mathematica, 12, 217-249. [151] Hájek, P., 1998, Metamathematics of Fuzzy Logic, Kluwer Academic Publishers, Dordrecht. [152] Heisenberg, W., 1958, Physics and Philosophy, Ruskin House, London. [153] Heisenberg, W., 1973, “Development of Concepts in the History of Quantum Theory”, in The Physicist’s Conception of Nature, J. Mehra (Ed.), pp. 264-275, Reidel, Dordrecht. [154] Heunen, C. and Spitters, B., 2009, “A Topos for Algebraic Quantum Theory”, Communications in Mathematical Physics, 291, 63-110. [155] Hooker, C.A. (Ed.), 1979, The Logico-Algebraic Approach to Quantum Mechanics II, D. Reidel, Dordrecht. [156] Husimi, K., 1937, “Studies on the foundations of quantum mechanics I”, Proceedings of the Physico-Mathematical Society of Japan, 9, 766-778. [157] Isham, C. J., 2011, “Topos Methods in the Foundations of Physics”, in Deep Beauty, H. Halvorson (Ed.), pp. 187-206, Cambridge University Press, Cambridge. [158] Jammer, M., 1974, Philosophy of Quantum Mechanics, Wiley, New York. [159] Jauch, J.M., 1968, Foundations of Quantum Mechanics, Addison-Wesley, Reading. [160] Jauch, J.M. and Piron, C., 1969, “On the structure of quantal proposition systems”, Helvetica Physica Acta, 42, 842-848. [161] Kalman, J.
A., 1958, “Lattices with involution”, Transactions of the American Mathematical Society, 87, 485-491. [162] Kalmbach, G., 1983, Orthomodular Lattices, Academic Press, London. [163] Kauark-Leite, P., 2004, The Transcendental Approach and the Problem of Language and Reality in Quantum Mechanics, Doctoral dissertation, Centre de Recherche en Epistémologie Appliquée – École Polytechnique, Paris. [164] Kaye, P., Laflamme, R. and Mosca, M., 2007, An Introduction to Quantum Computing, Oxford University Press, New York. [165] Kissinger, A., 2014, “Abstract Tensor Systems as Monoidal Categories”, in Categories and Types in Logic, Language, and Physics, pp. 235-252. [166] Kochen, S., 1985, “A New Interpretation of Quantum Mechanics”, in Symposium on the Foundations of Modern Physics 1985, P. Lahti and P. Mittelstaedt (Eds.), pp. 151-169, World Scientific, Singapore. [167] Kochen, S. and Specker, E., 1967, “On the problem of Hidden Variables in Quantum Mechanics”, Journal of Mathematics and Mechanics, 17, 59-87. Reprinted in Hooker, 1975, 293-328. [168] Köhler, P., 1981, “Brouwerian semilattices”, Transactions of the American Mathematical Society, 268, 103-126. [169] Kôpka, F., 1992, “D-posets of fuzzy sets”, Tatra Mountains Mathematical Publications, 1, 83-87. [170] Krause, D., 1992, “On a quasi-set theory”, Notre Dame Journal of Formal Logic, 33, 402-411. [171] Krause, D., 2002, “Why quasi-sets?”, Boletim da Sociedade Paranaense de Matemática, 20, 73-92. [172] Krause, D., 2014, “The problem of identity and a justification for a non-reflexive quantum mechanics”, Logic Journal of the IGPL, 22, 186-205. [173] Krause, D. and Arenhart, J., 2016, “A Logic of Quantum Superpositions”, in Probing the Meaning of Quantum Mechanics, D. Aerts, C. de Ronde, H. Freytes and R. Giuntini (Eds.), World Scientific, Singapore. [174] Lahti, P., 1980, “Uncertainty and Complementarity in Axiomatic Quantum Mechanics”, International Journal of Theoretical Physics, 19, 789-842. [175] Lewis, D., 1986, On the Plurality of Worlds, Blackwell Publishers, Oxford. [176] Ludwig, G., 1985, An Axiomatic Basis of Quantum Mechanics 1: Derivation of Hilbert Space, Springer-Verlag, Berlin. [177] Ludwig, G., 1987, An Axiomatic Foundation of Quantum Mechanics 2: Quantum Mechanics and Macrosystems, Springer-Verlag, Berlin. [178] Lyre, H., 2003, “C. F. von Weizsäcker’s Reconstruction of Physics: Yesterday, Today, Tomorrow”, in Time, Quantum and Information (Essays in Honor of C. F. von Weizsäcker), L. Castell and O. Ischebeck (Eds.), Springer, Berlin. [179] Mackey, G.W., 1963, The Mathematical Foundations of Quantum Mechanics, Benjamin, Amsterdam. [180] Maeda, F. and Maeda, S., 1970, Theory of Symmetric Lattices, Springer-Verlag, Berlin. [181] Manin, Yu.I., 1976, “Mathematical Problems I: Foundations”, in Mathematical Problems Arising from Hilbert Problems, F.E. Browder (Ed.), p. 36, American Mathematical Society, Providence. [182] Mateus, P. and Sernadas, A., 2004, “Reasoning About Quantum Systems”, in Logics in Artificial Intelligence, Ninth European Conference, JELIA-04, J. Alferes and J. Leite (Eds.), pp. 239-251, Springer-Verlag. [183] Mateus, P. and Sernadas, A., 2006, “Weakly complete axiomatization of exogenous quantum propositional logic”, Information and Computation, 204, 771-794. [184] Matoušek, M. and Pták, P., 2014, “Orthomodular posets related to Z-valued states”, International Journal of Theoretical Physics, 53, 3323-3332. [185] Mittelstaedt, P., 1978, Quantum Logic, Reidel, Dordrecht.
[186] Mittelstaedt, P., 1979, “The modal logic of quantum logic”, Journal of Philosophical Logic, 8, 479-504. [187] Mittelstaedt, P., 1981, “The concepts of truth, possibility and probability in the language of quantum physics”, in Interpretations and Foundations of Quantum Theory, H. Neumann (Ed.), pp. 70-94, Bibliographisches Institut, Mannheim. [188] Mittelstaedt, P., 1981, “The dialogic approach to modalities in the language of quantum physics”, in Current Issues in Quantum Logic, E. Beltrametti and B.C. van Fraassen (Eds.), pp. 259-281, Plenum Publishing Co., New York. [189] Mittelstaedt, P., 1985, “Constituting, Naming and Identity in Quantum Logic”, in Recent Developments in Quantum Logic, P. Mittelstaedt and E.-W. Stachow (Eds.), pp. 215-234, BI-Wissenschaftsverlag, Mannheim. [190] Mittelstaedt, P., 1986, “Naming and Identity in Quantum Logic”, in Foundations of Physics, P. Weingartner and G. Dorn (Eds.), pp. 139-161, Vienna. [191] Nielsen, M.A. and Chuang, I.L., 2010, Quantum Computation and Quantum Information: 10th Anniversary Edition, Cambridge University Press, Cambridge. [192] Paoli, F., Ledda, A., Giuntini, R. and Freytes, H., 2009, “On some properties of quasi-MV algebras and sqrt quasi-MV algebras”, Reports on Mathematical Logic, 44, 31-63. [193] Paoli, F., Ledda, A., Spinks, M., Freytes, H. and Giuntini, R., 2011, “Logics from sqrt MV-algebras”, International Journal of Theoretical Physics, 50, 3882-3902. [194] Piron, C., 1964, “Axiomatique Quantique”, Helvetica Physica Acta, 37, 439-468. [195] Piron, C., 1976, Foundations of Quantum Mechanics, W.A. Benjamin, Inc., Reading. [196] Piron, C., 1983, “Le réalisme en physique quantique: une approche selon Aristote”, in The Concept of Physical Reality: Proceedings of a Conference Organized by the Interdisciplinary Research Group, University of Athens, Athens. [197] Piron, C., 1989, “Recent Developments in Quantum Mechanics”, Helvetica Physica Acta, 62, 82-90. [198] Pták, P. and Pulmannová, S., 1991, Orthomodular Structures as Quantum Logics, Kluwer Academic Publishers, Dordrecht. [199] Pulmannová, S., 1985, “Tensor Product of Quantum Logics”, Journal of Mathematical Physics, 26, 1-5. [200] Pulmannová, S. and Wilce, A., 1995, “Representations of D-posets”, International Journal of Theoretical Physics, 34, 1689-1696. [201] Pulmannová, S. and Vinceková, E., 2007, “Remarks on the order for quantum observables”, Mathematica Slovaca, 57, 589-600. [202] Putnam, H., 1968, “Is Logic Empirical?”, Boston Studies in the Philosophy of Science, 5, 199-215. [203] Putnam, H., 1974, “How To Think Quantum Logically”, Synthese, 29, 55-61. [204] Randall, C.H. and Foulis, D.J., 1970, “An approach to empirical logic”, American Mathematical Monthly, 77, 363-374. [205] Randall, C.H. and Foulis, D.J., 1973, “Operational statistics II. Manuals of operations and their logics”, Journal of Mathematical Physics, 14, 1472-1480. [206] Randall, C.H. and Foulis, D.J., 1983, “A mathematical language for quantum physics”, in Les fondements de la mécanique quantique, C. Gruber, C. Piron, T.M. Tâm and R. Weil (Eds.), AVCP, Lausanne. [207] Randall, C.H. and Foulis, D.J., 1983, “Properties and operational propositions in quantum mechanics”, Foundations of Physics, 13, 843-857. [208] Reichenbach, H., 1975, “Three-valued logic and the interpretation of quantum mechanics”, in The Logico-Algebraic Approach to Quantum Mechanics, Vol. I, C.A. Hooker (Ed.), Reidel, Dordrecht.
[209] Rédei, M., 1989, “Quantum conditional probabilities are not probabilities of quantum conditionals”, Physics Letters A, 139, 287-290. [210] Rédei, M., 1998, Quantum Logic in Algebraic Approach, Kluwer Academic Publishers, Dordrecht. [211] Rédei, M., 1999, “‘Unsolved Problems of Mathematics’: J. von Neumann’s address to the International Congress of Mathematicians, Amsterdam, September 2-9, 1954”, The Mathematical Intelligencer, 21, 7-12. [212] Rédei, M., 2001, “Von Neumann’s concept of quantum logic and quantum probability”, in John von Neumann and the Foundations of Quantum Physics, M. Rédei and M. Stöltzner (Eds.), pp. 153-172, Kluwer Academic Publishers, Dordrecht. [213] Rédei, M. and Summers, S.J., 2007, “Quantum probability theory”, Studies in the History and Philosophy of Modern Physics, 38, 390-417. [214] Sakurai, J. J. and Napolitano, J., 2010, Modern Quantum Mechanics, Addison-Wesley, London. [215] Smets, S., 2000, “In Defense of Operational Quantum Logic”, Logic and Logical Philosophy. [216] Smets, S., 2001, The Logic of Physical Properties in Static and Dynamic Perspective, Doctoral dissertation, Brussels Free University, Brussels. [217] Smets, S., 2005, “The Modes of Physical Properties in the Logical Foundations of Physics”, Logic and Logical Philosophy, 14, 37-53. [218] Smets, S., 2010, “Logic and quantum physics”, Journal of the Indian Council of Philosophical Research, Special Issue XXVIII, No. 2. [219] Solèr, M.P., 1995, “Characterization of Hilbert spaces by orthomodular spaces”, Communications in Algebra, 23, 219-243. [220] Svozil, K., 1998, Quantum Logic, Springer, Singapore. [221] Takeuti, G., 1978, Two Applications of Logic to Mathematics, Iwanami and Princeton University Press, Tokyo and Princeton. [222] Takeuti, G., 1981, “Quantum Set Theory”, in Current Issues in Quantum Logic, E. Beltrametti and B.C. van Fraassen (Eds.), pp. 303-322, Plenum, New York. [223] Titani, S., 1999, “Lattice Valued Set Theory”, Archive for Mathematical Logic, 38, 395-421. [224] Van Fraassen, B.C., 1972, “A formal approach to the philosophy of science”, in Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain, R. Colodny (Ed.), pp. 303-366, University of Pittsburgh Press, Pittsburgh. [225] Van Fraassen, B.C., 1973, “Semantic Analysis of Quantum Logic”, in Contemporary Research in the Foundations and Philosophy of Quantum Theory, C. A. Hooker (Ed.), pp. 80-113, Reidel, Dordrecht. [226] Van Fraassen, B.C., 1974, “The Einstein-Podolsky-Rosen paradox”, Synthese, 29, 291-309. [227] Van Fraassen, B.C., 1981, “A modal Interpretation of Quantum Mechanics”, in Current Issues in Quantum Logic, E. G. Beltrametti and B. C. van Fraassen (Eds.), pp. 229-258, Plenum, New York. [228] Van Fraassen, B.C., 1991, Quantum Mechanics: An Empiricist View, Clarendon, Oxford. [229] Varadarajan, V.S., 1962, “Probability in Physics and a Theorem on Simultaneous Observability”, Communications on Pure and Applied Mathematics, XV, 189-217. [230] Varadarajan, V.S., 1985, Geometry of Quantum Theory, Springer, Berlin. [231] Verelst, K. and Coecke, B., 1999, “Early Greek Thought and Perspectives for the Interpretation of Quantum Mechanics: Preliminaries to an Ontological Approach”, in The Blue Book of Einstein Meets Magritte, G. C. Cornelis, S. Smets and J.-P. van Bendegem (Eds.), pp. 163-196, Kluwer Academic Publishers, Dordrecht. [232] Vermaas, P.E., 1999, A Philosopher's Understanding of Quantum Mechanics, Cambridge University Press, Cambridge. [233] Vermaas, P.E.
and Dieks, D., 1995, “The Modal Interpretation of Quantum Mechanics and Its Generalization to Density Operators”, Foundations of Physics, 25, 145-158. [234] Von Neumann, J., 1996, Mathematical Foundations of Quantum Mechanics (12th edition), Princeton University Press, Princeton. [235] Von Neumann, J., 1961, Collected Works, Vol. III: Rings of Operators, A. H. Taub (Ed.), Pergamon Press. [236] Von Weizsäcker, C. F., 1985, “Heisenberg’s Philosophy”, in Symposium on the Foundations of Modern Physics 1985, P. Lahti and P. Mittelstaedt (Eds.), pp. 277-293, World Scientific, Singapore. [237] Wilce, A., 1995, “Partial Abelian Semigroups”, International Journal of Theoretical Physics, 34, 1807-1812. [238] Wilce, A., 1998, “Perspectivity and Congruence in Partial Abelian Semigroups”, Mathematica Slovaca, 48, 117-135. [239] Zafiris, E. and Karakostas, V., 2013, “A categorial semantics representation of quantum events”, Foundations of Physics, 43, 1090-1123.

Author Information
C. de Ronde, University of Buenos Aires. Email: cderonde@gmail.com
G. Domenech, Vrije Universiteit Brussel. Email: gradomenech@gmail.com
H. Freytes, Cagliari University and University of Rosario. Email: hfreytes@gmail.com
CHAPTER 6 Quantum Mechanics II (presentation transcript)

1 CHAPTER 6 Quantum Mechanics II 6.1 The Schrödinger Wave Equation 6.2 Expectation Values 6.3 Infinite Square-Well Potential 6.4 Finite Square-Well Potential 6.5 Three-Dimensional Infinite-Potential Well 6.6 Simple Harmonic Oscillator 6.7 Barriers and Tunneling. I think it is safe to say that no one understands quantum mechanics. Do not keep saying to yourself, if you can possibly avoid it, “But how can it be like that?” because you will get “down the drain” into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that. - Richard Feynman

2 6.1: The Schrödinger Wave Equation The Schrödinger wave equation in its time-dependent form for a particle of energy E moving in a potential V in one dimension is iħ ∂Ψ/∂t = −(ħ²/2m) ∂²Ψ/∂x² + VΨ. The extension into three dimensions replaces ∂²/∂x² by the Laplacian ∇². Here i = √−1 is an imaginary number.

3 General Solution of the Schrödinger Wave Equation The general form of the wave function is Ψ(x, t) = Ae^{i(kx−ωt)} = A[cos(kx − ωt) + i sin(kx − ωt)], which also describes a wave moving in the x direction. In general the amplitude A may also be complex. The wave function is also not restricted to being real. Notice that the sine term has an imaginary number. Only the physically measurable quantities must be real. These include the probability, momentum and energy.

4 Normalization and Probability The probability P(x) dx of a particle being between x and x + dx is given by P(x) dx = Ψ*(x, t)Ψ(x, t) dx. The probability of the particle being between x1 and x2 is given by the integral of P(x) dx between those limits. The wave function must also be normalized so that the probability of the particle being somewhere on the x axis is 1.

5 Properties of Valid Wave Functions Boundary conditions: In order to avoid infinite probabilities, the wave function must be finite everywhere. In order to avoid multiple values of the probability, the wave function must be single valued. For finite potentials, the wave function and its derivative must be continuous. This is required because the second-order derivative term in the wave equation must be single valued. (There are exceptions to this rule when V is infinite.) In order to normalize the wave functions, they must approach zero as x approaches infinity. Solutions that do not satisfy these properties do not generally correspond to physically realizable circumstances.

6 Time-Independent Schrödinger Wave Equation The potential in many cases will not depend explicitly on time. The dependence on time and position can then be separated in the Schrödinger wave equation. Let Ψ(x, t) = ψ(x) f(t), substitute, and divide by the wave function. The left side of Equation (6.10) depends only on time, and the right side depends only on spatial coordinates. Hence each side must be equal to a constant. The time-dependent side is iħ (1/f) df/dt.

7 Time-Independent Schrödinger Wave Equation Continued We integrate both sides and find ln f = −iωt + C, where C is an integration constant that we may choose to be 0. This determines f to be f(t) = e^{−iωt}. The spatial side yields −(ħ²/2m) d²ψ/dx² + Vψ = Eψ. This is known as the time-independent Schrödinger wave equation, and it is a fundamental equation in quantum mechanics.

8 Stationary State The wave function can be written as Ψ(x, t) = ψ(x) e^{−iωt}. The probability density becomes Ψ*Ψ = ψ*ψ |e^{−iωt}|² = |ψ(x)|². The probability distributions are constant in time. This is a standing wave phenomenon that is called the stationary state.

9 Momentum Operator To find the expectation value of p, we first need to represent p in terms of x and t.
Consider the derivative of the wave function of a free particle with respect to x: ∂Ψ/∂x = ikΨ. With k = p/ħ we have ∂Ψ/∂x = i(p/ħ)Ψ. This yields p[Ψ] = −iħ ∂Ψ/∂x, which suggests we define the momentum operator as p̂ = −iħ ∂/∂x. The expectation value of the momentum is <p> = −iħ ∫ Ψ* (∂Ψ/∂x) dx.

10 Position and Energy Operators The position x is its own operator, as seen above. The time derivative of the free-particle wave function is ∂Ψ/∂t = −iωΨ. Substituting ω = E/ħ yields E[Ψ] = iħ ∂Ψ/∂t. The energy operator is Ê = iħ ∂/∂t. The expectation value of the energy is <E> = iħ ∫ Ψ* (∂Ψ/∂t) dx.

11 Comparison of Classical and Quantum Mechanics Newton’s second law and Schrödinger’s wave equation are both differential equations. Newton’s second law can be derived from the Schrödinger wave equation, so the latter is the more fundamental. Classical mechanics only appears to be more precise because it deals with macroscopic phenomena. The underlying uncertainties in macroscopic measurements are just too small to be significant.

12 6.2: Expectation Values The expectation value is the expected result of the average of many measurements of a given quantity. The expectation value of x is denoted by <x>. Any measurable quantity for which we can calculate the expectation value is called a physical observable. The expectation values of physical observables (for example, position, linear momentum, angular momentum, and energy) must be real, because the experimental results of measurements are real. The average value of x is the weighted sum of the measured values, x̄ = (Σᵢ Nᵢxᵢ)/(Σᵢ Nᵢ).

13 Continuous Expectation Values We can change from discrete to continuous variables by using the probability P(x, t) of observing the particle at a particular x. Using the wave function, the expectation value is <x> = ∫ Ψ*(x, t) x Ψ(x, t) dx. The expectation value of any function g(x) for a normalized wave function is <g(x)> = ∫ Ψ* g(x) Ψ dx.

14 Some expectation values are sharp, some others fuzzy Since there is scatter in the actual positions (x), the calculated expectation value will have an uncertainty, fuzziness. (Note that x is its own operator.)

15 Some expectation values are sharp, some others fuzzy, continued I x may as well stand for any kind of operator Q. If not fuzzy, ΔQ = 0, because <Q²> = <Q>². For any observable, fuzzy or not, (ΔQ)² = <Q²> − <Q>².

16 Some expectation values are sharp, some others fuzzy, continued II Eigenvalues of operators are always sharp (an actual physical measurement may give some variation in the result, but the calculation gives zero fuzziness). Say Q is the Hamiltonian operator, so that Qψ = Eψ. A wavefunction that solves this equation is an eigenfunction of this operator and E is the corresponding eigenvalue; apply this operator twice and you get E², which sure is the same as squaring the result of applying it once (E).

17 6.3: Infinite Square-Well Potential The simplest such system is that of a particle trapped in a box with infinitely hard walls that the particle cannot penetrate. This potential is called an infinite square well and is given by V(x) = ∞ for x ≤ 0 or x ≥ L, and V(x) = 0 for 0 < x < L. Clearly the wave function must be zero where the potential is infinite. Where the potential is zero inside the box, the Schrödinger wave equation becomes d²ψ/dx² = −k²ψ, where k = √(2mE)/ħ. The general solution is ψ(x) = A sin kx + B cos kx.

18 Quantization Boundary conditions of the potential dictate that the wave function must be zero at x = 0 and x = L. This yields valid solutions for integer values of n such that kL = nπ. The wave function is now ψₙ(x) = A sin(nπx/L). We normalize the wave function, ∫ ψₙ*ψₙ dx = 1, and the normalized wave function becomes ψₙ(x) = √(2/L) sin(nπx/L). These functions are identical to those obtained for a vibrating string with fixed ends.

19 Quantized Energy The quantized wave number now becomes kₙ = nπ/L. Solving for the energy yields Eₙ = n²π²ħ²/(2mL²), as evaluated numerically below. Note that the energy depends on the integer values of n. Hence the energy is quantized and nonzero.
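As a quick numerical check on this result (not part of the original slides), the following minimal Python sketch evaluates Eₙ = n²π²ħ²/(2mL²) for an electron; the 1 nm well width is an arbitrary illustrative choice.

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def infinite_well_energy(n, L, m=M_E):
    # E_n = n^2 * pi^2 * hbar^2 / (2 m L^2), n = 1, 2, 3, ...
    return n**2 * math.pi**2 * HBAR**2 / (2 * m * L**2)

for n in (1, 2, 3):
    print(n, infinite_well_energy(n, 1e-9) / EV, "eV")  # ~0.38, 1.5, 3.4 eV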
The special case of n = 1 is called the ground state energy.

20 6.4: Finite Square-Well Potential The finite square-well potential is V(x) = V₀ for x ≤ 0 and x ≥ L, and V(x) = 0 for 0 < x < L. The Schrödinger equation outside the finite well in regions I and III is, for E < V₀, d²ψ/dx² = α²ψ, using α² = 2m(V₀ − E)/ħ². Considering that the wave function must be zero at infinity, the solutions for this equation are ψ_I(x) = Ae^{αx} and ψ_III(x) = Be^{−αx}.

21 Finite Square-Well Solution Inside the square well, where the potential V is zero, the wave equation becomes d²ψ/dx² = −k²ψ, where k = √(2mE)/ħ; the solution can be written in sinusoidal (or equivalent complex-exponential) form. The boundary conditions require that ψ and dψ/dx be continuous, so the wave function must be smooth where the regions meet. Note that the wave function is nonzero outside of the box.

22 Penetration Depth The penetration depth is the distance outside the potential well where the probability significantly decreases. It is given by δx = 1/α = ħ/√(2m(V₀ − E)). It should not be surprising to find that the penetration distance that violates classical physics is proportional to Planck’s constant.

23 6.5: Three-Dimensional Infinite-Potential Well The wave function must be a function of all three spatial coordinates. We begin with the conservation of energy, E = K + V = p²/(2m) + V. Multiply this by the wave function to get an equation for Ψ. Now consider momentum as an operator acting on the wave function. In this case, the operator must act twice on each dimension. Given p̂ₓ = −iħ ∂/∂x (and likewise for y and z), the three dimensional Schrödinger wave equation is −(ħ²/2m)(∂²Ψ/∂x² + ∂²Ψ/∂y² + ∂²Ψ/∂z²) + VΨ = EΨ.

24 Degeneracy Analysis of the Schrödinger wave equation in three dimensions introduces three quantum numbers that quantize the energy. A quantum state is degenerate when there is more than one wave function for a given energy. Degeneracy results from particular properties of the potential energy function that describes the system. A perturbation of the potential energy can remove the degeneracy.

25 6.6: Simple Harmonic Oscillator Simple harmonic oscillators describe many physical situations: springs, diatomic molecules and atomic lattices. Consider the Taylor expansion of a potential function: V(x) = V₀ + V₁(x − x₀) + ½V₂(x − x₀)² + ⋯. Redefining the minimum potential and the zero potential, we have V(x) = ½κx². Substituting this into the wave equation and introducing the standard abbreviations yields the dimensionless form of the oscillator equation.

26 Parabolic Potential Well If the lowest energy level is zero, this violates the uncertainty principle. The wave function solutions are ψₙ(x) = Hₙ(x)e^{−αx²/2}, where Hₙ(x) are Hermite polynomials of order n. In contrast to the particle in a box, where the oscillatory wave function is a sinusoidal curve, in this case the oscillatory behavior is due to the polynomial, which dominates at small x. The exponential tail is provided by the Gaussian function, which dominates at large x.

27 Analysis of the Parabolic Potential Well The energy levels are given by Eₙ = (n + ½)ħω. The zero-point energy E₀ = ħω/2 is called the Heisenberg limit. Classically, the probability of finding the mass is greatest at the ends of motion and smallest at the center (that is, proportional to the amount of time the mass spends at each position). Contrary to the classical case, the largest probability for this lowest energy state is for the particle to be at the center.

28 6.7: Barriers and Tunneling Consider a particle of energy E approaching a potential barrier of height V₀, where the potential everywhere else is zero. We will first consider the case when the energy is greater than the potential barrier. In regions I and III the wave numbers are k_I = k_III = √(2mE)/ħ. In the barrier region we have k_II = √(2m(E − V₀))/ħ.

29 Reflection and Transmission The wave function will consist of an incident wave, a reflected wave, and a transmitted wave.
The potentials and the Schrödinger wave equation for the three regions give plane-wave solutions in each region. As the wave moves from left to right, we can simplify the wave functions by dropping the term describing a wave approaching from the right in region III (there is no incoming wave from that side).

30 Probability of Reflection and Transmission The probability of the particles being reflected R or transmitted T is given by the ratios of the squared amplitudes of the reflected and transmitted waves to that of the incident wave. Because the particles must be either reflected or transmitted, we have R + T = 1. By applying the boundary conditions x → ±∞, x = 0, and x = L, we arrive at the transmission probability. Notice that there is a situation in which the transmission probability is 1.

31 Tunneling Now we consider the situation where classically the particle does not have enough energy to surmount the potential barrier, E < V₀. The quantum mechanical result, however, is one of the most remarkable features of modern physics, and there is ample experimental proof of its existence. There is a small, but finite, probability that the particle can penetrate the barrier and even emerge on the other side. The wave function in region II becomes a decaying exponential, with κ = √(2m(V₀ − E))/ħ. The transmission probability that describes the phenomenon of tunneling is T = [1 + V₀² sinh²(κL)/(4E(V₀ − E))]⁻¹.

32 Uncertainty Explanation Consider when κL >> 1; then the transmission probability becomes T ≈ 16(E/V₀)(1 − E/V₀)e^{−2κL}. The energy violation allowed by the uncertainty principle is equal to the negative kinetic energy required! The particle is allowed by quantum mechanics and the uncertainty principle to penetrate into a classically forbidden region. The minimum such kinetic energy follows from the uncertainty principle.

33 Analogy with Wave Optics If light passing through a glass prism reflects from an internal surface with an angle greater than the critical angle, total internal reflection occurs. However, the electromagnetic field is not exactly zero just outside the prism. If we bring another prism very close to the first one, experiments show that the electromagnetic wave (light) appears in the second prism. The situation is analogous to the tunneling described here. This effect was observed by Newton and can be demonstrated with two prisms and a laser. The intensity of the second light beam decreases exponentially as the distance between the two prisms increases.

34 Potential Well Consider a particle passing through a potential well region rather than through a potential barrier. Classically, the particle would speed up passing the well region, because K = mv²/2 = E + V₀. According to quantum mechanics, reflection and transmission may occur, but the wavelength inside the potential well is smaller than outside. When the width of the potential well is precisely equal to half-integral or integral units of the wavelength, the reflected waves may be out of phase or in phase with the original wave, and cancellations or resonances may occur. The reflection/cancellation effects can lead to almost pure transmission or pure reflection for certain wavelengths. For example, at the second boundary (x = L) for a wave passing to the right, the wave may reflect and be out of phase with the incident wave. The effect would be a cancellation inside the well.

35 Alpha-Particle Decay The phenomenon of tunneling explains the alpha-particle decay of heavy, radioactive nuclei. Inside the nucleus, an alpha particle feels the strong, short-range attractive nuclear force as well as the repulsive Coulomb force. The nuclear force dominates inside the nuclear radius where the potential is approximately a square well.
The Coulomb force dominates outside the nuclear radius. The potential barrier at the nuclear radius is several times greater than the energy of an alpha particle. According to quantum mechanics, however, the alpha particle can “tunnel” through the barrier. Hence this is observed as radioactive decay.
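As a numerical companion to the tunneling discussion above (a minimal sketch, not part of the original presentation), the transmission probability T = [1 + V₀² sinh²(κL)/(4E(V₀ − E))]⁻¹ can be evaluated directly; the 1 eV electron, 2 eV barrier and 1 nm width are arbitrary illustrative choices.

import math

HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # J per eV

def transmission(E_eV, V0_eV, L, m=M_E):
    # Transmission probability through a rectangular barrier for E < V0.
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2 * m * (V0 - E)) / HBAR
    return 1.0 / (1.0 + V0**2 * math.sinh(kappa * L)**2 / (4 * E * (V0 - E)))

print(transmission(1.0, 2.0, 1e-9))  # ~1e-4: small but nonzero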
Hyperfine-structure-induced depolarization of impulsively aligned I₂ molecules

Esben F. Thomas, Department of Chemistry, Technical University of Denmark, Building 206, DK-2800 Kongens Lyngby, Denmark
Anders A. Søndergaard, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C, Denmark
Benjamin Shepperson, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C, Denmark
Niels E. Henriksen, Department of Chemistry, Technical University of Denmark, Building 206, DK-2800 Kongens Lyngby, Denmark
Henrik Stapelfeldt, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C, Denmark

A moderately intense fs laser pulse is used to create rotational wave packets in gas phase I₂ molecules. The ensuing time-dependent alignment, measured by Coulomb explosion imaging with a delayed probe pulse, exhibits the characteristic revival structures expected for rotational wave packets, but also a complex non-periodic substructure and a decreasing mean alignment not observed before. A quantum mechanical model attributes the phenomena to coupling between the rotational angular momenta and the nuclear spins through the electric quadrupole interaction. The calculated alignment trace agrees very well with the experimental results.

Alignment of isolated molecules, i.e. confinement of their internal axes to directions fixed in space, by moderately intense laser pulses is considered a well-understood process resulting from the polarizability interaction (Stapelfeldt and Seideman 2003; Seideman and Hamilton 2005; Ohshima and Hasegawa 2010; Fleischer et al. 2012). In the impulsive limit, where a laser pulse much shorter than the molecular rotational period is used, each molecule is left in a superposition of rotational eigenstates. For the widely studied case of linear molecules and a linearly polarized fs alignment pulse, this wave packet formation causes the molecules to align shortly after the laser pulse and in periodically occurring narrow time windows, termed revivals (Seideman 1999; Rosca-Pruna and Vrakking 2001; Machholm and Henriksen 2001; Renard et al. 2003; Dooley et al. 2003). In the rigid rotor approximation the revival pattern repeats itself (Przystawik et al. 2012) unless the rotational coherence is distorted by e.g. a dissipative environment (Ramakrishna and Seideman 2005; Vieillard et al. 2008; Owschimikow et al. 2010; Hartmann and Boulet 2012; Pentlehner et al. 2013; Shepperson et al. 2017a). Decades of frequency-resolved high-resolution spectroscopy (Gordy and Cook 1984; Zare 1988) and time-dependent depolarization experiments on molecules prepared in single rotational states (see Refs. Code and Ramsey 1971; Fano and Macek 1973; Altkorn et al. 1985; Yan and Kummel 1993; Gough and Crowe 1993; Cool and Hemmi 1995; Zhang et al. 1993; Wouters et al. 1997; Rudert et al. 1999; Sofikitis et al. 2007; Bartlett et al. 2009, 2010; Grygoryeva et al. 2017 for previous examples) have, however, established that a rigid rotor model is insufficient and that a precise description of rotational spectra must include the coupling between rotational angular momentum and electronic or nuclear spin.
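(As an aside, the timescale of the revivals discussed below can be estimated from the rigid-rotor revival time T_rev = 1/(2B), with B the rotational constant in Hz. A minimal sketch follows; the value of B used here is an assumed literature value for I₂, not a number quoted in this paper.)

B_HZ = 1.12e9  # assumed literature rotational constant of I2, in Hz
T_REV = 1.0 / (2.0 * B_HZ)  # full revival period, seconds
print(f"full revival ~ {T_REV * 1e12:.0f} ps, half revival ~ {T_REV * 5e11:.0f} ps")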
It is, therefore, surprising that the influence of such effects, notably the hyperfine coupling between the electric quadrupole moment of the nuclei and the electric field of the electrons, has never been addressed in fs-laser-induced molecular alignment studies. In the current work we measured the time-dependent degree of alignment, induced by a fs pulse, for a sample of I₂ molecules covering the first seven rotational revivals. By contrast to the aforementioned depolarization studies, which do not involve coherent superpositions of rotational states, our experiment probes the impact of hyperfine coupling on the revival structures. Using a quantum mechanical model in conjunction with the experimental results, we find that the hyperfine coupling affects the revival structures in qualitatively different ways compared to the well-understood impact on the “permanent” alignment of a molecule prepared in a single rotational state. Notably, the effect on the permanent alignment is known to be negligible in the limit where the rotational angular momentum is much larger than the angular momentum of the total nuclear spin (Bartlett et al., 2009). By contrast, we find that the hyperfine coupling will always significantly perturb the revival structures over time. The experimental setup and methods were described previously (Shepperson et al., 2017b), so only a few details are pointed out here. A pulsed molecular beam, formed by expanding iodine vapor seeded in He carrier gas into vacuum, enters a velocity map imaging (VMI) spectrometer where it is crossed by two pulsed, collinear laser beams. The first pulse (the kick pulse) creates rotational wave packets in the molecules. The second pulse (the probe pulse) Coulomb explodes the molecules. This leads to ion fragments with recoil directions given by the angular distribution of the molecular axes at the instant of the probe pulse. The emission directions of the ions are recorded with a 2D imaging detector at different kick-probe delays, which allows us to determine the time-dependent degree of alignment, ⟨cos²θ₂D⟩, θ₂D being the angle between the alignment pulse polarization and the projection of an ion velocity vector on the detector (Søndergaard et al., 2017). The time dependence of ⟨cos²θ₂D⟩ determined experimentally is shown in black in Fig. 1. The alignment trace is dominated by the pronounced half and full revivals, but their amplitude decreases with the revival order and their structure is changing. These observations are not caused by experimental factors such as collisions or limited temporal detection windows (see the Supplemental Material, which includes Refs. Linstrom and Eds. 2017; Bacis et al. 1980; Lubman et al. 1982; Maroulis 1992; Maroulis et al. 1997; Even et al. 2000; Filsinger et al. 2009; Shu et al. 2017). For comparison, ⟨cos²θ₂D⟩ calculated by solving the time-dependent Schrödinger equation (TDSE) for a rigid rotor is also shown. It is clear that the decreasing amplitude and changing structure of the revivals observed experimentally is at odds with the calculations. In particular, significant experimental deviations from the calculated results are evident in the higher order fractional revivals; for instance, at the 6 + 1/2 revival the experimental and calculated peaks point in opposite directions. Figure 1: Experimental results (black) and the model calculations (red), calculated at the estimated initial rotational temperature. An alignment trace calculated without quadrupole coupling is also shown (dotted black).
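For illustration, ⟨cos²θ₂D⟩ can be computed from detected 2D ion velocity projections as sketched below; the function and variable names are illustrative, not taken from the authors' analysis code.

import numpy as np

def cos2_theta_2d(vx, vy):
    # theta_2D is the angle between an ion velocity projection (vx, vy)
    # and the alignment pulse polarization (taken along x).
    vx, vy = np.asarray(vx, float), np.asarray(vy, float)
    return np.mean(vx**2 / (vx**2 + vy**2))

# Sanity check: an isotropic 2D distribution gives 0.5.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 100000)
print(cos2_theta_2d(np.cos(phi), np.sin(phi)))  # ~0.5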
We now show that the numerical results match the experimental findings to a high degree of accuracy when hyperfine coupling is included in the theoretical model. For I₂ molecules this mainly stems from the coupling between the electric quadrupole moment of the atomic nuclei and the gradient of the electric field created by the electrons. The coupling between the magnetic dipole moment of the nuclei and the B-field from the electrons is much weaker and not included in our model (Yokozeki and Muenter, 1980). The total nuclear spin, I, is the sum of the spins of the two atomic nuclei: I = I₁ + I₂. Since the nuclear spin of ¹²⁷I is 5/2, there are nuclear spin isomers with I = 0, 1, ..., 5. We assume that the nuclear spin isomers are initially equally abundant (McQuarrie, 1976). The rotational wave packet created by the alignment pulse from an initial rotational eigenstate is a superposition of |J M⟩ states (M is not changed due to the linear polarization of the alignment pulse). The symmetry requirements of the total molecular wave function entail that the parity of the I and J states must be the same in a given molecule (McQuarrie, 1976). Consequently, there are para/ortho (even/odd I and J) spin isomers. The wave packet coupled to a given nuclear spin isomer is expanded on the total angular momentum states |F M_F⟩ (Equation 1), where F is the total angular momentum, F = I + J, M_F = M + m_I, and the expansion coefficients are the Clebsch-Gordan coefficients. In preparation for solving the TDSE, we construct a square Hamiltonian matrix in this coupled basis, with elements given by Cook and De Lucia (1971) (Equation 2). The first term describes the rigid rotor Hamiltonian, where B (Linstrom and Eds., 2017) is the molecular rotational constant (in GHz) of I₂ in the vibrational ground state (centrifugal distortion was found to be negligible; see the Supplemental Material); the second term is the electric quadrupole interaction component of the hyperfine structure Hamiltonian for a diatomic molecule, with the quadrupole coupling constant (in GHz) taken from Yokozeki and Muenter (1980). The quadrupole term introduces shifts in the diagonal elements of the matrix, and off-diagonal couplings between states of different J and/or I. Generally, the basis must incorporate all initial states occupied at t = 0 (i.e. those given by the right side of Equation 1), as well as states that may become occupied over time as a result of the off-diagonal couplings. Specifically, it was found that any states that can be reached via inter-I coupling must be included to faithfully reproduce the experimental alignment trace; however, inter-J couplings were found to have a negligible impact (attributed to the relatively large energy differences between the various J states). As such, the states incorporated in the basis need only contain J-values that were already present in the initial wave packet. The TDSE is solved by expanding the full wave function onto the coupled basis functions (Equation 3), with the initial state given by Equation 1. The expansion coefficients are found by diagonalizing the Hamiltonian matrix, which solves the resulting system of coupled linear differential equations. The wave function is then transformed back into the uncoupled representation to calculate the alignment trace. Efficient calculation of ⟨cos²θ₂D⟩ is achieved by expanding it onto a basis of Legendre polynomials, as described in Søndergaard et al. (2017), and noting the orthonormality of the nuclear spin states in Equation 3. The alignment trace of any initial superposition is the equally weighted, incoherent sum of the traces generated by coupling to all nuclear spin isomers that the symmetry requirements permit. The complete alignment trace is the weighted incoherent sum of traces from all the different initial superpositions that exist because of thermal and focal volume averaging, as per the methodology outlined in Bisgaard (2006) and Søndergaard (2016).
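The thermal part of this averaging can be sketched as follows (a minimal sketch: trace_for_initial_state is a hypothetical stand-in for the full coupled TDSE propagation described above, and nuclear spin statistics and focal volume averaging are omitted for brevity).

import numpy as np

K_B = 1.380649e-23  # J/K
H = 6.62607015e-34  # J*s

def boltzmann_weights(B_hz, T, J_max):
    # Weights of rigid-rotor levels E_J = h*B*J*(J+1), with (2J+1) M-degeneracy.
    J = np.arange(J_max + 1)
    w = (2 * J + 1) * np.exp(-H * B_hz * J * (J + 1) / (K_B * T))
    return w / w.sum()

def thermal_trace(B_hz, T, J_max, times, trace_for_initial_state):
    # Incoherent, Boltzmann-weighted sum of single-state alignment traces.
    total = np.zeros_like(times, dtype=float)
    for J, w in enumerate(boltzmann_weights(B_hz, T, J_max)):
        total += w * trace_for_initial_state(J, times)
    return total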
The simulated alignment trace with the effects of quadrupole coupling included is shown in red in Fig. 1. Based on previous work in our group, we estimate that the molecules are initially in thermal (Boltzmann) equilibrium at the temperature reported in Shepperson et al. (2017b). The minor discrepancy between the theoretical and experimental traces at early delays can essentially be eliminated by fitting the temperature sup ; however, this results in a fitted temperature that we believe is unrealistically low.

Consider the sum of matrix elements of cos²θ between rotational states that generates the theoretical alignment trace, where we omit the nuclear spin states and time-dependent coefficients in Equation 3 for clarity. The terms connecting different rotational states are sinusoidal functions oscillating at frequencies proportional to the energy difference between those states. These terms represent the coherence of the wave packet and are responsible for the revivals. In analogy with previous work Ramakrishna and Seideman (2005) we refer to their sum as the coherent part, C(t). Conversely, the terms connecting a rotational state with itself represent the populations of the rotational states and characterize the permanent alignment. Their sum is denoted the permanent part, P(t) Ramakrishna and Seideman (2005). Note that the mean alignment of the trace is well characterized by P(t), as this term provides the baseline value around which C(t) oscillates. Visual inspection of Fig. 1 indicates that the mean alignment is slightly decreasing with time. This behaviour is attributed to the well-understood fact that quadrupole coupling leads to changes in the projection M of a single rotational state due to angular “precession” of the coupled J and I vectors around F (see, e.g., Ref. (Bartlett et al., 2010)) and changes in the relative orientation of the individual nuclear spin vectors (which we denote “spin flipping”). Previous experiments on hyperfine-induced depolarization of single rotational states have shown that this effect (hereafter referred to as “precession-type depolarization”) leads to a general time-dependent decrease in molecular alignment (Code and Ramsey, 1971; Fano and Macek, 1973; Yan and Kummel, 1993; Gough and Crowe, 1993; Cool and Hemmi, 1995; Zhang et al., 1993; Wouters et al., 1997; Rudert et al., 1999; Sofikitis et al., 2007; Bartlett et al., 2009, 2010; Grygoryeva et al., 2017). Figure 2(a) shows C(t) superposed with the corresponding sum of coherence terms calculated without quadrupole coupling for comparison. It is seen that the quadrupole coupling also strongly affects the revival structures.

Figure 2: In (a) the sum of matrix elements comprising the coherent part of the theoretical quadrupole-coupled alignment trace, C(t), is shown in blue. The blue trace in (b) shows the same quantity with the frequency and phase shifts caused by the hyperfine energy splitting suppressed. Both (a) and (b) are superposed with equivalent traces calculated without quadrupole coupling (dotted black).

To understand the cause of the amplitude loss and substructure modification in C(t), the effects of precession-type depolarization were artificially suppressed in the model by resetting the projection quantum numbers in Equation 3 to those of the initial superposition when calculating the coherence terms (while treating everything else as if the “actual” projections are still in place). Surprisingly, this has very little effect on the shape of C(t), indicating that some previously unexplored mechanisms associated with the quadrupole coupling are causing the modulations in the signal. It is informative to show the quadrupole-coupled dynamics of a molecule starting in a single spin isomer/rotational state combination. In Fig.
3(a) the time evolution of an initial state is shown projected onto the coupled basis, where the (negligible) effect of inter-J coupling has been suppressed for clarity. The example in Fig. 3 illustrates how the quadrupole coupling will cause each rotational state from the initial superposition to spread out across a “J-manifold” of coupled states. Also, Fig. 3 shows how all states in a given J-manifold will have different F. Therefore, orthonormality of the spin states implies that each state in one J-manifold will combine with at most one state in another J-manifold (the one with the same F) to yield a nonzero contribution to the alignment trace.

Figure 3: (a) Occupancy of the initial state and the states it couples to, evolving over the timescale of the experiment. (b) Sketch of the state energy splittings and relative coupling strengths (in MHz). (c) Schematic classical interpretation of the system dynamics.

Given two or more superposed states coupled to the same spin isomer at t = 0, the energy splitting, coupling strength, and number of states associated with each J-manifold partially depends on J (attributable, e.g., to the appearance of J in Equation 2). Dissimilarities in the energy splitting between different J-manifolds introduce multiple frequency shifts into the components of C(t). The beating caused by the introduction of these new frequencies modulates the alignment trace. We investigated the nature of this frequency beating by artificially suppressing its effect in the model. This was done by eliminating the quadrupole-coupling-induced frequency and phase shifts in the complex arguments of the coefficients governing the time evolution of all states across all J-manifolds. In this way we calculate a modified trace, where the modification indicates that all frequency shifts introduced by the energy splitting have been removed while leaving the J-manifold population dynamics unchanged. A plot of this modified trace is shown in Figure 2(b). Comparison of the two traces shown in Figure 2 reveals that the frequency beating plays a significant, but not singular, role in attenuating the peak amplitudes. It is also remarkable that the higher-order fractional revivals in the modified trace do not exhibit the deviations and sign changes that are present in C(t). This demonstrates that the complex non-periodic substructures observed in the experimental trace can be solely attributed to the new frequencies introduced into C(t) by the hyperfine coupling. Note that the peak amplitudes in the modified trace still decrease compared to the uncoupled calculation. This is because, as stated earlier, the state population distributions of manifolds with different J will become increasingly dissimilar over time. These asynchronous distributional dynamics cause a net loss of amplitude due to the injective (one-to-at-most-one) way of combining states associated with different J-manifolds when calculating nonzero contributions to the trace. Experimentally, some of the observed peak attenuation may in principle be caused by molecules in vibrationally excited states. Our analysis shows, however, that the potential impact is minor sup . It has been remarked that precession-type depolarization in single states is most significant when the coupled J and I vectors have similar magnitudes (Bartlett et al., 2009). Conversely, our analysis suggests that the observed modulations in the revival structures of C(t) are not directly contingent on the magnitude of J or I. Therefore, we investigate what happens if rotational wave packets containing larger J are created.
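The role of the hyperfine-induced frequency shifts can be illustrated with a toy calculation (our own, not the paper's model): take the J ↔ J + 2 beat frequencies of a rigid rotor and split each coherence into a small manifold of lines whose frequencies are randomly offset by an amount of roughly hyperfine magnitude. The tens-of-MHz scale and the number of lines per manifold are assumptions chosen only to make the dephasing visible within a few ns:

import numpy as np

rng = np.random.default_rng(1)
B = 1.1e9                                  # placeholder rotational constant in Hz
t = np.linspace(0.0, 20e-9, 20000)         # several tens of revival periods
clean = np.zeros_like(t)
split = np.zeros_like(t)
for j in range(2, 20, 2):
    w = 2*np.pi*B*(4*j + 6)                # unperturbed J <-> J+2 beat frequency
    clean += np.cos(w*t)
    # each coherence fragments into several lines with random tens-of-MHz
    # offsets, mimicking F-level splittings (scale is illustrative only)
    dw = 2*np.pi*rng.normal(0.0, 3e7, size=(5, 1))
    split += np.mean(np.cos((w + dw)*t), axis=0)
# At early times split ~ clean; after a few ns the random offsets dephase the
# revivals of "split" into low-amplitude, non-periodic substructure.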
To this end, we simulated the effects of quadrupole coupling in molecules aligned with pulses substantially more intense than used in the current experiment. Increasing the pulse intensity leads to initial revival peaks with larger amplitudes, as well as a higher level of mean alignment. The early decrease in mean alignment observed in the experiment becomes less pronounced at higher intensities, and for all intensities the mean alignment is nearly constant at ns-scale delays. Additionally, it was found that for all intensities the revival structures always decay into what resembles low-amplitude unstructured “noise”; however, this decay takes longer for more intense pulses, as illustrated in Fig. 4 sup . These observations agree qualitatively with our expectations, i.e. the classical model of precession predicts that the projection will change less for wave packets with large J, whereas the scrambling/attenuating effects of the frequency beating and asynchronous dynamics will accumulate over time and eventually dominate the coherent component of the trace regardless of the magnitude of the J-values present in the wave packet.

Figure 4: Quadrupole-coupled alignment traces simulated out to several ns, with alignment pulse intensities set to (a) the experimental value and (b) a multiple of the experimental value.

In closing, we note that the alignment trace of any molecule containing heavy atoms (e.g. iodine) with large quadrupole coupling constants is expected to show similar deviations from the rigid rotor approximation when excited into a coherent superposition of rotational eigenstates.

HS acknowledges support from the European Research Council-AdG (Project No. 320459, DropletControl).

• Stapelfeldt and Seideman (2003) H. Stapelfeldt and T. Seideman, Rev. Mod. Phys. 75, 543 (2003).
• Seideman and Hamilton (2005) T. Seideman and E. Hamilton, Adv. At. Mol. Opt. Phys. 52, 289 (2005).
• Ohshima and Hasegawa (2010) Y. Ohshima and H. Hasegawa, Int. Rev. Phys. Chem. 29, 619 (2010).
• Fleischer et al. (2012) S. Fleischer, Y. Khodorkovsky, E. Gershnabel, Y. Prior, and I. S. Averbukh, Isr. J. Chem. 52, 414 (2012).
• Seideman (1999) T. Seideman, Phys. Rev. Lett. 83, 4971 (1999).
• Rosca-Pruna and Vrakking (2001) F. Rosca-Pruna and M. J. J. Vrakking, Phys. Rev. Lett. 87, 153902 (2001).
• Machholm and Henriksen (2001) M. Machholm and N. E. Henriksen, Phys. Rev. Lett. 87, 193001 (2001).
• Renard et al. (2003) V. Renard, M. Renard, S. Guérin, Y. T. Pashayan, B. Lavorel, O. Faucher, and H. R. Jauslin, Phys. Rev. Lett. 90, 153601 (2003).
• Dooley et al. (2003) P. W. Dooley, I. V. Litvinyuk, K. F. Lee, D. M. Rayner, M. Spanner, D. M. Villeneuve, and P. B. Corkum, Phys. Rev. A 68, 023406 (2003).
• Przystawik et al. (2012) A. Przystawik, A. Kickermann, A. Al-Shemmary, S. Düsterer, A. M. Ellis, K. von Haeften, M. Harmand, S. Ramakrishna, H. Redlin, L. Schroedter, M. Schulz, T. Seideman, N. Stojanovic, J. Szekely, F. Tavella, S. Toleikis, and T. Laarmann, Phys. Rev. A 85, 052503 (2012).
• Ramakrishna and Seideman (2005) S. Ramakrishna and T. Seideman, Phys. Rev. Lett. 95, 113001 (2005).
• Vieillard et al. (2008) T. Vieillard, F. Chaussard, D. Sugny, B. Lavorel, and O. Faucher, J. Raman Spectrosc. 39, 694 (2008).
• Owschimikow et al. (2010) N. Owschimikow, F. Königsmann, J. Maurer, P. Giese, A. Ott, B. Schmidt, and N. Schwentner, J. Chem. Phys. 133, 044311 (2010).
• Hartmann and Boulet (2012) J. M. Hartmann and C. Boulet, J. Chem. Phys. 136, 184302 (2012).
• Pentlehner et al. (2013) D. Pentlehner, J. H. Nielsen, A. Slenczka, K. Mølmer, and H. Stapelfeldt, Phys. Rev. Lett. 110, 093002 (2013).
• Shepperson et al. (2017a) B. Shepperson, A. A. Søndergaard, L. Christiansen, J. Kaczmarczyk, R. E. Zillich, M. Lemeshko, and H. Stapelfeldt, Phys. Rev. Lett. 118, 203203 (2017a).
• Gordy and Cook (1984) W. Gordy and R. L. Cook, Microwave Molecular Spectra, 3rd ed. (John Wiley & Sons Inc, 1984).
• Zare (1988) R. N. Zare, Angular Momentum (John Wiley & Sons Inc., 1988) pp. 243–251.
• Code and Ramsey (1971) R. F. Code and N. F. Ramsey, Phys. Rev. A 4, 1945 (1971).
• Fano and Macek (1973) U. Fano and J. H. Macek, Rev. Mod. Phys. 45, 553 (1973).
• Altkorn et al. (1985) R. Altkorn, R. N. Zare, and C. H. Greene, Mol. Phys. 55, 1 (1985).
• Yan and Kummel (1993) C. Yan and A. C. Kummel, J. Chem. Phys. 98, 6869 (1993).
• Gough and Crowe (1993) S. F. Gough and A. Crowe, J. Phys. B: At. Mol. Opt. Phys. 26, 2403 (1993).
• Cool and Hemmi (1995) T. A. Cool and N. Hemmi, J. Chem. Phys. 103, 3357 (1995).
• Zhang et al. (1993) J. Zhang, C. W. Riehn, M. Dulligan, and C. Wittig, J. Chem. Phys. 104, 7027 (1993).
• Wouters et al. (1997) E. R. Wouters, L. D. A. Siebbeles, K. L. Reid, B. Buijsse, and W. J. van der Zande, Chem. Phys. 218, 309 (1997).
• Rudert et al. (1999) A. D. Rudert, J. Martin, W.-B. Gao, J. B. Halpern, and H. Zacharias, J. Chem. Phys. 111, 9549 (1999).
• Sofikitis et al. (2007) D. Sofikitis, L. Rubio-Lago, M. R. Martin, D. J. A. Brown, N. C.-M. Bartlett, A. J. Alexander, R. N. Zare, and T. P. Rakitzis, J. Chem. Phys. 127, 144307 (2007).
• Bartlett et al. (2009) N. C.-M. Bartlett, D. J. Miller, R. N. Zare, A. J. Alexander, D. Sofikitis, and T. P. Rakitzis, Phys. Chem. Chem. Phys. 11, 142 (2009).
• Bartlett et al. (2010) N. C.-M. Bartlett, J. Jankunas, R. N. Zare, and J. A. Harrison, Phys. Chem. Chem. Phys. 12, 15689 (2010).
• Grygoryeva et al. (2017) K. Grygoryeva, J. Rakovský, O. Votava, and M. Fárník, J. Chem. Phys. 147, 013901 (2017).
• Shepperson et al. (2017b) B. Shepperson, A. S. Chatterley, A. A. Søndergaard, L. Christiansen, M. Lemeshko, and H. Stapelfeldt, J. Chem. Phys. 147, 013946 (2017b).
• Søndergaard et al. (2017) A. A. Søndergaard, B. Shepperson, and H. Stapelfeldt, J. Chem. Phys. 147, 013905 (2017).
• sup See the Supplemental Material for details.
• Linstrom and Eds. (2017) P. J. Linstrom and W. G. Mallard, eds., NIST Chemistry WebBook, NIST Standard Reference Database Number 69 (National Institute of Standards and Technology, Gaithersburg MD, 20899, http://webbook.nist.gov, 2017).
• Bacis et al. (1980) R. Bacis, M. Broyer, S. Churassy, J. Vergès, and J. Vigué, J. Chem. Phys. 73, 2641 (1980).
• Lubman et al. (1982) D. M. Lubman, C. T. Rettner, and R. N. Zare, J. Phys. Chem. 86, 1129 (1982).
• Maroulis (1992) G. Maroulis, Mol. Phys. 77, 1085 (1992).
• Maroulis et al. (1997) G. Maroulis, C. Makris, U. Hohm, and D. Goebel, J. Phys. Chem. A 101, 953 (1997).
• Even et al. (2000) U. Even, J. Jortner, D. Noy, N. Lavie, and C. Cossart-Magos, J. Chem. Phys. 112, 8068 (2000).
• Filsinger et al. (2009) F. Filsinger, J. Küpper, G. Meijer, L. Holmegaard, J. H. Nielsen, I. Nevo, J. L. Hansen, and H. Stapelfeldt, J. Chem. Phys. 131, 064309 (2009).
• Shu et al. (2017) C.-C. Shu, E. F. Thomas, and N. E. Henriksen, Chem. Phys. Lett. 683, 234 (2017).
• Yokozeki and Muenter (1980) A. Yokozeki and J. S. Muenter, J. Chem. Phys. 72, 3796 (1980).
• McQuarrie (1976) D. A. McQuarrie, Statistical Mechanics (Harper & Row, 1976).
• Cook and De Lucia (1971) R. L. Cook and F. C. De Lucia, Am. J. Phys. 39, 1433 (1971).
• Bisgaard (2006) C. Z. Bisgaard, Laser Induced Alignment, Ph.D. thesis, Aarhus University (2006).
• Søndergaard (2016) A. A. Søndergaard, Understanding Laser-Induced Alignment and Rotation of Molecules Embedded in Helium Nanodroplets, Ph.D. thesis, Aarhus University (2016).
Wolfram Blog
An Exact Value for the Planck Constant: Why Reaching It Took 100 Years
May 19, 2016 — Michael Trott, Chief Scientist, Wolfram|Alpha Scientific Content
Blog communicated on behalf of Jean-Charles de Borda.

Some thoughts for World Metrology Day 2016

Please allow me to introduce myself
I’m a man of precision and science
I’ve been around for a long, long time
Stole many a man’s pound and toise
And I was around when Louis XVI
Had his moment of doubt and pain
Made damn sure that metric rules
Through platinum standards made forever
Pleased to meet you
Hope you guess my name

Introduction and about me

In case you can’t guess: I am Jean-Charles de Borda, sailor, mathematician, scientist, and member of the Académie des Sciences, born on May 4, 1733, in Dax, France. Two weeks ago would have been my 283rd birthday. This is me:

Jean-Charles de Borda

In my hometown of Dax there is a statue of me. Please stop by when you visit. In case you do not know where Dax is, here is a map:

Map of Dax and statue of Jean-Charles de Borda

In Europe when I was a boy, France looked basically like it does today. We had a bit less territory on our eastern border. On the American continent, my country owned a good fraction of land:

France and French territory in America in 1733

I led a diverse earthly life. At 32 years old I carried out a lot of military and scientific work at sea. As a result, in my forties I commanded several ships in the American War of Independence. Most of the rest of my life I devoted to the sciences. But today nobody even knows where my grave is, as my physical body died on February 19, 1799, in Paris, France, in the upheaval of the French Revolution. (Of course, I know where it is, but I can’t communicate it anymore.) My name is the twelfth listed on the northeast side of the Eiffel Tower:

Borda listed on the northeast side of the Eiffel Tower

Over the centuries many of my fellow Frenchmen who joined me up here told me that I deserved a place in the Panthéon. But you will not find me there, nor at the Père Lachaise, Montparnasse, or Montmartre cemeteries. But this is not why I still cannot rest in peace. I am a humble man; it is the kilogram that keeps me up at night. But soon I will be able to rest in peace at night for all time and approach new scientific challenges. Let me tell you why I will soon find a good night’s sleep.

All my life, I was into mathematics, geometry, physics, and hydrology. And overall, I loved to measure things. You might have heard of substitution weighing (also called Borda’s method)—yes, this was my invention, as was the Borda count method. I also substantially improved the repeating circle. Here is where the story starts. The repeating circle was crucial in making a high-precision determination of the size of the Earth, which in turn defined the meter. (A good discussion of my circle can be found here.)

Repeating circle

I lived in France when it was still a monarchy. Times were difficult for many people—especially peasants—partially because trade and commerce were difficult due to the lack of uniform measures all over the country. If you enjoy reading about history, I highly recommend Kula’s Measures and Men to understand the weights and measurements situation in France in 1790. The state of the weights and measures was similar in other countries; see for instance Johann Georg Tralles’ report about the situation in Switzerland. In August 1790, I was made the chairman of the Commission of Weights and Measures as a result of a 1789 initiative from Louis XVI.
(I still find it quite miraculous that 1,000 years after Charlemagne’s initiative to unify weights and measures, the next big initiative in this direction would be started.) Our commission created the metric system that today is the International System of Units, often abbreviated as SI (le Système international d’unités in French). In the commission were, among others, Pierre-Simon Laplace (think the Laplace equation), Adrien-Marie Legendre (Legendre polynomials), Joseph-Louis Lagrange (think Lagrangian), Antoine Lavoisier (conservation of mass), and the Marquis de Condorcet. (I always told Adrien-Marie that he should have some proper portrait made of him, but he always said he was too busy calculating. But for 10 years now, the politician Louis Legendre’s portrait is no longer used in math books in place of Adrien-Marie’s. Over the last decades, Adrien-Marie befriended Jacques-Louis David, and Jacques-Louis has made a whole collection of paintings of Adrien-Marie; unfortunately, mortals will never see them.) Lagrange, Laplace, Monge, Condorcet, and I were on the original team. (And, in the very beginning, Jérôme Lalande was also involved; later, some others were as well, such as Louis Lefèvre-Gineau.)

Portraits of Pierre-Simon Laplace, Adrien-Marie Legendre, Joseph-Louis Lagrange, Antoine Lavoisier, and Marquis de Condorcet

Three of us (Monge, Lagrange, and Condorcet) are today interred or commemorated at the Panthéon. It is my strong hope that Pierre-Simon is one day added; he really deserves it. As I said before, things were difficult for French citizens in this era. Laplace wrote:

The prodigious number of measures in use, not only among different people, but in the same nation; their whimsical divisions, inconvenient for calculation, and the difficulty of knowing and comparing them; finally, the embarrassments and frauds which they produce in commerce, cannot be observed without acknowledging that the adoption of a system of measures, of which the uniform divisions are easily subjected to calculation, and which are derived in a manner the least arbitrary, from a fundamental measure, indicated by nature itself, would be one of the most important services which any government could confer on society. A nation which would originate such a system of measures, would combine the advantage of gathering the first fruits of it with that of seeing its example followed by other nations, of which it would thus become the benefactor; for the slow but irresistible empire of reason predominates at length over all national jealousies, and surmounts all the obstacles which oppose themselves to an advantage, which would be universally felt.

All five of the mathematicians (Monge, Lagrange, Laplace, Legendre, and Condorcet) have made historic contributions to mathematics. Their names are still used for many mathematical theorems, structures, and operations:

Monge, Lagrange, Laplace, Legendre, and Condorcet's contributions to mathematics

In 1979, Ruth Inez Champagne wrote a detailed thesis about the influence of my five fellow citizens on the creation of the metric system. For Legendre’s contribution especially, see C. Doris Hellman’s paper. Today it seems to me that most mathematicians no longer care much about units and measures, and that physicists are the driving force behind advancements in units and measures. But I did like Theodore P. Hill’s arXiv paper about the method of conflations of probability distributions, which allows one to consolidate knowledge from various experiments.
(Yes, before you ask, we do have instant access to arXiv up here. Actually, I would say that the direct arXiv connection has been the greatest improvement here in the last millennium.)

Our task was to make standardized units of measure for time, length, volume, and mass. We needed measures that were easily extensible, and could be useful for both tiny things and astronomic scales. The principles of our approach were nicely summarized by John Quincy Adams, Secretary of State of the United States, in his 1821 book Report upon the Weights and Measures.

Excerpt from John Quincy Adams' Report upon Weights and Measures

Originally we (we being the metric men, as we call ourselves up here) had suggested just a few prefixes: kilo-, deca-, hecto-, deci-, centi-, milli-, and the no-longer-used myria-. In some old books you can find the myria- units. We had the idea of using prefixes quite early in the process of developing the new measurements. Here are our original proposals from 1794:

Excerpts of original proposals from 1794

Side note: in my time, we also used the demis and the doubles, such as a demi-hectoliter (= 50 liters) or a double dekaliter (= 20 liters). As inhabitants of the twenty-first century know, times, lengths, and masses are measured in physics, chemistry, and astronomy over ranges spanning more than 50 orders of magnitude. And the units we created in the tumultuous era of the French Revolution stood the test of time:

Orders of magnitude plots for length and area
Orders of magnitude plots for length
Orders of magnitude plot for area

In the future, the SI might need some more prefixes. In a recent LIGO discovery, the length of the interferometer arms changed on the order of 10 yoctometers. Yoctogram-resolution mass sensors exist. One yoctometer equals 10⁻²⁴ meter. Mankind can already measure tiny forces on the order of zeptonewtons. On the other hand, astronomy needs prefixes larger than 10²⁴. One day, these prefixes might become official.

Proposed prefixes larger than 10²⁴

I am a man of strict rules, and it drives me nuts when I see people in the twenty-first century not obeying the rules for using SI prefixes. Recently I saw somebody writing on a whiteboard that a year is pretty much exactly 𝜋 dekamegaseconds (𝜋 daMs):

1 year approximately pi dekamegaseconds

While it’s a good approximation (only 0.4% off), when will this person learn that one shouldn’t concatenate prefixes? The technological progress of mankind has occurred quickly in the last two centuries. And mega-, giga-, tera- or nano-, pico-, and femto- are common prefixes in the twenty-first century. Measured in meters per second, here is the probability distribution of speed values used by people. Some speeds (like speed limits, the speed of sound, or the speed of light) are much more common than others, but many local maxima can be found in the distribution function:

Probability distribution of speed values used by people

Here is the report we delivered in March of 1791 that started the metric system and gave the conceptual meaning of the meter and the kilogram, signed by myself, Lagrange, Laplace, Monge, and Condorcet (now even available through what the modern world calls a “digital object identifier,” or DOI, like 10.3931/e-rara-28950):

Report from 1791 that started the metric system and gave conceptual meaning of the meter and kilogram

Today most people think that base 10 and the meter, second, and kilogram units are intimately related.
But only on October 27, 1790, did we decide to use base 10 for subdividing the units. We were seriously considering a base-12 subdivision, because divisibility by 2, 3, 4, and 6 is a nice feature for trading objects. It is clear today, though, that we made the right choice. Lagrange’s insistence on base 10 was the right thing. At the time of the French Revolution, we made no compromises. On November 5, 1792, I even suggested changing clocks to a decimal system. (D’Alembert had suggested this in 1754; for the detailed history of decimal time, see this paper.) Mankind was not ready yet; maybe in the twenty-first century decimal clocks and clock readings will finally be recognized as much better than 24 hours, 60 minutes, and 60 seconds. I loved our decimal clocks—they were so beautiful. So it’s a real surprise to me today that mankind still divides the right angle into 90 degrees. In my repeating circle, I was dividing the right angle into 100 grades.

We wanted to make the new (metric) units truly equal for all people, not base them, for instance, on the length of the forearm of a king. Rather, “For all time, for all people” (“À tous les temps, à tous les peuples”). Now, in just a few years, this dream will be achieved. And I am sure there will come the day when Mendeleev’s prediction (“Let us facilitate the universal spreading of the metric system and thus assist the common welfare and the desired future rapprochement of the peoples. It will come not yet, slowly, but surely.”) will come true even in the three remaining countries of the world that have not yet gone metric:

Countries that have not gone metric

The SI units have been legal for trade in the USA since the mid-twentieth century, when United States customary units became derived from the SI definitions of the base units. Citizens can choose which units they want for trade. We also introduced the decimal subdivision of money, and our franc was in use from 1793 to 2002. At least today all countries divide their money on the basis of base 10—no coins with label 12 are in use anymore. Here is the coin label breakdown by country:

Coin label breakdown by country

We took the “all” in “all people” quite seriously, and worked together with our archenemy Britain and the new United States (through Thomas Jefferson personally) to make a new system of units for all the major countries of my time. But, as is still so often the case today, politics won over reason. I died on February 19, 1799, just a few months before the completion of our group’s efforts. On June 22, 1799, my dear friend Laplace gave a speech about the finished efforts to build new units of length and mass before the new prototypes were delivered to the Archives of the Republic (where they are still today). In case the reader is interested in my eventful life, Jean Mascart wrote a nice biography about me in 1919, and it is now available as a reprint from the Sorbonne.

From the beginnings of the metric system to today

Two of my friends, Jean Baptiste Joseph Delambre and Pierre Méchain, were sent out to measure distances in France and Spain from mountain to mountain to define the meter as one ten-millionth of the distance from the North Pole to the equator of the Earth. Historically, I am glad the mission was approved. Louis XVI was already under arrest when he approved the financing of the mission.
My dear friend Lavoisier called their task “the most important mission that any man has ever been charged with.”

Pierre Méchain and Jean Baptiste Joseph Delambre

If you haven’t done so, you must read the book The Measure of All Things by Ken Alder. There is even a German movie about the adventures of my two old friends. Equipped with a special instrument that I had built for them, they did the work that resulted in the meter. We wanted the length of the meter to be one ten-millionth of the length of the half-meridian through Paris from pole to equator, and I still think today that this is conceptually a beautiful definition. That the Earth isn’t quite as round as we had hoped for we did not know at the time, and this resulted in a small, regrettable error of 0.2 mm due to a miscalculation of the flattening of the Earth. Here is the length of the half-meridian through Paris, expressed through meters along an ellipsoid that approximates the Earth:

If they had taken elevation into account (which they did not do—Delambre and Méchain would have had to travel the whole meridian to catch every mountain and hill!), and had used 3D coordinates (meaning including the elevation of the terrain) every few kilometers, they would have ended up with a meter that was 0.4 mm too short:

Length of the meridian meter when taking elevation into account

Here is the elevation profile along the Paris meridian:

Elevation along the Paris meridian

And the meter would be another 0.9 mm longer if measured with a yardstick the length of a few hundred meters:

Length of the meridian meter when taking detailed elevation into account

Because of the fractality of the Earth’s surface, an even smaller yardstick would have given an even longer half-meridian. It’s more realistic to follow the sea-level height. The difference between the length of the sea-level meridian meter and the ellipsoid approximation meter is just a few micrometers:

Difference between the length of the sea-level meridian and the ellipsoid approximation meter

But at least the meridian had to go through Paris (not London, as some British scientists of my time proposed). But anyway, the meridian length was only a stepping stone to make a meter prototype. Once we had the meter prototype, we didn’t have to refer to the meridian anymore. Here is a sketch of the triangulation carried out by Pierre and Jean Baptiste in their adventurous six-year expedition. Thanks to the internet and various French digitization projects, the French-speaking reader interested in metrology and history can now read the original results online and reproduce our calculations:

Reproducing the triangulation carried out by Pierre and Jean Baptiste

The part of the meridian through Paris (and especially through the Paris Observatory, marked in red) is today marked with the Arago markers—do not miss them during your next visit to Paris! François Arago remeasured the Paris meridian. After Méchain joined me up here in 1804, Laplace got the go-ahead (and the money) from Napoléon to remeasure the meridian and to verify and improve our work:

Plotting the meridian through Paris and the Arago markers
Plotting the meridian through Paris

The second we derived from the length of a year. And the kilogram as a unit of mass we wanted to (and did) derive from a liter of water. If any liquid is special, it is surely water. Lavoisier and I had many discussions about the ideal temperature. The two temperatures that stand out are 0 °C and 4 °C.
Originally we were thinking about 0 °C, as ice water is easy to prepare. But because of the maximal density of water at 4 °C, we later thought that would be the better choice. The switch to 4 °C was suggested by Louis Lefèvre-Gineau. The liter as a volume in turn we defined as the cube of one-tenth of a meter. As it turns out, compared with high-precision measurements of distilled water, 1 kg equals the mass of 1.000028 dm³ of water. The interested reader can find many more details of the process of the water measurements here, and about making the original metric system here. A shorter history in English can be found in the recent book by Williams and the ten-part series by Chisholm.

I don’t want to brag, but we also came up with the name “meter” (derived from the Greek metron and the Latin metrum), which we suggested on July 11 of 1792 as the name of the new unit of length. And then we had the are (= 100 m²) and the stere (= 1 m³). And I have to mention this for historical accuracy: until I entered the heavenly spheres, I always thought our group was the first to carry out such an undertaking. How amazed and impressed I was when shortly after my arrival up here, I-Hsing and Nankung Yiieh introduced themselves to me and told me about their expedition from the years 721 to 725, more than 1,000 years before ours, to define a unit of length.

I am so glad we defined the meter this way. Originally the idea was to define the meter through a pendulum of proper length with a period of one second. But I didn’t want any potential change in the second to affect the length of the meter. While dependencies will be unavoidable in a complete unit system, they should be minimized. Basing the meter on the Earth’s shape and the second on the Earth’s movement around the Sun seemed like a good idea at the time. Actually, it was the best idea that we could technologically realize at this time. We did not know how tides and time changed the shape of the Earth, or how continents drift apart. But we believed in the future of mankind and in ever-increasing measurement precision; we just did not know what concretely would change. And it all started with our initial steps of precisely measuring distances in France. Today we have high-precision geopotential maps as high-order series of Legendre polynomials:

GeogravityModelData for the astronomical observatory in Paris

With great care, the finest craftsmen of my time melted platinum, and we forged a meter bar and a kilogram. It was an exciting time. Twice a week I would stop by Janety’s place when he was forging our first kilograms. Melting and forming platinum was still a very new process. And Janety, Louis XVI’s goldsmith, was a true master of forming platinum—to be precise, a spongelike eutectic made of platinum and arsenic. Just a few years earlier, on June 6, 1782, Lavoisier showed the melting of platinum in a hydrogen-oxygen flame to (the future) Tsar Paul I at a garden party at Versailles; Tsar Paul I was visiting Marie Antoinette and Louis XVI. And Étienne Lenoir made our platinum meter, and Jean Nicolas Fortin our platinum kilogram. For the reader interested in the history of platinum, I recommend McDonald’s and Hunt’s book. Platinum is a very special metal; it has a high density and is chemically very inert. It is also not as soft as gold. The best kilogram realizations today are made from a platinum-iridium mixture (10% iridium), as adding iridium to platinum does improve its mechanical properties.
Here is a comparison of some physical characteristics of platinum, gold, and iridium:

Comparison of physical characteristics of platinum, gold, and iridium

This sounds easy, but at the time the best scientists spent countless hours calculating and experimenting to find the best materials, the best shapes, and the best conditions to define the new units. But both the new meter bar and the new kilogram cylinder were macroscopic bodies. And the meter has two markings of finite width. All macroscopic artifacts are difficult to transport (we developed special travel cases); they change by very small amounts over a hundred years through usage, absorption, desorption, heating, and cooling. In the amazing technological progress of the nineteenth and twentieth centuries, measuring mass and length with precisions better than one in a billion has become possible. And measuring time can even be done a billion times better than that.

I still vividly remember when, after we had made and delivered the new meter and the mass prototypes, Lavoisier said, “Never has anything grander and simpler and more coherent in all its parts come from the hands of man.” And I still feel so today. Our goal was to make units that truly belonged to everyone. “For all time, for all people” was our motto. We put copies of the meter all over Paris to let everybody know how long it was. (If you have not done so, next time you visit Paris, make sure to visit the mètre étalon near the Luxembourg Palace.) Here is a picture I recently found, showing an interested German tourist studying the history of one of the few remaining mètres étalons:

It was an exciting time (even if I was no longer around when the committee’s work was done). Our units served many European countries well into the nineteenth and large parts of the twentieth century. We made the meter, the second, and the kilogram. Four more base units (the ampere, the candela, the mole, and the kelvin) have been added since our work. And with these extensions, the metric system has served mankind very well for 200+ years. How the metric system took off after 1875, the year of the Metre Convention, can be seen by plotting how often the words kilogram, kilometer, and kilohertz appear in books:

How often the words kilogram, kilometer, and kilohertz appear in books

We defined only the meter, the second, the liter, and the kilogram. Today many more named units belong to the SI: becquerel, coulomb, farad, gray, henry, hertz, joule, katal, lumen, lux, newton, ohm, pascal, siemens, sievert, tesla, volt, watt, and weber. Here is a list of the dimensional relations (no physical meaning implied) between the derived units:

List of the dimensional relations between the derived units

Many new named units have been added since my death, often related to electrical and magnetic phenomena that were not yet known when I was alive. And although I am a serious person in general, I am often open to a joke or a pun—I just don’t like when fun is made of units. Like Don Knuth’s Potrzebie system of units, with units such as the potrzebie, ngogn, blintz, whatmeworry, cowznofski, vreeble, hoo, and hah. Not only are their names nonsensical, but so are their values:

Potrzebies and blintz units

Or look at Max Pettersson’s proposal for units for biology.
The names of the units and the prefixes might sound funny, but for me units are too serious a subject to make fun of:

Max Pettersson's proposal for units for biology

These unit names do not even rhyme with any of the proper names:

Words that rhyme with meter
Words that rhyme with mile

To reiterate, I am all in favor of having fun, even with units, but it must be clear that it is not meant seriously:

Converting humorous units of measurement

Or explicitly nonscientific units, such as helens for beauty, puppies for happiness, or darwins for fame, are fine with me:

Measuring beauty in helens
Measuring happiness in puppies
Measuring fame in darwins

I am so proud that the SI units are not just dead paper symbols, but tools that govern the modern world in an ever-increasing way. Although I am not a comics guy, I love the recent promotion of the base units to superheroes by the National Institute of Standards and Technology:

Base units to superheroes

Note that, to honor the contributions of the five great mathematicians to the metric system, the curves in the rightmost column of the unit-representing characters are given as mathematical formulas; e.g. for Dr. Kelvin we have the following purely trigonometric parametrization:

Purely trigonometric parametrization of Dr. Kelvin

So we can plot Dr. Kelvin:

Plotting Dr. Kelvin

Having the characters in parametric form is handy: when my family has reunions, the little ones’ favorite activity is coloring SI superheroes. I just print the curves, and then the kids can go crazy with the crayons. (I got this idea a couple years ago from a coloring book by the NCSA.)

Printing randomly colored curves

And whenever a new episode comes out, all us “measure men” (George Clooney, if you see this: hint, hint for an exciting movie set in the 1790s!) come together to watch it. As you can imagine, the last episode is our all-time favorite. Rumor has it up here that there will be a forthcoming book, The Return of the Metrologists (2018 would be a perfect year), complementing the current book. And I am glad to see that the importance of measuring and the underlying metric system is in modern times honored through World Metrology Day on May 20, which is today.

In my lifetime, most of what people measured were goods: corn, potatoes, and other foods, wine, fabric, and firewood, etc. So all my country really needed were length, area, volume, angle, and, of course, time units. I always knew that the importance of measuring would increase over time. But I find it quite remarkable that only 200 years after I entered the heavenly spheres, hundreds and hundreds of different physical quantities are measured. Today even the International Organization for Standardization (ISO) lists, defines, and describes what physical quantities to use. Below is an image of an interactive Demonstration (download the notebook at the bottom of this post to interact with it) showing graphically the dimensions of physical quantities for subsets of selectable dimensions. First select two or three dimensions (base units). Then the resulting graphics show spheres with sizes proportional to the number of different physical quantities with these dimensions. Mouse over the spheres in the notebook to see the dimensions.
For example, with “meter”, “second”, and “kilogram” checked, the diagram shows the units of physical quantities like momentum (kg¹ m¹ s⁻¹) or energy (kg¹ m² s⁻²):

Physical quantities of given dimensions

Here is an excerpt of the code that I used to make these graphics. These are all physical quantities that have dimensions L² M¹ T⁻¹. The last one is a slightly exotic electrodynamic observable.

Excerpt of code from physical quantities of given dimensions demonstration

Today, with smartphones and wearable devices, a large number of physical quantities are measured all the time by ordinary people. “Measuring rules,” as I like to say. Or, as my (since 1907) dear friend William Thomson liked to say:

Here is a graphical visualization of the physical quantities that are measured by the most common measurement devices:

Graphical visualization of the physical quantities that are measured by the most common measurement devices

Electrical and magnetic phenomena were just starting to become popular when I was around. Physical quantities that are expressed through the electric current only became popular much later:

Electrical and magnetic phenomena timeline

I remember how excited I was when in the second half of the nineteenth century and the beginning of the twentieth century the various physical quantities of electromagnetism were discovered and their connections were understood. (And, not to be forgotten: the recent addition of memristance.) Here is a diagram showing the most important electric/magnetic physical quantities q_k that have a relation of the form q_k = q_i q_j with each other:

Diagram showing the most important electric/magnetic physical quantities

On the other hand, I was sure that temperature-related phenomena would soon be fully understood after my death. And indeed, just 25 years later, Carnot proved that heat and mechanical work are equivalent. Now I also know about time dilation and length contraction due to Einstein’s theories. But mankind still does not know if a moving body is colder or warmer than a stationary body (or if they have the same temperature). I hear every week from Josiah Willard about the related topic of negative temperatures. And recently, he was so excited about a value for a maximal temperature for a given volume V expressed through fundamental constants:

Maximal temperature for a given volume V expressed through fundamental constants

For one cubic centimeter, the maximal temperature is about 5 PK:

Maximal temperature for one cubic centimeter

The rise of the constants

Long after my physical death, some of the giants of physics of the nineteenth and early twentieth centuries, foremost among them James Clerk Maxwell, George Johnstone Stoney, and Max Planck (and Gilbert Lewis), were considering units for time, length, and mass that were built from unchanging properties of microscopic particles and the associated fundamental constants of physics (speed of light, gravitational constant, electron charge, Planck constant, etc.):

James Clerk Maxwell, George Johnstone Stoney, and Max Planck

Maxwell wrote in 1870:

Yet, after all, the dimensions of our Earth and its time of rotation, though, relative to our present means of comparison, very permanent, are not so by any physical necessity.
The earth might contract by cooling, or it might be enlarged by a layer of meteorites falling on it, or its rate of revolution might slowly slacken, and yet it would continue to be as much a planet as before. But a molecule, say of hydrogen, if either its mass or its time of vibration were to be altered in the least, would no longer be a molecule of hydrogen. When we find that here, and in the starry heavens, there are innumerable multitudes of little bodies of exactly the same mass, so many, and no more, to the grain, and vibrating in exactly the same time, so many times, and no more, in a second, and when we reflect that no power in nature can now alter in the least either the mass or the period of any one of them, we seem to have advanced along the path of natural knowledge to one of those points at which we must accept the guidance of that faith by which we understand that “that which is seen was not made of things which do appear.”

At the time when Maxwell wrote this, I was already a man’s lifetime up here, and when I read it I applauded him (although at this time I still had some skepticism toward all ideas coming from Britain). I knew that this was the path forward to immortalize the units we forged in the French Revolution.

There are many physical constants. And they are not all known to the same precision. Here are some examples:

Examples of physical constants

Converting the values of constants with uncertainties into arbitrary-precision numbers is convenient for the following computations. The connection between the intervals and the number of digits is given as follows: the arbitrary-precision number that corresponds to v ± δ is the number v with precision −log₁₀(2δ/v). Conversely, given an arbitrary-precision number (numbers are always convenient for computations), we can recover the v ± δ form:

Converting arbitrary precision numbers to intervals

After the exactly defined constants, the Rydberg constant, with 11 known digits, stands out as a very precisely known constant. At the other end of the spectrum is G, the gravitational constant. At least once a month Henry Cavendish stops at my place with yet another idea on how to build a tabletop device to measure G. Sometimes his ideas are based on cold atoms, sometimes on superconductors, and sometimes on high-precision spheres. If he could still communicate with the living, he would write a comment to Nature every week. A little over a year ago Henry was worried that he should have done his measurements in winter as well as in summer, but he was relieved to see that no seasonal dependence of G’s value seems to exist. The preliminary proposal deadline for the NSF’s Big G Challenge was just four days ago. I think sometime next week I will take a heavenly peek at the program officer’s preselected experiments.

There are more physical constants, and they are not all equal. Some are more fundamental than others, but for reasons of length I don’t want to get into a detailed discussion about this topic now. A good start for interested readers is Lévy-Leblond’s papers (also here), as well as this paper, this paper, and the now-classic Duff–Okun–Veneziano paper. For the purpose of making units from physical constants, the distinction between the various classes of physical constants is not so relevant. The absolute values of the constants and their relations to heaven, hell, and Earth are an interesting subject on their own. It is a hot topic of discussion for mortals (also see this paper), as well as up here. Some numerical coincidences (?)
are just too puzzling:

Absolute values of the constants and their relations to heaven, hell, and Earth

Of course, using modern mathematical algorithms, such as lattice reduction, we can indulge in the numerology of the numerical part of physical constants:

Numerology of the numerical part of physical constants

For instance, how can we form 𝜋 out of fundamental constant products?

Forming pi out of fundamental constant products

Or let’s look at my favorite number, 10, the mathematical basis of the metric system:

Forming 10 out of fundamental constant products

And given a set of constants, there are many ways to form a quantity of a given unit. There are so many physical constants in use today, you have to be really interested to keep up on them. Here are some of the lesser-known constants:

Some of the lesser-known physical constants

Physical constants appear in so many equations of modern physics. Here is a selection of 100 simple physics formulas that contain the fundamental constants:

100 simple physics formulas that contain the fundamental constants

Of course, more complicated formulas also contain the physical constants. For instance, the gravitational constant appears (of course!) in the formulas for the gravitational potentials of various objects, e.g. for the potential of a line segment and of a triangle:

Gravitational constant appears in formula of gravitational potentials of various objects

My friend Maurits Cornelis Escher loves these kinds of formulas. He recently showed me some variations of a few of his 3D pictures that show the equipotential surfaces of all objects in the pictures, made by triangulating all surfaces and then using the above formula—like his Escher solid. The graphic shows a cut version of two equipotential surfaces:

Equipotential surfaces of all objects in the pictures by triangulating all surfaces

I frequently stop by at Maurits Cornelis’, and often he has company—usually, it is Albrecht Dürer. The two love to play with shapes, surfaces, and polyhedra. They deform them, Kelvin-invert them, everse them, and more. Albrecht also likes the technique of smoothing with gravitational potentials, but he often does this with just the edges. Here is what a Dürer solid’s equipotential surfaces look like:

Dürer solid's equipotential surfaces

And here is a visualization of formulas that contain c^α h^β G^γ in the exponent space α–β–γ. The size of the spheres is proportional to the number of formulas containing c^α·h^β·G^γ; mousing over the balls in the attached notebook shows the actual formulas. We treat positive and negative exponents identically:

Visualization of formulas that contain c^alpha h^beta G^gamma in the exponent space of alpha-beta-gamma

One of my all-time favorite formulas is for the quantum-corrected gravitational force between two bodies, which contains my three favorite constants: the speed of light, the gravitational constant, and the Planck constant:

Quantum-corrected gravitational force between two bodies

Another of my favorite formulas is the one for the entropy of a black hole. It contains the Boltzmann constant in addition to c, h, and G:

Entropy of a black hole

And, of course, there is the second-order correction to the speed of light in a vacuum in the presence of an electric or magnetic field due to photon-photon scattering (ignoring a polarization-dependent constant). Even in very large electric and magnetic fields, the changes in the speed of light are very small.
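Returning to the constant-product numerology above: the lattice-reduction approach is more powerful, but a brute-force exponent search already conveys the flavor. The sketch below (our own, in Python rather than the Wolfram Language used in the original post) scans small integer exponents of a few constants, deliberately ignores the units, and keeps products whose mantissa is close to 𝜋. The constant values are rounded CODATA-era numbers used only for illustration:

import itertools, math

# SI numerical parts only; units are deliberately ignored (pure numerology)
consts = {"c": 2.99792458e8, "h": 6.62607e-34, "G": 6.674e-11, "e": 1.602177e-19}
names = sorted(consts)

hits = []
for exps in itertools.product(range(-2, 3), repeat=len(names)):
    if not any(exps):
        continue                                  # skip the trivial product
    log10 = sum(k * math.log10(consts[n]) for k, n in zip(exps, names))
    mantissa = 10 ** (log10 - math.floor(log10))  # numerical part in [1, 10)
    if abs(mantissa / math.pi - 1) < 0.01:        # within 1% of pi
        hits.append((dict(zip(names, exps)), mantissa))

for combo, m in sorted(hits, key=lambda h: abs(h[1] - math.pi)):
    print(combo, round(m, 5))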
Even in very large electric and magnetic fields, the changes in the speed of light are very small: In my lifetime, we did not yet understand the physical world enough to have come up with the idea of natural units. That took until 1874, when Stoney proposed for the first time natural units in his lecture to the British Science Association. And then, in his 1906–07 lectures, Planck made use of the now-called Planck units extensively, already introduced in his famous 1900 article in Annalen der Physik. Unfortunately, both these unit systems use the gravitational constant G prominently. It is a constant that we today cannot measure very accurately. As a result, also the values of the Planck units in the SI have only about four digits: Use of Planck units These units were never intended for daily use because they are either far too small or far too large compared to the typical lengths, areas, volumes, and masses that humans deal with on a daily basis. But why not base the units of daily use on such unchanging microscopic properties? (Side note: The funny thing is that in the last 20 years Max Planck again doubts if his constant h is truly fundamental. He had hoped in 1900 to derive its value from a semi-classical theory. Now he hopes to derive it from some holographic arguments. Or at least he thinks he can derive the value of h/kB from first principles. I don’t know if he will succeed, but who knows? He is a smart guy and just might be able to.) Many exact and approximate relations between fundamental constants are known today. Some more might be discovered in the future. One of my favorites is the following identity—within a small integer factor, is the value of the Planck constant potentially related to the size of the universe? Is the value of the Planck constant potentially related to the size of the universe? Another one is Beck’s formula, showing a remarkable coincidence (?): Beck's formula But nevertheless, in my time we never thought it would be possible to express the height of a giraffe through the fundamental constants. But how amazed I was nearly ten years ago, when looking through the newly arrived arXiv preprints to find a closed form for the height of the tallest running, breathing organism derived by Don Page. Within a factor of two he got the height of a giraffe (Brachiosaurus and Sauroposeidon don’t count because they can’t run) derived in terms of fundamental constants—I find this just amazing: Typical height of a giraffe I should not have been surprised, as in 1983 Press, Lightman, Peierls, and Gold expressed the maximal running speed of a human (see also Press’ earlier paper): Maximal running speed of a human In the same spirit, I really liked Burrows’ and Ostriker’s work on expressing the sizes of a variety of astronomical objects through fundamental constants only. For instance, for a typical galaxy mass we obtain the following expression: Expression for a typical galaxy mass This value is within a small factor from the mass of the Milky Way: Mass of the Milky Way But back to units, and fast forward another 100+ years to the second half of the twentieth century: the idea of basing units on microscopic properties of objects gained more and more ground. 
Since 1967, the second has been defined through 9,192,631,770 periods of the light from the transition between the two hyperfine levels of the ground state of cesium-133, and the meter has been defined since 1983 as the distance light travels in one second when we define the speed of light as the exact quantity 299,792,458 meters per second. To be precise, this definition is to be realized at rest, at a temperature of 0 K, and at sea level, as motion, temperature, and the gravitational potential influence the oscillation period and (proper) time. Ignoring the sea-level condition can lead to significant measurement errors; the center of the Earth is about 2.5 years younger than its surface due to differences in the gravitational potential.

Now, these definitions of the units second and meter are truly equal for all people. Equal not just for people on Earth right now, but also in the future, and far, far away from Earth for any alien. (One day, the 9,192,631,770 periods of cesium might be replaced by a larger number of periods of another element, but that will not change its universal character.)

But if we wanted to ground all units in physical constants, which ones should we choose? There are often many, many ways to express a base unit through a set of constants. Using the constants from the table above, there are thirty (thirty!) ways to combine them to make a mass dimension:

Thirty ways to combine constants to make a mass dimension

Because of the varying precision of the constants, the combinations are also of varying precision (and, of course, of different numerical values):

Combinations are of varying precision

Now the question is which constants should be selected to define the units of the metric system. Many aspects, from precision to practicality to overall coherence (meaning there is no need for various prefactors in equations to compensate for unit factors), must be kept in mind. We want our formulas to look like F = m a, rather than containing explicit numbers, such as in the Thanksgiving turkey cooking time formulas (assuming a spherical turkey):

Turkey cooking time formulas

Or in the PLANK formula (Max hates this name) for the calculation of indicated horsepower:

Calculation of indicated horsepower

Here in the clouds of heaven, we can’t use physical computers, so I am glad that I can use the more virtual Wolfram Open Cloud to do my calculations and mathematical experimentation. I have played for many hours with the interactive units-constants explorer below, and agree fully with the choices made by the International Bureau of Weights and Measures (BIPM), meaning the speed of light, the Planck constant, the elementary charge, the Avogadro constant, and the Boltzmann constant. I showed a preliminary version of this blog to Edgar, and he was very pleased to see this table based on his old paper:

Tables based on Edgar's paper

I want to mention that the most popular physical constant, the fine-structure constant, is not really useful for building units. Just by its special status as a unitless physical quantity, it can’t be directly connected to a unit. But it is, of course, one of the most important physical constants in our universe (and is probably only surpassed by the simple integer constant describing how many spatial dimensions our universe has). Often various dimensionless combinations can be found from a given set of physical constants because of relations between the constants, such as c² = 1/(ε₀ μ₀).
Here are some examples:

Various dimensionless combinations found from a given set of physical constants

But there is probably no other constant that Paul Adrien Maurice Dirac and I have discussed more over the last 32 years than the fine-structure constant α = e2/(4 𝜋 ε0 ħ c). Although up here we meet with the Lord regularly in a friendly and productive atmosphere, he still refuses to tell us a closed form of α. And he will not even tell us if he selected the same value for all times and all places. On the related topic of why these particular values of the constants were chosen, he also refuses to discuss fine-tuning and alternative values. He says that he chose a beautiful expression, and one day we will find out. He gave some bounds, but they were not much sharper than the ones we know from the Earth's existence. So, like living mortals, for now we must just guess mathematical formulas:

Conjectured exact forms of the fine-structure constant

Or guess combinations of constants:

Guessing combinations of constants

And here is one of my favorite coincidences:

Favorite coincidence

And a few more:

A few more coincidences

The rise in importance and usage of the physical constants is nicely reflected in the scientific literature. Here is a plot of how often (in publications per year) the most common constants appear in scientific publications from the publishing company Springer. The logarithmic vertical axis shows the exponential increase in how often physical constants are mentioned:

How often the most common constants appear in scientific publications from the publishing company Springer

While the fundamental constants are everywhere in physics and chemistry, one does not see them as much in newspapers, movies, or advertisements as they deserve. I was very pleased to see the recent introduction of the Measures for Measure column in Nature.

Fundamental constants in Measures for Measure column

To give the physical constants the presence they deserve, I hope that before (or at least not long after) the redefinition we will see some interesting video games released that allow players to change the values of at least c, G, and h to see how the world around us would change if the constants had different values. This could make the constants known to children at a young age. It makes me want to play such a video game right now. With large values of h, not only could one build a world with macroscopic Schrödinger cats, but interpersonal correlations would also become much stronger. Such a video game would be a kind of twenty-first-century Mr. Tompkins adventure:

Mr. Tompkins

It will be interesting to see how quickly and efficiently the human brain adapts to a possible life in a different universe. Initial research seems to be pretty encouraging. But maybe our world and our heaven are really especially fine-tuned.

The current SI and the issue with the kilogram

The modern system of units, the current SI, has other units in addition to the second, the meter, and the kilogram. The ampere is defined through the force between two infinitely long wires, the kelvin through the triple point of water, the mole through the kilogram and carbon-12, and the candela through blackbody radiation. If you have never read the SI brochure, I strongly encourage you to do so. Two infinitely long wires are surely macroscopic and do not fulfill Maxwell's demand (though at least an idealized system is in question), and de facto this definition fixes the magnetic constant. And the triple point of water needs a macroscopic amount of water.
This is not perfect, but it's OK. Carbon-12 atoms are already microscopic objects. Blackbody radiation is again an ensemble of microscopic objects, but a very reproducible one. So some of the current SI fulfills Maxwell's goals in some sense.

But most of my insomnia over the last 50 years has been caused by the kilogram. It has given me real headaches, and sometimes even nightmares, that we could not put it on the same level as the second and the meter. In the year of my physical death (1799), the first prototype of a kilogram, a little platinum cylinder, was made. About 39.7 mm in height and 39.4 mm in diameter, this was for 75 years "the" kilogram. It was made from the forged platinum sponge produced by Janety. Miller gives a lot of the details of this kilogram. It is today in the Archives nationales. In 1879, Johnson Matthey (in Britain—the country I fought with my ships!), using new melting techniques, made the material for three new kilogram prototypes. Because of a slightly higher density, these kilograms were slightly smaller in size, at 39.14 mm in height. The cylinder called KIII became the current international prototype kilogram K. Here is the last sentence from the preface of the mass determination of the international prototype kilogram from 1885, introducing K:

A few kilograms were selected and carefully compared to our original kilogram; for the detailed measurements, see this book. All three kilograms had a mass less than 1 mg different from the original kilogram. But one stood out: it had a mass difference of less than 0.01 mg compared to the original kilogram. For a detailed history of the making of K, see Quinn.

And so, still today, per definition, a kilogram is the mass of a small metal cylinder sitting in a safe at the International Bureau of Weights and Measures near Paris. (Technically, it is actually not on French soil, but this is another issue.) In the safe, which needs three keys to be opened, under three glass domes, is a small platinum-iridium cylinder that defines what a kilogram is. For the reader's geographical orientation, here is a map of Paris with the current kilogram prototype (in the southwest), our original one (in the northeast), both with a yellow border, and some other Paris visitor essentials:

Map of Paris with current kilogram prototype (in the southwest) and our original one (in the northeast)

In addition to being an artifact, the kilogram is very difficult to get access to (which has always made me unhappy). Once a year, a small group of people checks if it is still there, and every few years its weight (mass) is measured. Of course, the result is, per definition and the agreement made at the first General Conference on Weights and Measures in 1889, exactly one kilogram. Over the years the original kilogram prototype gained dozens of siblings in the form of other countries' national prototypes, all of the same size, material, and weight (up to a few micrograms, which are carefully recorded). (I wish the internet had been invented earlier, so that I had had a communication path to tell what happened with the stolen Argentine prototype 45; since then, it has been melted down.) At least, when they were made, they had the same weight. Same material, same size, similarly stored—one would expect that all these cylinders would keep their weight. But this is not what history showed. Rather than all staying at the same weight, repeated measurements showed that virtually all other prototypes got heavier and heavier over the years.
Or, more probably, the international prototype has gotten lighter. From my place here in heaven I have watched many of these comparisons with both great interest and concern. Comparing their weights (a.k.a. masses) is a big ordeal. First you must get the national prototypes to Paris. I have silently listened in on the long discussions that ensue when a metrologist comes to the TSA (or another country's equivalent) with a kilogram of platinum, worth north of $50k in materials (add another $20k for the making), in its cute, golden, shiny, special travel container that should only be opened in a clean room with gloves and mouth guard, and never ever touched by a human hand, and has to explain all of this. An official letter is of great help here. The instances that I have watched from up here were even funnier than the scene in the movie 1001 Grams.

Then comes a complicated cleaning procedure with hot water, alcohol, and UV light. The kilograms all lose weight in this process. And then they are all carefully compared with each other. And the result is that, with very high probability, "the" kilogram, our beloved international prototype kilogram (IPK), loses weight. This fact steals my sleep. Here are the results from the third periodic verification (1988 to 1992). The graphic shows the weight difference compared to the international prototype:

Weight difference between countries' national kilograms versus the international prototype

For some newer measurements from the last two years, see this paper.

What I mean by "the" kilogram losing weight is the following. Per definition (independent of its "real objective" mass), the international prototype has a mass of exactly 1 kg. Compared with this mass, most other kilogram prototypes of the world seem to gain weight. As the other prototypes were made using different techniques over more than 100 years, the real issue is very likely that the international prototype is losing weight. (And no, it is not because of Ceaușescu's greed and theft of platinum that Romania's prototype is so much lighter; in 1889 the Romanian prototype was already 953 μg lighter than the international prototype kilogram.)

Josiah Willard Gibbs, who has been my friend up here for more than 110 years, always mentions that his home country is still using the pound rather than the kilogram. His vote in this year's election would clearly go to Bernie. But at least the pound is an exact fraction of the kilogram, so anything that happens to the kilogram will affect the pound the same way:

The pound is an exact fraction of the kilogram

The new SI

But soon all my dreams and centuries-long hopes will come true, and I can find sleep again. In 2018, two years from now, the greatest change in the history of units and measures since my work with my friend Laplace and the others will happen. All units will be based on things that are accessible to everybody everywhere (assuming access to some modern physical instruments and devices). The so-called new SI will reduce all of the seven base units to seven fundamental constants of physics or basic properties of microscopic objects. Down on Earth, they have started calling them "reference constants." Some people also call the new SI the quantum SI because of its dependence on the Planck constant h and the elementary charge e.
In addition to the importance of the Planck constant h in quantum mechanics, the following two quantum effects connect h and e: the Josephson effect, with its associated Josephson constant KJ = 2 e / h, and the quantum Hall effect, with the von Klitzing constant RK = h / e2. The quantum metrological triangle (connecting frequency and electric current through a single-electron tunneling device, frequency and voltage through the Josephson effect, and voltage and electric current through the quantum Hall effect) will be a beautiful realization of electric quantities. (One day in the future, as Penin has pointed out, we will have to worry about second-order QED effects, but this will be many years from now.) The BIPM already has a new logo for the future International System of Units:

New logo for the future International System of Units

Concretely, the proposal is:

1. The second will continue to be defined through cesium atom microwave radiation.
2. The meter will continue to be defined through an exactly defined speed of light.
3. The kilogram will be defined through an exactly defined value of the Planck constant.
4. The ampere will be defined through an exactly defined value of the elementary charge.
5. The kelvin will be defined through an exactly defined value of the Boltzmann constant.
6. The mole will be defined through an exact (counting) value.
7. The candela will be defined through an exact value of the candela steradian-to-watt ratio at a fixed frequency (already now the case).

I highly recommend a reading of the draft of the new SI brochure. Laplace and I have discussed it a lot here in heaven, and (modulo some small issues) we love it. Here is a quick word cloud summary of the new SI brochure:

Word cloud summary of new SI brochure

Before I forget, and before continuing the kilogram discussion, some comments on the other units.

The second

I still remember when we discussed introducing metric time in the 1790s: a 10-hour day, with 100 minutes per hour and 100 seconds per minute, and we were so excited by this prospect. In hindsight, this wasn't such a good idea. The habits of people are sometimes too hard to change. And I am so glad I could get Albert Einstein interested in metrology over the past 50 years. We have had so many discussions about the meaning of time, about the fact that the second measures local time, and about the difference between measurable local time and coordinate time. But this is a discussion for another day. The uncertainty of a second is today less than 10–16. Maybe one day in the future, cesium will be replaced by aluminum or other elements to achieve 100 to 1,000 times smaller uncertainties. But this does not alter the spirit of the new SI; it's just a small technical change. (For a detailed history of the second, see this article.) Clearly, today's definition of the second is much better than one that depends on the Earth. At a time when stock market prices are compared at the microsecond level, the change in the length of a day due to earthquakes, polar melting, continental drift, and other phenomena over a century is quite large:

Change in the length of a day over time

The mole

I have heard some chemists complain that their beloved unit, the mole, introduced into the SI only in 1971, will become trivialized. In the currently used SI, the mole relates to an actual chemical, carbon-12. In the new SI, it will be just a count of objects. A true chemical equivalent to a baker's dozen: the chemist's dozen.
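(In code, the chemist's dozen is a one-liner. A sketch, using the 2014 CODATA value of the Avogadro constant; the final fixed value may differ in the last digits.)

    N_A = 6.022140857e23   # Avogadro constant, 1/mol (2014 CODATA)

    def amount_in_mol(number_of_entities):
        # the new mole: amount of substance as a pure count
        return number_of_entities / N_A

    print(amount_in_mol(N_A))   # 1.0 mol, by construction
    print(amount_in_mol(12))    # a (chemist's) dozen molecules, in mol: ~2e-23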
Based on the Avogadro constant, the mole is crucial in connecting the micro world with the macro world. A more down-to-Earth definition of the mole matters for quantitative values such as pH values. The second is the SI base unit of time; the mole is the SI base unit of the physical quantity amount of substance:

Mole is the SI base unit of the physical quantity

But not everybody likes the term "amount of substance." Even this year (2016), alternative names are being proposed, e.g. stoichiometric amount. Over the last decades, a variety of names have been proposed to replace "amount of substance." Here are some examples:

Alternative names for "amount of substance"

But the SI system only defines the unit "mole." The naming of the physical quantity that is measured in moles is up to the International Union of Pure and Applied Chemistry. For recent discussions from this year, see the article by Leonard, "Why Is 'Amount of Substance' So Poorly Understood? The Mysterious Avogadro Constant Is the Culprit!", and the article by Giunta, "What's in a Name? Amount of Substance, Chemical Amount, and Stoichiometric Amount."

Wouldn't it be nice if we could have made a "perfect cube" (number) that would represent the Avogadro number? Such a representation would be easy to conceptualize. This was suggested a few years back; at the time it was compatible with the value of the Avogadro constant, and it would have been a cube of edge length 84,446,888 items. I asked Srinivasa Ramanujan, while playing a heavenly round of cricket with him and Godfrey Harold Hardy, his longtime friend, what's special about 84,446,888, but he hasn't come up with anything deep yet. He said that 84,446,888 = 2^3 * 17 * 620,933, and that 620,933 appears starting at position 1,031,622 in the decimal digits of 𝜋, but I can't see any metrological relevance in this. With the latest value of the Avogadro constant, no third power of an integer falls into the range of possible values, so no wonder there is nothing special. Here is the latest CODATA (Committee on Data for Science and Technology) value from the NIST Reference on Constants, Units, and Uncertainty:

Latest CODATA value from NIST Reference on Constants, Units, and Uncertainty

The candidate number 84,446,885 cubed is too small, and adding one gives too large a number:

Candidate number 84,446,885

Interestingly, if we settled for a body-centered lattice, with one additional atom per unit cell, then we could still maintain a cube interpretation:

Maintaining a cube interpretation with a body-centered lattice

A face-centered lattice would not work, either:

Using a face-centered lattice

But a diamond (silicon) lattice would work:

Diamond (silicon) lattice

To summarize:

Lattice summary

Here is a little trivia: sometime amid the heights of the Cold War, the accepted value of the Avogadro constant suddenly changed in the third digit! This was quite a change, considering that there is currently a lingering controversy regarding a discrepancy in the sixth digit. Can you explain the sudden decrease in the Avogadro constant during the Cold War? If you do not know the answer, see here or here.

But I am digressing from my main thread of thought. As I am more interested in the mechanical units anyway, I will let my old friend Antoine Lavoisier judge the new mole definition, as he was the chemist on our team.
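(Before we leave the mole: the cube hunt is easy to replay. A sketch with the 2014 CODATA central value and its one-sigma window; as stated above, no integer cube lands inside.)

    NA_central = 6.022140857e23   # 2014 CODATA value, 1/mol
    NA_sigma   = 0.000000074e23   # its standard uncertainty

    lo, hi = NA_central - NA_sigma, NA_central + NA_sigma

    n = round(NA_central ** (1 / 3))   # candidate edge length, near 84,446,885
    for edge in (n - 1, n, n + 1):
        cube = edge ** 3
        print(edge, cube, lo <= cube <= hi)   # prints False for all three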
The kelvin

Josiah Willard Gibbs even convinced me that temperature should be defined mechanically. I am still trying to understand John von Neumann's opinion on this subject, but because I never fully understand his evening lectures on type II and type III factors, I don't have a firm opinion on the kelvin. Different temperatures correspond to inequivalent representations of the algebras. As I am currently still working my way through Ruetsche's book, I haven't made up my mind on how best to define the kelvin from an algebraic quantum field theory point of view. I had asked John for his opinion on a first-principles evaluation of h / k based on KMS states and Tomita–Takesaki theory, and even he wasn't sure about it. He told me some things about thermal time and diamond temperature that I didn't fully understand. And then there is the possibility of deriving the value of the Boltzmann constant. Even 40 years after the Koppe–Huber paper, it is not clear whether this is possible. It is a subject I am still pondering, and I am taking various options into account. As mentioned earlier, the meaning of temperature and how to define its unit are not fully clear to me. There is no question that the new definition of the kelvin will be a big step forward, but I don't know if it will be the end of the story.

The ampere

This is one of the most direct, intuitive, and beautiful definitions in the new SI: the current is just the number of electrons that flow per second. Defining the value of the ampere through the number of elementary charges moved around is just a stroke of genius. When it was first suggested, Robert Andrews Millikan up here was so happy he invited many of us to an afternoon gathering in his yard. In practice (and in theoretical calculations), we have to exercise a bit more care, as we mainly measure the electric current of electrons in crystalline objects, where electrons are no longer "bare" electrons but quasiparticles. But we have known since 1959, thanks to Walter Kohn, that we shouldn't worry too much about this, and can expect the charge of the electron in a crystal to be the same as the charge of a bare electron. As the elementary charge is a pretty small charge, the issue of measuring fractional charges as currents is not a practical one for now. I personally feel that Robert's contributions to determining the values of the physical constants at the beginning of the twentieth century are not acknowledged enough (Robert Andrews really knew what he was doing).

The candela

No, you will not get me started on my opinion of the candela. Does it deserve to be a base unit? The whole story of human-centered physiological units is a complicated one. Obviously they are enormously useful. We all see and hear every day, even every second. But what if the human race continues to develop (in Darwin's sense)? How will that fit together with our "for all time" mantra? I have my thoughts on this, but laying them out here and now would sidetrack me from my main discussion topic for today.
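(For the reader playing along in code: the two most "countable" of these definitions, the ampere and the candela, fit into a few lines. A sketch; e is the 2014 CODATA value, still to be fixed exactly, while the 683 lm/W at 540 THz is already the exact conventional number.)

    e    = 1.6021766208e-19   # elementary charge, C (2014 CODATA)
    K_cd = 683.0              # luminous efficacy of 540 THz radiation, lm/W (exact)

    def elementary_charges_per_second(current_in_ampere):
        # the new ampere: electric current as a count of charges per second
        return current_in_ampere / e

    def luminous_flux_lm(radiant_power_in_watt):
        # luminous flux of monochromatic 540 THz (green) radiation
        return K_cd * radiant_power_in_watt

    print(elementary_charges_per_second(1.0))   # ~6.24e18 per second for 1 A
    print(luminous_flux_lm(0.001))              # a 1 mW green source: ~0.68 lm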
Why seven base units?

I also want to mention that originally I was very concerned about the introduction of some of the additional units that are in use today. In endless discussions with my chess partner Carl Friedrich Gauss here in heaven, he originally convinced me that we could reduce all measurements of electric quantities to measurements of mechanical properties, and I became pretty fluent in his CGS system, even though at first I did not like it at all. But a human-created unit system should be as useful as possible, and if seven units do the job best, it should be seven.

In principle one could even eliminate a mass unit and express a mass through time and length. In addition to just being impractical, I strongly believe this is conceptually not the right approach. I recently discussed this with Carl Friedrich. He said he had the idea of just using time and length in the late 1820s, but abandoned such an approach. While alive, Carl Friedrich never had the opportunity to discuss the notion of mass as a synthetic a priori with Immanuel, but over the last century the two (Carl Friedrich and Immanuel) have agreed on mass as an a priori (at least in this universe).

Our motto for the original metric system was, "For all time, for all people." The current SI already realizes "for all people," and by grounding the new SI in the fundamental constants of physics, the first promise, "for all time," will finally become true. You cannot imagine what this means to me. If they change at all, the fundamental constants seem to change maximally at rates on the order of 10–18 per year. This is many orders of magnitude away from the currently realized precisions for most units.

Granted, some things will get a bit more cumbersome numerically in the new SI. If we take the current CODATA values as exact values, then, for instance, the von Klitzing constant h/e2 will be a big fraction:

von Klitzing constant with current CODATA values and exact values as a big fraction

The integer part of the last result is, of course, 25,812 Ω. Now, is this a periodic decimal fraction or a terminating one? The prime factorization of the denominator tells us that it is periodic:

Prime factorization of the denominator tells us that it is periodic

Progress is good, but as happens so often, it comes at a price. While the new constant-based definitions of the SI units are beautiful, they are a bit harder to understand, and physics and chemistry teachers will have to come up with some innovative ways to explain the new definitions to pupils. (For recent first attempts, see this paper and this paper.) And in how many textbooks have I seen that the value of the magnetic constant (permeability of the vacuum) μ0 is 4 𝜋 × 10–7 N / A2? The magnetic and the electric constants will in the new SI become measured quantities with an error term. Concretely, starting from the current exact value:

Current exact value

with the Planck constant h exact and the elementary charge e exact, the value of μ0 will incur the uncertainty of the fine-structure constant α. Fortunately, the dimensionless fine-structure constant α is one of the best-known constants:

Dimensionless fine-structure constant alpha

But so what? Textbook publishers will not mind having a reason to print new editions of all their books. They will like it: a reason to sell more new books. With μ0 a measured quantity in the future, I predict we will see many more uses of the current underdog among the fundamental constants, the impedance of the vacuum Z:

Impedance of the vacuum Z

I applaud all the physicists and metrologists for the hard work they have carried out in continuation of my committee's work over the last 225 years, which culminated in the new, physical-constant-based definitions of the units. So do my fellow original committee members. These definitions are beautiful and truly forever. (I know it is a bit indiscreet to reveal this, but Joseph Louis Lagrange told me privately that he regrets a bit that we did not introduce base and derived units as such in the 1790s.
Now with the Planck constant being so important for the new SI, he thought we should have had a named base unit for the action (the time integral over his Lagrangian), and should then have made mass a derived quantity. While this would be the high road of classical mechanics, he does understand that a base unit for the action would not have become popular with farmers and peasants, who need a daily unit for masses.)

I don't have the time today to go into any detailed discussion of the quarterly garden fests that Percy Williams Bridgman holds. As my schedule allows, I try to participate in every single one of them. It is always so intellectually stimulating to listen to the general discussions about the pros and cons of alternative unit systems. As you can imagine, Julius Wallot, Jan de Boer, Edward Guggenheim, William Stroud, Giovanni Giorgi, Otto Hölder, Rudolf Fleischmann, Ulrich Stille, Hassler Whitney, and Chester Page are the most outspoken at these parties. The discussions about the coherence and completeness of unit systems, and about what a physical quantity is, go on and on. At the last event, the discussion of whether probability is or is not a physical quantity went on for six hours, with no decision at the end. I suggested inviting Richard von Mises and Hans Reichenbach next time; they might have something to contribute. At the parties, Otto always complains that mathematicians no longer care about units and unit systems as much as they did in the past, and he is so happy to see at least theoretical physicists pick up the topic from time to time, as in the recent vector-based differentiation of physical quantities or the recent paper on the general structure of unit systems. And when he saw in an article from last year's Dagstuhl proceedings that modern type theory had met units and physical dimensions, he was the most excited he had been in decades.

Interestingly, basically the same discussions have come up over the last three years in the monthly mountain walks that Claude Shannon organizes. Leo Szilard argues that the "bit" has to become a base unit of the SI in the future. In his opinion, information as a physical quantity has been grossly underrated.

Once again: the new SI will be just great! There are a few more details that I would like to see changed. One is the current status of the radian and the steradian, which SP 811 now defines as derived units, saying, "The radian and steradian are special names for the number one that may be used to convey information about the quantity concerned." But I see with satisfaction that the experts have recently been discussing this topic in quite some detail.

To celebrate the upcoming new SI here in heaven, we held a crowd-based fundraiser. We raised enough funds to actually hire the master himself, Michelangelo. He will be making a sculpture. Some early sketches shown to the committee (I am fortunate to have the honorary chairmanship) are intriguing. I am sure it will be an eternal piece rivaling the David. One day every human will have the chance to see it (may it be a long time until then, depending on your current age and your smoking habits). In addition to the constants and the units on their own, he plans to also work Planck himself, Boltzmann, and Avogadro into the sculpture, as theirs are the only three constants named after a person. Max was immediately available to model, but we are still having issues getting permission for Boltzmann to leave hell for a while to be a model.
(Millikan and Fletcher were, understandably, a bit disappointed.) Ironically, it was Paul Adrien Maurice Dirac who came up with a great idea for how to convince Lucifer to grant Boltzmann a Sabbath-ical. Ironically, because Paul himself is not so keen on the new SI, on account of the possible time dependence of the constants themselves over billions of years. But anyway, Paul's clever idea was to point out that three fundamental constants, the Planck constant (6.62… × 10–34 J · s), the Avogadro constant (6.02… × 1023 / mol), and the gravitational constant (6.6… × 10–11 m3 / (kg · s2)), all start with the digit 6. And forming the number of the beast, 666, through three fundamental constants really made an impression on Lucifer, and I expect him to approve Ludwig's temporary leave. As an ex-mariner with an affinity for the oceans, I also pointed out to Lucifer that the mean ocean depth is exactly 66% of his height (2,443 m, according to a detailed re-analysis of Dante's Divine Comedy). He liked this cute fact so much that he owes me a favor.

Mean depth of the oceans

So far, Lucifer insists on having the combination G(me / (h k))1/2 on the sculpture. For obvious reasons:

Lucifer's favorite combination

We will see how this discussion turns out. As there is really nothing wrong with this combination, even if it is not physically meaningful, we might agree to his demands. All of the new SI 2018 committee up here has also already agreed on the music: we will play Wojciech Kilar's Sinfonia de motu, which uniquely represents the physical constants as a musical composition using only the notes c, g, e, h (b-flat in the English-speaking world), and a (where a represents the cesium atom). And we could convince Rainer Maria Rilke to write a poem for the event. Needless to say, Wojciech, who has now been with us for more than two years, agreed, and even offered to compose an exact version.

Down on Earth, the arrival of the constants-based units will surely also be celebrated in many ways and many places. I am looking forward especially to the documentary The State of the Unit, which will be about the history of the kilogram and its redefinition through the Planck constant.

The path to the redefinition of the kilogram

As I already touched on, the most central point of the new SI will be the new definition of the kilogram. After all, the kilogram is the one artifact still present in the current SI that should be eliminated. In addition to the kilogram itself, many more derived units depend on it, say, the volt: 1 volt = 1 kilogram meter2/(ampere second3). Redefining the kilogram will make many (at least the theoretically inclined) electricians happy. Electricians have been using their own exact conventional values for 25 years:

Exact conventional values

The value resulting from the conventional values for the von Klitzing constant and the Josephson constant is very near to the latest CODATA value of the Planck constant:

Value resulting from the conventional values for the von Klitzing constant and the Josephson constant

A side note on the physical quantity that the kilogram represents: the kilogram is the SI base unit for the physical quantity mass. Mass is most relevant for mechanics. Through Newton's second law, mass is intimately related to force:

Newton's second law

Assume we have understood length and time (and so also acceleration). What is next in line, force or mass?
William Francis Magie wrote in 1912: It would be very improper to dogmatize, and I shall accordingly have to crave your pardon for a frequent expression of my own opinion, believing it less objectionable to be egotistic than to be dogmatic…. The first question which I shall consider is that raised by the advocates of the dynamical definition of force, as to the order in which the concepts of force and mass come in thought when one is constructing the science of mechanics, or in other words, whether force or mass is the primary concept…. He [Newton] further supplies the measurement of mass as a fundamental quantity which is needed to establish the dynamical measure of force…. I cannot find that Lagrange gives any definition of mass…. To get the measure of mass we must start with the intuitional knowledge of force, and use it in the experiments by which we first define and then measure mass…. Now owing to the permanency of masses of matter it is convenient to construct our system of units with a mass as one of the fundamental units. And Henri Poincaré in his Science and Method says, “Knowing force, it is easy to define mass; this time the definition should be borrowed from dynamics; there is no way of doing otherwise, since the end to be attained is to give understanding of the distinction between mass and weight. Here again, the definition should be led up to by experiments.” While I always had an intuitive feeling for the meaning of mass in mechanics, up until the middle of the twentieth century, I never was able to put it into a crystal-clear statement. Only over the last decades, with the help of Valentine Bargmann and Jean-Marie Souriau did I fully understand the role of mass in mechanics: mass is an element in the second cohomology group of the Lie algebra of the Galilei group. Mass as a physical quantity manifests itself in different domains of physics. In classical mechanics it is related to dynamics, in general relativity to the curvature of space, and in quantum field theory mass occurs as one of the Casimir operators of the Poincaré group. In our weekly “Philosophy of Physics” seminar, this year led by Immanuel himself, Hans Reichenbach, and Carl Friedrich von Weizsäcker (Pascual Jordan suggested this Dreimännerführung of the seminars), we discuss the nature of mass in five seminars. The topics for this year’s series are mass superselection rules in nonrelativistic and relativistic theories, the concept and uses of negative mass, mass-time uncertainty relations, non-Higgs mechanisms for mass generation, and mass scaling in biology and sports. I need at least three days of preparation for each seminar, as the recommended reading list is more than nine pages—and this year they emphasize the condensed matter appearance of these phenomena a lot! I am really looking forward to this year’s mass seminars; I am sure that I will learn a lot about the nature of mass. I hope Ehrenfest, Pauli, and Landau don’t constantly interrupt the speakers, as they did last year (the talk on mass in general relativity was particularly bad). In the last seminar of the series, I have to give my talk. In addition to metabolic scaling laws, my favorite example is the following: Shaking frequency of wet animal I also intend to speak about the recently found predator-prey power laws. For sports, I already have a good example inspired by Texier et al.: the relation between the mass of a sports ball and its maximal speed. The following diagram lets me conjecture speedmax~ln(mass). 
In the downloadable notebook, mouse over the points to see the sport, the mass of the ball, and the top speed:

Mass of sports ball and its maximal speed

For the negative mass seminar, we had some interesting homework: visualize the trajectories of a classical point particle with complex mass in a double-well potential. As I had seen some of Bender's papers on complex energy trajectories, the trajectories I got for complex masses did not surprise me:

Trajectories for complex masses

End side note.

The complete new definition reads thus: The kilogram, kg, is the unit of mass; its magnitude is set by fixing the numerical value of the Planck constant to be equal to exactly 6.62606X*10–34 when it is expressed in the unit s–1 · m2 · kg, which is equal to J · s. Here X stands for some digits, soon to be explicitly stated, that will represent the latest experimental values.

And the kilogram cylinder can finally retire as the world's most precious artifact. I expect that soon after this event the international kilogram prototype will finally be displayed in the Louvre. As the Louvre had been declared "a place for bringing together monuments of all the sciences and arts" in May 1791 and opened in 1793, all of us on the committee agreed that one day, when the original kilogram was to be replaced with something else, it would end up in the Louvre. Having ruled the kingdom of mass for more than a century, the IPK deserves its eternal place as a true monument of the sciences. I will make a bet: in a few years the retired kilogram, under its three glass domes, will be one of the Louvre's most popular objects, and the queue of physicists, chemists, mathematicians, engineers, and metrologists forming to see it will be longer than the queue for the Mona Lisa. I would also bet that beautiful miniature kilogram replicas will within a few years become the best-selling item in the Louvre's museum store:

Miniature kilogram replicas

At the same time, speaking as a metrologist, maybe the international kilogram prototype should stay where it is for another 50 years, so that it can be measured against a post-2018 kilogram made from an exact value of the Planck constant. Then we would finally know for sure whether the international kilogram prototype is/was really losing weight.

Let me quickly recapitulate the steps toward the new "electronic" kilogram. Intuitively, one might think of defining the kilogram through the Avogadro constant as a certain number of atoms of, say, 12C. But because of binding energies and surface effects, in order to realize the mass of one kilogram with a pile of carbon (e.g. diamond, graphene) made up of n = round(1 kg / m(12C)) atoms, all n carbon-12 atoms would have to be well separated. Otherwise we would have a mass defect (remember Albert's famous E = m c2 formula), and the mass difference between one kilogram of compact carbon and the same number of individual, well-separated atoms is on the order of 10–10. Using the carbon-carbon bond energy, here is an estimation of the mass difference:

Estimation of the mass difference using the carbon-carbon bond energy

A mass difference of this size can, for a 1 kg weight, be detected without problems with a modern mass comparator. To give a sense of scale, this would be equivalent to the (Einsteinian) relativistic mass equivalent of the energy expenditure of fencing for most of a day:

Energy expenditure of fencing for most of a day
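(Here is that order-of-magnitude estimate redone in a few lines of Python. It is only a sketch: the cohesive energy per carbon atom is an assumed round value of a few electron volts, which is good enough for the power of ten.)

    m_C12 = 12 * 1.66053904e-27   # mass of a 12C atom, kg (2014 CODATA value of u)
    c     = 2.99792458e8          # speed of light, m/s
    eV    = 1.6021766208e-19      # joules per electron volt

    n_atoms   = 1.0 / m_C12          # atoms in 1 kg of carbon-12, ~5e25
    E_binding = n_atoms * 7.4 * eV   # assumed ~7.4 eV cohesive energy per atom
    delta_m   = E_binding / c**2     # mass equivalent of the total binding energy

    print(delta_m)   # ~7e-10 kg, i.e. a relative effect of order 10-10 on 1 kg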
This does not mean one could not define a kilogram through the mass of an atom or a fraction of it. Given the mass of a carbon atom m(12C), the atomic mass constant u = m(12C) / 12 follows, and using u we can easily connect to the Planck constant:

Connecting to the Planck constant

I read with great interest the recent comparison of using different sets of constants for the kilogram definition. Of course, if the mass of a 12C atom were the defined value, then the Planck constant would become a measured, meaning nonexact, value. For me, having an exact value for the Planck constant is aesthetically preferable.

I have been so excited over the last decade following the steps toward the redefinition of the kilogram. For more than 20 years now, there has been a light visible at the end of the tunnel that would topple the kilogram artifact from its throne. And when I read, 11 years ago, the article by Ian Mills, Peter Mohr, Terry Quinn, Barry Taylor, and Edwin Williams entitled "Redefinition of the Kilogram: A Decision Whose Time Has Come" in Metrologia (my second-favorite, late-morning Tuesday monthly read, after the daily New Arrivals, a joint publication of Hell's Press, the Heaven Publishing Group, Jannah Media, and Deva University Press), I knew that soon my dreams would come true. The moment I read Appendix A.1, Definitions that fix the value of the Planck constant h, I knew that this was the way to go. While the idea had been floating around for much longer, it now became a real program to be implemented within a decade (give or take a few years).

James Clerk Maxwell wrote in his 1873 A Treatise on Electricity and Magnetism: In framing a universal system of units we may either deduce the unit of mass in this way from those of length and time already defined, and this we can do to a rough approximation in the present state of science; or, if we expect soon to be able to determine the mass of a single molecule of a standard substance, we may wait for this determination before fixing a universal standard of mass.

Until around 2005, James Clerk thought that mass should be defined through the mass of an atom, but he came around over the last decade and now favors the definition through Planck's constant. In a discussion with Albert Einstein and Max Planck (I believe this was in the early seventies) in a Vienna-style coffee house (Max loves the Sachertorte and was so happy when Franz and Eduard Sacher opened their now-famous HHS ("Heavenly Hotel Sacher")), Albert suggested using his two famous equations, E = m c2 and E = h f, to solve for m and get m = h f / c2. So, if we define h, as was done with c, then we know m, because we can measure frequencies pretty well. (Compton argued that this is just his equation rewritten, and Niels Bohr remarked that we cannot really trust E = m c2 because of its relatively weak experimental verification, but I think he was just mocking Einstein, retaliating for some of the Solvay Conference Gedankenexperiment discussions. And of course, Bohr could not resist bringing up Δm Δt ~ h / c2 as a reason why we cannot define the second and the kilogram independently, as one implies an error in the other for any finite mass-measurement time. But Léon Rosenfeld convinced Bohr that this effect is really quite remote, as for a measurement time of a day it limits the mass measurement precision to about 10–52 kg for a kilogram mass.)

An explicit frequency equivalent f = m c2 / h is not practical for a mass of a kilogram, as it would mean f ~ 1.35 × 1050 Hz, which is far, far too large for any experiment, dwarfing even the Planck frequency by about seven orders of magnitude.
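(A back-of-the-heavenly-envelope check of these two numbers, in Python with 2014 CODATA values:)

    import math

    c    = 2.99792458e8     # m/s
    h    = 6.62607004e-34   # J s (2014 CODATA)
    hbar = h / (2 * math.pi)
    G    = 6.67408e-11      # m^3/(kg s^2)

    f_one_kg = 1.0 * c**2 / h                   # frequency equivalent of 1 kg
    f_planck = 1 / math.sqrt(hbar * G / c**5)   # inverse Planck time, ~1.9e43 Hz

    print(f_one_kg)              # ~1.36e50 Hz
    print(f_one_kg / f_planck)   # ~7e6: "about seven orders of magnitude"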
But some experiments at Berkeley from the last few years may allow the use of such techniques at the microscopic scale. For more than 25 years now, in every meeting of the HPS (Heavenly Physical Society), Louis de Broglie has insisted that these frequencies are real physical processes, not just convenient mathematical tools.

So we need to know the value of the Planck constant h. Still today, the kilogram is defined as the mass of the IPK. As a result, we can measure the value of h using the current definition of the kilogram. Once we know the value of h to a few times 10–8 (this is basically where we are right now), we will then define a concrete value of h (very near or at the measured value). From then on, the kilogram will be implicitly defined through the value of the Planck constant. At the transition, the two definitions overlap within their uncertainties, and no discontinuities arise for any derived quantities. The international prototype has lost on the order of 50 μg of weight over the last 100 years, which is a relative change of 5 × 10–8, so a value for the Planck constant with an error of less than 2 × 10–8 does guarantee that the masses of objects will not change in a noticeable manner.

Looking back over the last 116 years, the value of the Planck constant has gained about seven digits in precision. A real success story! In his paper "Ueber das Gesetz der Energieverteilung im Normalspectrum," Max Planck used the symbol h for the first time, and gave for the first time a numerical value for the Planck constant (in a paper published a few months earlier, Max had used the symbol b instead of h):

Excerpts from "Ueber das Gesetz der Energieverteilung im Normalspectrum"

(I had asked Max why he chose the symbol h, and he said he can't remember anymore. Anyway, he said, it was a natural choice in conjunction with the symbol k for the Boltzmann constant. Sometimes one reads today that h was used to express the German word Hilfsgrösse (auxiliary quantity); Max said that this is possible, and that he really doesn't remember.)

In 1919, Raymond Thayer Birge published the first detailed comparison of various measurements of the Planck constant:

Various measurements of the Planck constant

From Planck's value 6.55 × 10–34 J · s to the 2016 value 6.626070073(94) × 10–34 J · s, amazing measurement progress has been made. The next interactive Demonstration allows you to zoom in and see the progress in measuring h over the last century. Mouse over the bell curves (indicating the uncertainties of the values) in the notebook to see the experiments (for detailed discussions of many of the experiments for determining h, see this paper):

History of measurement of the Planck constant h

There have been two major experiments carried out over the last few years that my original group eagerly followed from the heavens: the watt balance experiment (actually, there is more than one of them—one at NIST, two in Paris, one in Bern…) and the Avogadro project. As a person who built mechanical measurement devices when I was alive, I personally love the watt balance experiment. Building a mechanical device that, through a clever trick by Bryan Kibble, eliminates an unknown geometric quantity gets my applause. The recent do-it-yourself LEGO home version is especially fun. With an investment of a few hundred dollars, everybody can measure the Planck constant at home! The world has come a long way since my lifetime.
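(The principle behind Kibble's trick fits into a few lines. A sketch with made-up measurement values; the point is that the unknown geometry factor B·L drops out when the weighing mode and the velocity mode are combined.)

    # weighing mode:  m * g = (B*L) * I   (a current balances the weight)
    # velocity mode:  U = (B*L) * v       (moving the coil induces a voltage)
    # combined:       m = U * I / (g * v)

    g = 9.81     # local gravitational acceleration, m/s^2 (must itself be measured)
    v = 0.002    # coil velocity, m/s (hypothetical value)
    U = 0.5      # induced voltage, V (hypothetical value)
    I = 0.01     # balancing current, A (hypothetical value)

    m = U * I / (g * v)
    print(m, 'kg')   # ~0.25 kg for these made-up numbers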
You could perhaps even check your memory stick before and after you put a file on it and see if its mass has changed. But my dear friend Lavoisier, not unexpectedly, always loved the Avogadro project that determines the value of the Avogadro constant to high precision. Having 99.995% pure silicon makes the heart of a chemist beat faster. I deeply admire the efforts (and results) in making nearly perfect spheres out of them. The product of the Avogadro constant with the Planck constant NA h is related to the Rydberg constant. Fortunately, as we saw above, the Rydberg constant is known to about 11 digits; this means that knowing NA h to a high precision allows us to find the value of our beloved Planck constant h to high precision. In my lifetime, we started to understand the nature of the chemical elements. We knew nothing about isotopes yet—if you had told me that there are more than 20 silicon isotopes, I would not even have understood the statement: Silicon isotopes I am deeply impressed how mankind today can even sort the individual atoms by their neutron count. The silicon spheres of the Avogadro project are 99.995 % silicon 28—much, much more than the natural fraction of this isotope: Silicon spheres of the Avogadro project While the highest-end beam balances and mass comparators achieve precisions of 10–11, they can only compare masses but not realize one. Once the Planck constant has a fixed value using the watt balance, a mass can be constructively realized. I personally think the Planck constant is one of the most fascinating constants. It reigns in the micro world and is barely visible at macroscopic scales directly, yet every macroscopic object holds together just because of it. A few years ago I was getting quite concerned that our dream of eternal unit definitions would never be realized. I could not get a good night’s sleep when the value for the Planck constant from the watt balance experiments and the Avogadro silicon sphere experiments were far apart. How relieved I was to see that over the last few years the discrepancies were resolved! And now the working mass is again in sync with the international prototype. Before ending, let me say a few words about the Planck constant itself. The Planck constant is the archetypal quantity that one expects to appear in quantum-mechanical phenomena. And when the Planck constant goes to zero, we recover classical mechanics (in a singular limit). This is what I myself thought until recently. But since I go to the weekly afternoon lectures of Vladimir Arnold, which he started giving in the summer of 2010 after getting settled up here, I now have strong reservations against such simplistic views. In his lecture about high-dimensional geometry, he covered the symplectic camel; since then, I view the Heisenberg uncertainty relations more as a classical relic than a quantum property. And since Werner Heisenberg recently showed me the Brodsky–Hoyer paper on ħ expansions, I have a much more reserved view on the BZO cube (the Bronshtein–Zelmanov–Okun cGh physics cube). And let’s not forget recent attempts to express quantum mechanics without reference to Planck’s constant at all. While we understand a lot about the Planck constant, its obvious occurrences and uses (such as a “conversion factor” between frequency and energy of photons in a vacuum), I think its deepest secrets have not yet been discovered. We will need a long ride on a symplectic camel into the deserts of hypothetical multiverses to unlock it. 
And Paul Dirac thinks that the role of the Planck constant in classical mechanics is still not well enough understood. For the longest time, Max himself thought that in phase space (classical or through a Wigner transform), the minimal volume would be on the order of his constant h. As one of the fathers of quantum mechanics, Max still follows the conceptual developments today, especially the decoherence program. How amazed he was when sub-h structures were discovered 15 years ago. Eugene Wigner told me that he had conjectured such fine structures since the late 1930s. Since then, he has loved to play around with plotting Wigner functions for all kinds of hypergeometric potentials and quantum carpets. His favorite is still the Duffing oscillator's Wigner function. A high-precision solution of the time-dependent Schrödinger equation followed by a fractional Fourier transform-based Wigner function construction can be done in a straightforward and fast way. Here is how a Gaussian initial wavepacket looks after three periods of the external force. The blue rectangle is a region of area h in the x-p plane:

How Gaussian initial wavepacket looks after three periods of the external force

Here are some zoomed-in images (colored according to the sign of the Wigner function) of the last Wigner function. Each square has an area of 4 h and shows a variety of sub-Planckian structures:

Zoomed-in images of the last Wigner function
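(You do not need a high-precision Duffing solver to see such structures yourself. Here is a minimal sketch in Python, with ħ = 1, and not the blog's Duffing computation: the Wigner function of a superposition of two Gaussian wavepackets already shows interference cells on scales smaller than h. The packet separation and grid sizes are arbitrary choices.)

    import numpy as np

    hbar = 1.0
    x0   = 3.0   # the two packets sit at +x0 and -x0

    def psi(x):
        g1 = np.exp(-(x - x0)**2 / 2.0)
        g2 = np.exp(-(x + x0)**2 / 2.0)
        norm = np.sqrt(2 * np.sqrt(np.pi) * (1 + np.exp(-x0**2)))
        return (g1 + g2) / norm

    xs = np.linspace(-6, 6, 121)
    ps = np.linspace(-4, 4, 121)
    ys = np.linspace(-8, 8, 401)
    dy = ys[1] - ys[0]

    # W(x, p) = (1/(pi hbar)) * Int psi(x+y) psi(x-y) exp(2 i p y / hbar) dy
    W = np.zeros((len(ps), len(xs)))
    for i, p in enumerate(ps):
        phase = np.exp(2j * p * ys / hbar)
        for j, x in enumerate(xs):
            W[i, j] = np.real(np.sum(psi(x + ys) * psi(x - ys) * phase) * dy) / (np.pi * hbar)

    # the interference ridge near x = 0 oscillates in p with period ~pi*hbar/x0,
    # giving cells of area well below h = 2*pi*hbar
    print(W.max(), W.min())   # the negative values mark the quantum interference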
For me, the forthcoming definition of the kilogram through the Planck constant is a great intellectual and technological achievement of mankind. It represents two centuries of hard work at the metrological institutes, and it cements some of the deepest physical truths found in the twentieth century into the foundations of our unit system. At once, a whole slew of units, unit conversions, and fundamental constants will be known with greater precision. (Make sure you get a new CODATA sheet after the redefinition, and keep the pocket card with the new constant values with you until you know all the numbers by heart!) This will open a path to new physics and new technologies. In case you make your own experiments determining the values of the constants, keep in mind that the deadline for the inclusion of your values is July 1, 2017.

The transition from the platinum-iridium kilogram K to the kilogram based on the Planck constant h can be nicely visualized as a 3D object that contains both characters. Rotating it shows a smooth transition of the projection shape from K to h, representing over 200 years of progress in metrology and physics:

3D object of both the platinum-iridium kilogram and the Planck constant h

The interested reader can order a beautiful, shiny, 3D-printed version here. It will make a perfect gift for your significant other (or ask your significant other to get you one) for Christmas, to be ready for the 2018 redefinition, and you can show public support for it as a pendant or as earrings. (Available in a variety of metals; platinum is, obviously, the most natural choice, and it is under $5k—but the $82.36 polished silver version looks pretty nice too.) Here are some images of golden-looking versions of KToh3D (up here, gold, not platinum, is the preferred metal color):

Golden-looking versions of KToh3D

I realize that not everybody is (or can be) as excited as I am about these developments. But I look forward to the year 2018 when, after about 225 years, the kilogram as a material artifact will retire and a fundamental constant will replace it. The new SI will base our most important measurement standards on twenty-first-century technology. If the reader has questions or comments, don't hesitate to email me at jeancharlesdeborda@gmail.com; based on recent advances in the technological implications of EPR=ER, we now have a much faster and more direct connection to Earth. À tous les temps, à tous les peuples!
Wednesday, August 26, 2015

TGD view about blackholes and Hawking radiation: part I

The most recent revelation of Hawking came at the Hawking radiation conference held at the KTH Royal Institute of Technology in Stockholm. The title of Bee's posting about what might have been revealed is "Hawking proposes new idea for how information might escape from black holes". Also Lubos has a - rather aggressive - blog post about the talk. A collaboration of Hawking, Andrew Strominger and Malcolm Perry is behind the claim, and the work should be published within a few months.

The first part of the posting gives a critical discussion of the existing approach to black holes and Hawking radiation. The intention is to demonstrate that what is in question is a pseudo problem, following from the failure of General Relativity below the blackhole horizon. In the second part of the posting I will discuss the TGD view about blackholes and Hawking radiation. There are several new elements involved, but concerning black holes the most relevant new element is the assignment of Euclidian space-time regions to lines of generalized Feynman diagrams, implying that also blackhole interiors correspond to this kind of region. Negentropy Maximization Principle is also an important element, and it predicts that the number-theoretically defined blackhole negentropy can only increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in quantum biology?

Is information lost or not in blackhole collapse?

The basic problem is that classically the collapse to a blackhole seems to destroy all information about the matter collapsing into it. The outcome is just an infinitely dense mass point. There is also a theorem of classical GRT stating that a blackhole has no hair: a blackhole is characterized by only a few conserved charges. Hawking has predicted that a blackhole loses its mass by generating radiation, which looks thermal. As the blackhole radiates its mass away, all information about the material which entered the blackhole seems to be lost. If one believes in standard quantum theory and unitary evolution preserving the information, and also forgets standard quantum theory's prediction that state function reductions destroy information, one has a problem. Does the information really disappear? Or is the GRT description incapable of coping with the situation? Could information find a new representation?

Superstring models and AdS/CFT correspondence have inspired the proposal that a hologram results at the horizon, and that this hologram somehow catches the information by defining the hair of the blackhole. Since the radius of the horizon is proportional to the mass of the blackhole, one can however wonder what happens to this information as the radius shrinks to zero when all mass is Hawking-radiated out. What Hawking suggests is that a new kind of symmetry known as super-translations - a notion originally introduced by Bondi and Metzner - could somehow save the situation. Andrew Strominger has recently discussed the notion. The information would be "stored in super-translations". Unfortunately this statement says nothing to me, nor did it say anything to Bee or the New Scientist reporter. The idea however seems to be that the information carried by Hawking radiation emanating from the blackhole interior would be caught by the hologram defined by the blackhole horizon.
Super-translation symmetry acts on the surface of a sphere with infinite radius in asymptotically flat space-times looking like empty Minkowski space in very distant regions. The action would be translations along the sphere plus Poincare transformations. What comes to mind in the TGD framework is conformal transformations of the boundary of the 4-D lightcone, which act as scalings of the radius of the sphere and conformal transformations of the sphere. Translations however translate the tip of the light-cone, and Lorentz transformations transform the sphere to an ellipsoid, so that one should restrict to the rotation subgroup of the Lorentz group. Besides this, TGD allows a huge group of symplectic transformations of δCD × CP2 acting as isometries of WCW and having the structure of a conformal algebra with generators labelled by conformal weights.

Sharpening of the argument of Hawking

There is now a popular article explaining the intuitive picture behind Hawking's proposal. The blackhole horizon would involve a tangential flow of light, and particles of the infalling matter would induce super-translations on the pattern of this light, thus coding information about their properties to this light. After that this light would be radiated away as an analog of Hawking radiation and would carry out this information.

The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why would light rotate around it? I see no reason for this! The answer in the TGD framework would be obvious: the horizon is replaced for the TGD analog of a blackhole with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!

What are the problems?

My fate is to be an aggressive dissident listened to by no one, and I find it natural to continue in the role of angry old man. Be cautious, I am arrogant, I can bite, and my bite is poisonous!

1. With all due respect to the Big Guys, to me the problem looks like a pseudo problem caused basically by the breakdown of classical GRT. Irrespective of whether Hawking radiation is generated, the information about matter (apart from mass and some charges) is lost if the matter indeed collapses to a single infinitely dense point. This is of course very unrealistic, and the question should be: how should we proceed from GRT? The blackhole is simply too strong an idealization, and it is no wonder that Hawking's calculation using the blackhole metric as a background gives rise to blackbody radiation. One might hope that Hawking radiation is a genuine physical phenomenon, and might somehow carry the information by not being genuinely thermal radiation. Here a theory of quantum gravitation might help. But we do not have it!

2. What do we know about blackholes? We know that there are objects which can be well described by the exterior Schwarzschild metric. Galactic centers are regarded as candidates for giant blackholes. Binary systems for which the other member is invisible are candidates for stellar blackholes. One can however ask whether these candidates actually consist of dark matter rather than being blackholes. Unfortunately, we do not understand what dark matter is!
3. Hawking radiation is extremely weak, and there is no experimental evidence pro or con. Its existence assumes the existence of the blackhole, which presumably represents the failure of classical GRT. Therefore we might be seeing a lot of trouble and heated debates about something which does not exist at all! This includes blackholes, Hawking radiation, and various problems such as the firewall paradox.

There are also profound theoretical problems.

1. Contrary to the intensive media hype during the last three decades, we still do not have a generally accepted theory of quantum gravity. Superstring models and M-theory failed to predict anything at the fundamental level, and just postulate an effective quantum field theory limit, which assumes the analog of GRT at the level of the 10-D or 11-D target space to define the spontaneous compactification as a solution of this GRT type theory. Not much is gained. AdS/CFT correspondence is an attempt to do something in the absence of this kind of theory, but it involves 10- or 11-D blackholes and does not help much.

Reality looks much simpler to an innocent non-academic outsider like me. Effective field theorizing allows intellectual laziness, and many problems of present-day physics will probably be seen in the future as being caused by this lazy approach, which avoids attempts to build explicit bridges between physics at different scales. Something very similar has occurred in hadron physics and nuclear physics, and one has a kind of Augean stables to clean up before one can proceed.

2. A mathematically well-defined notion of information is lacking. We can talk about thermodynamical entropy - a single particle observable - and also about entanglement entropy - basically a 2-particle observable. We do not have a genuine notion of information, and the second law predicts that the best that one can achieve is no information at all! Could it be that our view about information as a single particle characteristic is wrong? Could information be associated with entanglement and be a 2-particle characteristic? Could information reside in the relationship of the object with the external world, in the communication line? Not inside the blackhole, not at the horizon, but in the entanglement of the blackhole with the external world.

3. We do not have a theory of quantum measurement. The deterministic unitary time evolution of the Schrödinger equation and the non-deterministic state function reduction are in blatant conflict. The Copenhagen interpretation escapes the problem by saying that no objective reality/realities exist. An easy trick once again! A closely related Pandora's box is that experienced time and geometric time are very different, but we pretend that this is not the case. The only way out is to make the observer part of quantum physics: this requires nothing less than a quantum theory of consciousness. But the gurus of theoretical physics have shown no interest in consciousness. It is much easier and much more impressive to apply mechanical algorithms to produce complex formulas.

If one takes consciousness seriously, one ends up with the question about the variational principle of consciousness. Yes, your guess was correct! Negentropy Maximization Principle! Conscious experience tends to maximize conscious information gain. But how is information represented?

In the second part I will discuss the TGD view about blackholes and Hawking radiation. See the chapter "Criticality and dark matter" or the article "TGD view about black holes and Hawking radiation".
Analysis - Mathematical Physics

Topic: Inverse problems for quantum graphs
Speaker: Pavel Kurasov
Affiliation: Stockholm University
Date & Time: Friday January 17th, 2020, 3:30pm - 4:30pm
Location: Simonyi Hall 101

To solve the inverse spectral problem for the Schrödinger equation on a metric graph one needs to determine:
• the metric graph;
• the potential in the Schrödinger equation;
• the vertex conditions (connecting the edges together).

The inverse problem is solved completely in the case of trees under mild restrictions on the vertex conditions. The main tool is a combination of the boundary control and M-function approaches to inverse problems. These two approaches are essentially equivalent in the case of a single interval, but their different features may be effectively exploited to solve different partial inverse problems for trees. The bunch cutting procedure allows one to reduce the tree step-by-step by removing edges and vertices close to the boundary.

To solve the inverse problem for graphs with cycles we propose to use magnetic boundary control and magnetic M-functions, where spectral data for a fixed potential are considered as functions of the magnetic fluxes through graph cycles. To solve the inverse problem we use a cycle opening procedure mapping spectral data for arbitrary graphs with cycles to spectral data for trees on the same edge set. The graph and potential are reconstructed assuming, so far, standard vertex conditions.
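For readers outside the field, a minimal statement of the setup (a generic formulation, not taken from the talk; "standard" is assumed here to mean the usual Kirchhoff conditions):

\[ -\frac{d^2 u_j}{dx^2} + q(x)\,u_j = \lambda\, u_j \quad \text{on each edge } e_j \cong [0,\ell_j], \]

with standard (Kirchhoff) conditions at every vertex v:

\[ u \ \text{continuous at } v, \qquad \sum_{e_j \ni v} \partial u_j(v) = 0 , \]

where the sum runs over the edges meeting v and each derivative is taken into the edge, away from the vertex.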
Quantum Network Theory (Part 1)

guest post by Tomi Johnson

If you were to randomly click a hyperlink on this web page and keep doing so on each page that followed, where would you end up? As an esteemed user of Azimuth, I'd like to think you browse more intelligently, but the above is the question Google asks when deciding how to rank the world's web pages.

Recently, together with the team (Mauro Faccin, Jacob Biamonte and Piotr Migdał) at the ISI Foundation in Turin, we attended a workshop in which several of the attendees were asking a similar question with a twist. "What if you, the web surfer, behaved quantum mechanically?"

Now don't panic! I have no reason to think you might enter a superposition of locations or tunnel through a wall. This merely forms part of a recent drive towards understanding the role that network science can play in quantum physics.

As we'll find, playing with quantum networks is fun. It could also become a necessity. The size of natural systems in which quantum effects have been identified has grown steadily over the past few years. For example, attention has recently turned to explaining the remarkable efficiency of light-harvesting complexes, comprising tens of molecules and thousands of atoms, using quantum mechanics. If this expansion continues, perhaps quantum physicists will have to embrace the concepts of complex networks.

To begin studying quantum complex networks, we found a revealing toy model. Let me tell you about it. Like all good stories, it has a beginning, a middle and an end. In this part, I'll tell you the beginning and the middle. I'll introduce the stochastic walk describing the randomly clicking web surfer mentioned above and a corresponding quantum walk. In part 2 the story ends with the bounding of the difference between the two walks in terms of the energy of the walker.

But for now I'll start by introducing you to a graph, this time representing the internet! If this taster gets you interested, there are more details available here:

• Mauro Faccin, Tomi Johnson, Jacob Biamonte, Sabre Kais and Piotr Migdał, Degree distribution in quantum walks on complex networks, arXiv:1305.6078 (2013).

What does the internet look like from above?

As we all know, the idea of the internet is to connect computers to each other. What do these connections look like when abstracted as a network, with each computer a node and each connection an edge? The internet on a local scale, such as in your house or office, might look something like this:

Local network with several devices connected to a central hub.

Each hub connects to other hubs, and so the internet on a slightly larger scale might look something like this:

Regional network

What about the full global, not local, structure of the internet? To answer this question, researchers have developed representations of the whole internet, such as this one:

Global network

While such representations might be awe inspiring, how can we make any sense of them? Or are they merely excellent desktop wallpapers and new-age artworks? In terms of complex network theory, there's actually a lot that can be said that is not immediately obvious from the above representation. For example, we find something very interesting if we plot the number of web pages with a given number of incoming links (called the degree) on a log-log axis.
What is found for the African web is the following:

Power law degree distribution

This shows that very few pages are linked to by a very large number of others, while a very large number of pages receive very few links. More precisely, what this shows is a power law distribution, the signature of which is a straight line on a log-log axis.

In fact, power law distributions arise in a diverse number of real world networks, both human-built networks such as the internet and naturally occurring networks. They are often discussed alongside the concept of preferential attachment: highly connected nodes seem to accumulate connections more quickly. We all know of a successful blog whose success has led to an increased presence and more success. That's an example of preferential attachment.

It's clear then that degree is an important concept in network theory, and its distribution across the nodes is a useful characteristic of a network. Degree gives one indication of how important a node is in a network. And this is where stochastic walks come in.

Google, who are in the business of ranking the importance of nodes (web pages) in a network (the web), use (up to a small modification) the idealized model of a stochastic walker (web surfer) who randomly hops to connected nodes (follows one of the links on a page). This is called the uniform escape model, since the total rate of leaving any node is set to be the same for all nodes. Leaving the walker to wander for a long while, Google then takes the probability of the walker being on a node to rank the importance of that node. In the case that the network is undirected (all links are reciprocated) this long-time probability, and therefore the rank of the node, is proportional to the degree of the node. (A small numerical sketch of this appears at the end of this section.)

So node degrees and the uniform escape model play an important role in the fields of complex networks and stochastic walks. But can they tell us anything about the much more poorly understood topics of quantum networks and quantum walks? In fact, yes, and demonstrating that to you is the purpose of this pair of articles.

Before we move on to the interesting bit, the math, it's worth just listing a few properties of quantum walks that make them hard to analyze, and explaining why they are poorly understood. These are the difficulties we will show how to overcome below.

No convergence. In a stochastic walk, if you leave the walker to wander for a long time, eventually the probability of finding a walker at a node converges to a constant value. In a quantum walk, this doesn't happen, so the walk can't be characterized so easily by its long-time properties.

Dependence on initial states. In some stochastic walks the long-time properties of the walk are independent of the initial state. It is possible to characterize the stochastic walk without referring to the initialization of the walker. Such a characterization is not so easy in quantum walks, since their evolution always depends on the initialization of the walker. Is it even possible then to say something useful that applies to all initializations?

Stochastic and quantum generators differ. Those of you familiar with the network theory series know that some generators produce both stochastic and quantum walks (see part 16 for more details). However, most stochastic walk generators, including that for the uniform escape model, do not generate quantum walks and vice versa. How do we then compare stochastic and quantum walks when their generators differ?

With the task outlined, let's get started!
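As a first taste, here is a minimal numerical sketch of the claim that the long-time probabilities of the uniform escape walk are proportional to the degrees. The 4-node graph is an arbitrary illustrative choice, and the generator used, S = I - A D^{-1}, is defined properly in the sections below:

import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small undirected, connected graph (illustrative choice).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

d = A.sum(axis=0)                      # degrees: 1, 3, 2, 2
S = np.eye(4) - A / d                  # uniform escape generator S = I - A D^{-1}

psi0 = np.array([1.0, 0.0, 0.0, 0.0])  # walker starts on the first node
psi_late = expm(-50.0 * S) @ psi0      # solve d(psi)/dt = -S psi for a long time

print(psi_late)                        # -> approximately [0.125, 0.375, 0.25, 0.25]
print(d / d.sum())                     # the degree-proportional prediction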
Graphs and walks

In the next couple of sections I'm going to explain the diagram below to you. If you've been following the network theory series, in particular part 20, you'll find parts of it familiar. But as it's been a while since the last post covering this topic, let's start with the basics.

Diagram outlining the main concepts

A simple graph G can be used to define both stochastic and quantum walks. A simple graph is something like this:

Illustration of a simple graph

where there is at most one edge between any two nodes, there are no edges from a node to itself and all edges are undirected. To avoid complications, let's stick to simple graphs with a finite number n of nodes. Let's also assume you can get from every node to every other node via some combination of edges, i.e., the graph is connected.

In the particular example above the graph represents a network of n = 5 nodes, where nodes 3 and 4 have degree (number of edges) 3, and nodes 1, 2 and 5 have degree 2.

Every simple graph defines a matrix A, called the adjacency matrix. For a network with n nodes, this matrix is of size n \times n, and each element A_{i j} is unity if there is an edge between nodes i and j, and zero otherwise (let's use this basis for the rest of this post). For the graph drawn above the adjacency matrix is

\left( \begin{matrix} 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 \end{matrix} \right)

By construction, every adjacency matrix is symmetric: A = A^T (the T means the transposition of the elements in the node basis) and further, because each A is real, it is self-adjoint: A = A^\dagger (the \dagger means conjugate transpose). This is nice, since (as seen in parts 16 and 20) a self-adjoint matrix generates a continuous-time quantum walk.

To recap from the series, a quantum walk is an evolution arising from a quantum walker moving on a network. A state of a quantum walk is represented by a size n complex column vector \psi. Each element \langle i , \psi \rangle of this vector is the so-called amplitude associated with node i and the probability of the walker being found on that node (if measured) is the modulus of the amplitude squared |\langle i , \psi \rangle|^2. Here i is the standard basis vector with a single non-zero ith entry equal to unity, and \langle u , v \rangle = u^\dagger v is the usual inner product.

A quantum walk evolves in time according to the Schrödinger equation

\displaystyle{ \frac{d}{d t} \psi(t)= - i H \psi(t) }

where H is called the Hamiltonian. If the initial state is \psi(0) then the solution is written as

\psi(t) = \exp(- i t H) \psi(0)

The probabilities | \langle i , \psi (t) \rangle |^2 are guaranteed to be correctly normalized when the Hamiltonian H is self-adjoint.

There are other matrices that are defined by the graph. Perhaps the most familiar is the Laplacian, which has recently been a topic on this blog (see parts 15, 16 and 20 of the series, and this recent post).
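Before moving on to the Laplacian, here is a minimal numerical sketch of the quantum walk just defined, taking the Hamiltonian to be the adjacency matrix A of the example graph (one natural choice appearing in the diagram):

import numpy as np
from scipy.linalg import expm

# Adjacency matrix of the 5-node example graph above.
A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

psi0 = np.zeros(5, dtype=complex)
psi0[0] = 1.0                      # walker starts on node 1

t = 2.0
psi_t = expm(-1j * t * A) @ psi0   # psi(t) = exp(-itH) psi(0), with H = A

probs = np.abs(psi_t) ** 2         # |<i, psi(t)>|^2
print(probs, probs.sum())          # node probabilities; the total stays 1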
The Laplacian L is the n \times n matrix

L = D - A

where the degree matrix D is an n \times n diagonal matrix with elements given by the degrees

\displaystyle{ D_{i i}=\sum_{j} A_{i j} }

For the graph drawn above, the degree matrix and Laplacian are:

\left( \begin{matrix} 2 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{matrix} \right) \qquad \mathrm{and} \qquad \left( \begin{matrix} 2 & -1 & 0 & -1 & 0 \\ -1 & 2 & -1 & 0 & 0 \\ 0 & -1 & 3 & -1 & -1 \\ -1 & 0 & -1 & 3 & -1 \\ 0 & 0 & -1 & -1 & 2 \end{matrix} \right)

The Laplacian is self-adjoint and generates a quantum walk.

The Laplacian has another property; it is infinitesimal stochastic. This means that its off-diagonal elements are non-positive and its columns sum to zero. This is interesting because an infinitesimal stochastic matrix generates a continuous-time stochastic walk.

To recap from the series, a stochastic walk is an evolution arising from a stochastic walker moving on a network. A state of a stochastic walk is represented by a size n non-negative column vector \psi. Each element \langle i , \psi \rangle of this vector is the probability of the walker being found on node i.

A stochastic walk evolves in time according to the master equation

\displaystyle{ \frac{d}{d t} \psi(t)= - H \psi(t) }

where H is called the stochastic Hamiltonian. If the initial state is \psi(0) then the solution is written

\psi(t) = \exp(- t H) \psi(0)

The probabilities \langle i , \psi (t) \rangle are guaranteed to be non-negative and correctly normalized when the stochastic Hamiltonian H is infinitesimal stochastic.

So far, I have just presented what has been covered on Azimuth previously. However, to analyze the important uniform escape model we need to go beyond the class of (Dirichlet) generators that produce both quantum and stochastic walks. Further, we have to somehow find a related quantum walk. We'll see below that both tasks are achieved by considering the normalized Laplacians: one generating the uniform escape stochastic walk and the other a related quantum walk.

Normalized Laplacians

The two normalized Laplacians are:

• the asymmetric normalized Laplacian S = L D^{-1} (that generates the uniform escape Stochastic walk) and

• the symmetric normalized Laplacian Q = D^{-1/2} L D^{-1/2} (that generates a Quantum walk).

For the graph drawn above the asymmetric normalized Laplacian S is

\left( \begin{matrix} 1 & -1/2 & 0 & -1/3 & 0 \\ -1/2 & 1 & -1/3 & 0 & 0 \\ 0 & -1/2 & 1 & -1/3 & -1/2 \\ -1/2 & 0 & -1/3 & 1 & -1/2 \\ 0 & 0 & -1/3 & -1/3 & 1 \end{matrix} \right)

The identical diagonal elements indicate that the total rates of leaving each node are identical, and the equality within each column of the other non-zero elements indicates that the walker is equally likely to hop to any node connected to its current node. This is the uniform escape model!

For the same graph the symmetric normalized Laplacian Q is

\left( \begin{matrix} 1 & -1/2 & 0 & -1/\sqrt{6} & 0 \\ -1/2 & 1 & -1/\sqrt{6} & 0 & 0 \\ 0 & -1/\sqrt{6} & 1 & -1/3 & -1/\sqrt{6} \\ -1/\sqrt{6} & 0 & -1/3 & 1 & -1/\sqrt{6} \\ 0 & 0 & -1/\sqrt{6} & -1/\sqrt{6} & 1 \end{matrix} \right)

That the diagonal elements are identical in the quantum case indicates that all nodes are of equal energy; this is the type of quantum walk usually considered.

Puzzle 1. Show that in general S is infinitesimal stochastic but not self-adjoint.

Puzzle 2. Show that in general Q is self-adjoint but not infinitesimal stochastic.
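If you'd like a quick numerical check of Puzzles 1 and 2 on the example graph before proving them in general (a sketch, not a proof):

import numpy as np

A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

d = A.sum(axis=0)                                # degrees (2, 2, 3, 3, 2)
L = np.diag(d) - A                               # Laplacian

S = L @ np.diag(1 / d)                           # asymmetric: L D^{-1}
Q = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)  # symmetric: D^{-1/2} L D^{-1/2}

print(np.allclose(S.sum(axis=0), 0))             # True:  columns of S sum to zero
print((S - np.diag(np.diag(S)) <= 0).all())      # True:  off-diagonals of S non-positive
print(np.allclose(S, S.T))                       # False: S is not self-adjoint
print(np.allclose(Q, Q.T))                       # True:  Q is self-adjoint
print(np.allclose(Q.sum(axis=0), 0))             # False: Q is not infinitesimal stochastic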
So a graph defines two matrices: one S that generates a stochastic walk, and one Q that generates a quantum walk. The natural question to ask is whether these walks are related. The answer is that they are! Underpinning this relationship is the mathematical property that S and Q are similar. They are related by the following similarity transformation

S = D^{1/2} Q D^{-1/2}

which means that any eigenvector \phi_k of Q associated to eigenvalue \epsilon_k gives a vector \pi_k \propto D^{1/2} \phi_k that is an eigenvector of S with the same eigenvalue! To show this, insert the identity I = D^{-1/2} D^{1/2} into Q \phi_k = \epsilon_k \phi_k and multiply from the left with D^{1/2} to obtain

\begin{aligned} (D^{1/2} Q D^{-1/2} ) (D^{1/2} \phi_k) &= \epsilon_k ( D^{1/2} \phi_k ) \\ S \pi_k &= \epsilon_k \pi_k \end{aligned}

The same works in the opposite direction. Any eigenvector \pi_k of S gives an eigenvector \phi_k \propto D^{-1/2} \pi_k of Q with the same eigenvalue \epsilon_k.

The mathematics is particularly nice because Q is self-adjoint. A self-adjoint matrix is diagonalizable, and has real eigenvalues and orthogonal eigenvectors. As a result, the symmetric normalized Laplacian can be decomposed as

Q = \sum_k \epsilon_k \Phi_k

where \epsilon_k is real and the \Phi_k are orthogonal projectors. Each \Phi_k acts as the identity only on vectors in the space spanned by \phi_k and as zero on all others, such that \Phi_k \Phi_\ell = \delta_{k \ell} \Phi_k.

Multiplying from the left by D^{1/2} and from the right by D^{-1/2} results in a similar decomposition for S:

S = \sum_k \epsilon_k \Pi_k

with orthogonal projectors

\Pi_k = D^{1/2} \Phi_k D^{-1/2}

I promised above that I would explain the following diagram:

Diagram outlining the main concepts (again)

Let's summarize what it represents now: G is a simple graph that specifies A the adjacency matrix (generator of a quantum walk), which subtracted from D the diagonal matrix of the degrees gives L the symmetric Laplacian (generator of stochastic and quantum walks), which when normalized by D returns both S the generator of the uniform escape stochastic walk and Q the quantum walk generator to which it is similar!

What next?

Sadly, this is where we'll finish for now. We have all the ingredients necessary to study the walks generated by the normalized Laplacians and exploit the relationship between them. Next time, in part 2, I'll talk you through the mathematics of the uniform escape stochastic walk S and how it connects to the degrees of the nodes in the long-time limit. Then I'll show you how this helps us solve aspects of the quantum walk generated by Q.

In other news

Before I leave you, let me tell you about a workshop the ISI team recently attended (in fact helped organize) at the Institute for Quantum Computing, on the topic of quantum computation and complex networks. Needless to say, there were talks on papers related to quantum mechanics and networks!

Some researchers at the workshop gave exciting talks based on numerical examinations of what happens if a quantum walk is used instead of a stochastic walk to rank the nodes of a network:

• Giuseppe Davide Paparo and Miguel Angel Martín-Delgado, Google in a quantum network, Sci. Rep. 2 (2012), 444.

• Eduardo Sánchez-Burillo, Jordi Duch, Jesús Gómez-Gardenes and David Zueco, Quantum navigation and ranking in complex networks, Sci. Rep. 2 (2012), 605.
Others attending the workshop have numerically examined what happens when using quantum computers to represent the stationary state of a stochastic process:

• Silvano Garnerone, Paolo Zanardi and Daniel A. Lidar, Adiabatic quantum algorithm for search engine ranking, Phys. Rev. Lett. 108 (2012), 230506.

It was a fun workshop and we plan to organize/attend more in the future!

33 Responses to Quantum Network Theory (Part 1)

1. John Baez says: Great post! I especially like how you use quantum versus stochastic walks to organize your treatment of the various Laplacian-like operators associated to a graph! Previously these various operators seemed like a bit of a mess to me.

It would be good (and probably not hard) to generalize this whole discussion to weighted simple graphs, i.e., those with a positive number labelling each edge. The idea is that the adjacency matrix of a weighted graph is a matrix A of numbers where A_{ij} is the weight of the edge between i and j, or zero if there's no edge. Weighted graphs are important because in general, 'not every link is created equal'. Things in general flow more easily, or more often, through some edges than others!

Also, as we let the weight of an edge approach zero, our weighted graph can be seen as approaching a graph where that edge doesn't exist. So, we get a nice topology on the set of all weighted graphs with a given set of vertices. And if you think about it a while, the resulting space of weighted graphs is just the space of symmetric matrices with nonnegative entries. So the math should get very nice.

• Actually, in the paper we work entirely on weighted graphs (which are natural both for stochastic and quantum evolution). However, the tricky thing is the interpretation of the Hamiltonian (i.e. a Hermitian matrix) with real, nonnegative entries. If it were just for real entries, it would be a Hamiltonian with time-reversal symmetry. But do you have any ideas how to interpret the restriction on having only nonnegative entries?

• Piotr, can you rephrase the question? Do you mean a 'physics' interpretation, or a 'graph/ranking' interpretation? For ordinary QM, one typically has H = T + V where T is the kinetic part, i.e. the Laplacian, and V is the potential; some interaction term. In the above post, V = 0. For graphs, the entries forming the Laplacian must, by definition, sum to zero: the diagonal entries are positive, off-diagonal are negative. So if all matrix entries are positive, then there must be some (strong, non-local) V term that describes some interaction between different nodes. (A local V would be zero off-diagonal.)

To study V, take your H, subtract whatever you think your Laplacian is, and look at what's left over. It will presumably be recognizable as something: I dunno, maybe exp of adjacency matrix or something …

I don't get your time-reversal comment at all: for time-reversal in QM, you must change the sign of both time and energy (i.e. the energy eigenvalues); so this has little to do with the signs in the matrix. And stochastic systems are essentially never time-reversible (the Frobenius-Perron eigenvalue gives the rate of decay to equilibrium).

For stochastic systems, one also has the concept of a "wandering set"; a set of points that wander away from their initial locations, never to return. Here, the analog is, I guess, web pages that have no incoming links (or blocks of web pages that have no incoming links), so that their stochastic probability (page rank) shrinks to zero.
If you think in terms of measure, that measure is leaving one location, and it has to go somewhere else: it accumulates somewhere (viz, becomes perfectly uniform on the final equilibrium state). This doesn't generalize to quantum systems; instead you get a kind of Poincaré recurrence or ringing/beating/interference effect. So: free-associating: I'm guessing that perhaps your all-positive entries are trying to capture some net flow from one part of the graph to another. OK, this last sentence doesn't actually make sense, I'm thinking aloud.

• Piotr knows this, and we're even working on this topic together, but: Time reversal symmetry should be with respect to a specific quantity. Assuming no spin, and when considering transfer probabilities, then Hamiltonians with real entries in the site basis generate time-inversion symmetric transition rates.

2. John Baez says: I will be going to China in a few minutes, and will be there until August 20. During this time I may not post to this blog very often, or at all.

3. Almost everything developed here also applies equally well to any homogeneous space, right? That is, the vector \psi is replaced by a point p in the homogeneous space, and the various matrices above by group elements of whatever group it is that is acting on the homogeneous space. This abstraction is very rarely done; I've always wondered what I'm missing because of it. I figure there are two reasons for this: 1) the authors are not familiar enough with the general idea of homogeneous spaces to be able to make specific claims that they're confident of (heck, I'm certainly not), and 2) some part of the problem definition fails to generalize. The 1) I can excuse, but 2) leaves me hanging.

• Tomi Johnson says: Great comment, interesting point. I think I largely fall into the first category: not knowing much about homogeneous spaces. I agree, there should be no problem mathematically transferring what we've done here onto a continuous space. For the quantum case this would just be something like the standard single particle Schroedinger equation in a potential with a modified kinetic energy term to account for the geometry of the network. The classical case would be some similar stochastic diffusion equation. I'll have a think about this!

• linasv says: Well, I think I'm saying that both quantum and classical are special cases of a general framework. The classical case uses points that live in a simplex (total probability sums to 1); Markov matrices relate/move the various points. The quantum case uses points that live in CP^n (viz, the n-body wave-function) with U(n) matrices to relate/move the various points. The general case has points on some general manifold, with some matrices to move them around. To keep some amount of symmetry/invariance in the problem, it seems like the manifold should be a homogeneous space. The Schroedinger eqn is a thing that lives in the tangent space of the manifold. …

Perhaps what I'm saying is that I don't understand why the Laplacians are what they are. I mean, I understand at the shallow level: the quantum case must be symmetric, to get unitarity; the classical case must decay, to get probabilities that sum to 1. These are given axiomatically: these are the rules of the game. What's the general case? How is the specific manifold forcing the Laplacian into the specific form it's taking? I don't quite see this 'big picture'.
• Tomi Johnson says: Perhaps the reason that the formalism for unitary quantum and stochastic dynamics above has not been generalized (to our knowledge) is that it is unclear what other physical objects \psi and evolutions d \psi / dt = H \psi would fall under this generalization.

Perhaps one is the generalized quantum dynamics, where a physical object \rho, the density matrix (trace 1, Hermitian, positive), is evolved according to d \rho / dt = L \rho, where L is not a Hamiltonian, but some superoperator that preserves the trace, Hermiticity and positivity, e.g. L \rho = -i [H,\rho], where H is Hermitian. In fact, in this formalism you could simultaneously include both quantum dynamics under the symmetric normalized Laplacian and the stochastic dynamics under the asymmetric normalized Laplacian.

When we're back at ISI after the holidays, I'll raise your point with the others there, and see what they think.

• John Baez says: By the way, I fixed your LaTeX. The correct way to use LaTeX on this blog is described right above the box where you type your comments. You need to include the word 'latex' in the manner described. Sorry it took a while to approve your comments—I've had intermittent access to the internet.

• linasv says: Thanks Tomi; of course, I'm just being lazy and could google up enough to keep me busy to answer my own questions. Here's maybe another way to ask them: if one studies finite automata, one soon realizes that their state transitions live on a graph, and that the probabilistic finite automaton is kind of like a Markov chain. The other thing one discovers is that a finite automaton is just the action of a monoid on a set. If the set is a simplex, and the representation of the monoid element is a Markov matrix, then you've got your classic radio signal engineering problem. If the set is CP^n and the representation of monoid elements is U(n) then you've got a quantum finite automaton. But these are just two special cases; the general case is studied, e.g. google suggests: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=

The difference between the finite automata and what you are doing is that in the finite automata, the graph edges are labelled with a symbol (the monoid element), and thus a graph walk corresponds to a sequence of symbols (edges walked). A random walk on a graph induces a measure on the set of strings of symbols (aka 'the language'). If the random walk is independent of the history of the path, then it is Markovian, and the measure factorizes. (If the random walk is not independent of the history, then it must be generated by a push-down automaton (context free language) or even a full Turing machine.)

If I take a random walk on a graph, and, after coming to a vertex, I assign equal probability to leaving by any edge, then I get your stochastic Laplacian, above. Or, as John Baez suggests, I could twiddle the edge weights, and preferentially leave on some edges. Or I could twiddle the exit probabilities *at each vertex* (i.e. each edge has two weights not one, depending on whether one is coming or going), and recover a classic Markov chain (minus diagonal) instead of the Laplacian. And here is where I get confused.
The word ‘Markov’ in my last paragraph is not really the same as in my first paragraph, and yet it's closely related, and so mentally I circle back, and wonder "what other sets can the monoid act on?" … and perhaps I now can answer my question, like so: In measure theory, measures must be real (and positive), and so the measure assigned to a language (the set of all random walks on a graph) is real-valued, and thus "stochastic". Perhaps(?) one can contemplate measures with an additional U(1) in them, and perhaps this is what the quantum walk is providing!? So when I ask "what other sets can the monoid act on, and what would the Laplacian, etc. generalize to in such cases?" then perhaps I am contemplating set-valued measures on languages? Hmmmm. Sorry for the long post. Sometimes, mathematics is like a visit to a candy shop; each treat looks more delicious than the last, and picking out just one to enjoy is just too hard.

• linasv says: p.s. I goofed in my last post, I mis-characterized what the definition of the language of an FA is. (It's the sequence of vertexes, not the sequence of edges; the edge sequence is the coding; the vertexes are the plain-text.) Caveat Emptor.

4. amarashiki says: Fascinating! It seems this is going to be another of your great series, John! I can't wait to read the next one. BTW, networks are pretty much like hypergraphs (I think I told you this before…) and I found the "map-of-internet" very brainy. Off-topic: how did you write the text in boxes? Just curious! I am planning to release my domain soon and this class of tricks could be useful for my LaTeXing article series. Best, JFGH

• Tomi Johnson says: Thanks for the comment. John might have changed it a little bit, but my original suggestion was to use the html to create the boxes (I just copied this from what is used on the Azimuth forum). Hope that helps!

• John Baez says: Amarashiki wrote: Tomi Johnson wrote this, so he deserves all the credit… even for figuring out how to put text in boxes. It works like this:

<div style="background:#fff1f1;border:solid black;border-width:2px 1px;padding:0 1em;margin:0 1em;overflow:auto;">

5. domenico says: I am thinking, now, of a unification of the two physics descriptions. If the Hamiltonian is a function of the wave functions, and it can be real or complex, then the stochastic and quantum descriptions are the same: it is possible to write the Taylor series with wave function terms and complex numbers. In other words, the solution can be the same for stochastic, or quantum, evolution.

• For example, if a Hamiltonian matrix is symmetric, from this property we can say that all stationary states can be chosen to take only real values. This is a physical (sub) consequence of what Piotr mentioned—sub because it is only a consequence of the fact that the Hamiltonian has real entries. If they're real non-negative, this additional restriction results in additional mathematical properties.

• Tomi Johnson says: I agree that it's a very nice property that unitary quantum dynamics under Hamiltonian -iH is identical to stochastic dynamics under Hamiltonian H. The methods to solve/simulate one type of dynamics can therefore be transferred to the other. This is something I've worked on in the context of (tensor network-based) numerical methods for efficiently near-exactly simulating stochastic dynamics. These types of methods were devised for simulating quantum dynamics, but we applied them to stochastic dynamics in the following paper,

• T. H. Johnson, S. R. Clark, and D.
Jaksch, Dynamical simulations of classical stochastic systems using matrix product states, Phys. Rev. E 82 (2010), 036702; arXiv:1006.2639,

if you're interested. We plan to publish more work on this very soon.

6. I have found this post to be extremely well written and clearly explained, and overall very easy to follow. Nice work! Looking forward to the rest of the series.

7. Ramsay says: Thanks for the nice post. I am curious as to why the "asymmetric normalized Laplacian" is defined as it is. Specifically, I am more used to seeing an operator that is the transpose of S, i.e., X = D^{-1}L. Then, at least in the contexts that I have studied, it is natural to introduce the inner product \langle x, y \rangle_{D} = x^{T} D y, and the operator X is self-adjoint with respect to this inner product. The inner product is natural in many contexts, where the elements of D encode a measure of the "importance" or weight associated with each node.

• Tomi Johnson says: Thanks for the comment! I think the matter of the transpose is easily resolved. We consider stochastic (or transition) matrices to act to the right on column probability vectors. Others consider them to act to the left on row probability vectors. The difference between the two formalisms is just a transpose. As for the inner product, that is a nice fact, thanks for pointing it out. Do you have a link to any of the contexts in which you came across it?

• My thoughts are: i. I think it's the difference between letting operators act to the left, or in our case, the right, on probability vectors. ii. Even in the wikipedia definition of "random walk normalized Laplacian" it's defined as mentioned. iii. For the inner product, please provide a link so I can look at how it's used exactly. iv. We are forced to work in the site basis of the walker. This is the basis that an operator must be self-adjoint with respect to. We can redefine the inner product, to take away the asymmetry, but we could also just multiply by D to accomplish the same goal. While it's true that you define self-adjoint with respect to an inner product, it's not clear how that helps us in any way. In fact, if you try to write the operator with respect to this new inner product, it just removes D as we already mentioned.

• Ramsay says: Thanks. I should clarify that "the contexts that I have studied" are quite far removed from the random walker setting being discussed here. I also asked the question naively, without having read your paper. The contexts that I am familiar with are discrete Laplacians modeled on the Laplacian of a Riemannian manifold. Typically the domain would be a geometric simplicial complex, and D^{-1}L would be such that D is a diagonal matrix whose entries are the volumes of closed (Voronoi) cells associated with the vertices: they sum to the volume of the manifold. The same idea is sometimes used with graphs. For example, see the discussion on p. 297 (5th page) of Fujiwara's paper "Growth and the spectrum of the Laplacian of an infinite graph". There is a lot of development of this idea into a "discrete exterior calculus", where D can be seen as an instance of a discrete Hodge star operator. For me to really appreciate the point (iv) you made, I would need to study your paper, but I haven't done that yet. Your point (i) answers my question on why the definition is as it is (i.e., a convention to use row vectors instead of column vectors), although my first hit at wikipedia contradicts your point (ii):
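Ramsay's inner product is easy to test numerically. A small sketch (rebuilding the example graph's L and degrees d from the post) confirms that X = D^{-1}L is indeed self-adjoint with respect to \langle x, y \rangle_D = x^T D y, which is one more face of the similarity between S and Q discussed in the post:

import numpy as np

# The example graph's Laplacian L and degree matrix D, rebuilt from the post.
A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
d = A.sum(axis=0)
D = np.diag(d)
L = D - A

X = np.diag(1 / d) @ L                 # Ramsay's operator X = D^{-1} L

rng = np.random.default_rng(0)
x, y = rng.random(5), rng.random(5)

lhs = x @ D @ (X @ y)                  # <x, X y>_D
rhs = (X @ x) @ D @ y                  # <X x, y>_D
print(np.isclose(lhs, rhs))            # True: X is self-adjoint w.r.t. <.,.>_D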
8. […] Last time I told you how a random walk called the 'uniform escape walk' could be used to analyze a network. In particular, Google uses it to rank nodes. For the case of an undirected network, the steady state of this random walk tells us the degrees of the nodes—that is, how many edges come out of each node. […]

9. Arjun Jain says: Nice post. Small typo: phi_k -> phi_k above the diagram in the section on the normalized laplacian.

10. In this blog post I will introduce some basics of quantum mechanics, with the emphasis on why a particle being in a few places at once behaves measurably differently from a particle whose position we just don't know. It's a kind of continuation of the "Quantum Network Theory" series by Tomi Johnson about our work in Jake Biamonte's group at the ISI Foundation in Turin.
This Quantum World/Implications and applications/Why energy is quantized

Why energy is quantized

The time-independent Schrödinger equation can be cast in the form

{d^2\psi(x)\over dx^2}=A(x)\,\psi(x),\qquad A(x)={2m\over\hbar^2}\Big[V(x)-E\Big].

Since this equation contains no complex numbers except possibly \psi itself, it has real solutions, and these are the ones in which we are interested. You will notice that if V>E, then A is positive and \psi(x) has the same sign as its second derivative. This means that the graph of \psi(x) curves upward above the x axis and downward below it. Thus it cannot cross the axis. On the other hand, if V<E, then A is negative and \psi(x) and its second derivative have opposite signs. In this case the graph of \psi(x) curves downward above the x axis and upward below it. As a result, the graph of \psi(x) keeps crossing the axis — it is a wave. Moreover, the larger the difference E-V, the larger the curvature of the graph; and the larger the curvature, the smaller the wavelength. In particle terms, the higher the kinetic energy, the higher the momentum.

[Figure: a potential energy well V(x), with the energy E crossing V at the classical turning points x_1 and x_2]

Observe, to begin with, that at x_1 and x_2, where E=V, the slope of \psi(x) does not change since d^2\psi(x)/dx^2=0 at these points. This tells us that the probability of finding the particle cannot suddenly drop to zero at these points. It will therefore be possible to find the particle to the left of x_1 or to the right of x_2, where classically it could not be. (A classical particle would oscillate back and forth between these points.)

Next, take into account that the probability distributions defined by \psi(x) must be normalizable. For the graph of \psi(x) this means that it must approach the x axis asymptotically as x\rightarrow\pm\infty.

Suppose that we have a normalized solution for a particular value E. If we increase or decrease the value of E, the curvature of the graph of \psi(x) between x_1 and x_2 increases or decreases. A small increase or decrease won't give us another solution: \psi(x) won't vanish asymptotically for both positive and negative x. To obtain another solution, we must increase or decrease E by just the right amount to increase or decrease by one the number of wave nodes between the "classical" turning points x_1 and x_2 and to make \psi(x) again vanish asymptotically in both directions.

The bottom line is that the energy of a bound particle — a particle "trapped" in a potential well — is quantized: only certain values E_k yield solutions \psi_k(x) of the time-independent Schrödinger equation:

{d^2\psi_k(x)\over dx^2}=A_k(x)\,\psi_k(x),\qquad A_k(x)={2m\over\hbar^2}\Big[V(x)-E_k\Big].
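The argument above is exactly the logic of the numerical "shooting method", and it is easy to watch the quantization happen on a computer. The following sketch is an illustration, not part of the original text: it uses units in which 2m/\hbar^2 = 1 and an arbitrarily chosen square well of depth 50, integrates the equation from the far left, and scans E for the sign changes of the right-hand tail that signal bound states.

import numpy as np

# Shooting method for psi'' = (V(x) - E) psi, in units with 2m/hbar^2 = 1.
# Illustrative potential: a square well of depth 50 on the interval (-1, 1).
def V(x):
    return -50.0 if abs(x) < 1.0 else 0.0

def tail(E, x_max=4.0, n=4000):
    """Integrate from the far left with nearly vanishing initial data and
    return psi at the far right; for a bound state the tail passes through zero."""
    xs = np.linspace(-x_max, x_max, n)
    h = xs[1] - xs[0]
    psi = np.zeros(n)
    psi[0], psi[1] = 0.0, 1e-6          # effectively zero at the left edge
    for i in range(1, n - 1):           # simple second-order finite difference
        psi[i + 1] = 2 * psi[i] - psi[i - 1] + h * h * (V(xs[i]) - E) * psi[i]
    return psi[-1]

# Scan trial energies inside the well; a sign change of the tail between
# two neighbouring trial values brackets a quantized energy E_k.
Es = np.linspace(-49.9, -0.1, 500)
tails = [tail(E) for E in Es]
for E1, E2, t1, t2 in zip(Es, Es[1:], tails, tails[1:]):
    if t1 * t2 < 0:
        print(f"bound state between E = {E1:.2f} and E = {E2:.2f}")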