Quantum Machine Learning: Introduction to Quantum Computation
In this guide we discuss several paradigms for quantum computing: gate-model quantum computing, adiabatic quantum computing, and quantum annealing.
By Peter Foy
In this guide we're going to discuss several paradigms for quantum computation, in particular:
1. Gate-Model Quantum Computing
2. Adiabatic Quantum Computing
3. Quantum Annealing
4. Implementation of Quantum Architectures
5. Variational Circuits & the Quantum Approximate Optimization Algorithm
6. Summary of Quantum Computation
This guide is based on this course on Quantum Machine Learning from U of T, and if you want to read more about this subject check out our other guides from this course.
IBM’s Q System One | Credit: IBM
1. Gate-Model Quantum Computing
Gate-model quantum computing is also called the universal quantum computing model.
Most of the academic and commercial efforts in quantum computing today focus on this model including Google, Rigetti, IBM Q, and many others.
As we'll see, the software stack to create an algorithm for a quantum computer is different from what we're used to classically. But first, we need to understand how the gates of quantum computers work.
The diagram below shows how we can go from a problem we want to solve to computing it with a quantum processor or simulator:
Let's say we want to solve the travelling salesman problem, which asks: given a list of cities and the distances between each pair of them, what is the shortest possible route that visits each city exactly once and returns to the starting city?
This is an NP-hard problem in combinatorial optimization, and we want to find a matching quantum optimization algorithm to solve it.
The algorithm we'll discuss later is called QAOA: the Quantum Approximate Optimization Algorithm.
The quantum algorithm is then translated into a quantum circuit, which is composed of gates (unitary operations).
Below this level, the actual compilation is happening in the quantum compiler.
Once the compilation is finished you can either execute the program on a quantum processor (QPU) or on a quantum simulator.
There is a theorem called the Solovay-Kitaev theorem, which says that you can approximate any unitary operation by a finite set of gates.
This theorem has been described as:
One of the most important fundamental results in the field of quantum computation.
This is what makes the gate model of a quantum computer universal: it can take any unitary operation and decompose it into elementary gates.
So a quantum computer is universal in the sense that it can take any quantum state and transform it into any other quantum state.
Let's look at circuits in more detail.
Quantum Circuits
Quantum circuits are composed of three things:
• Qubit registers
• Gates acting on them
• Measurements on the registers
Qubit registers are indexed from 0, so we say qubit 0, qubit 1, etc.
This is not to be confused with the state of the qubit, as qubit 1 can have a state \(|0\rangle\), for example.
Quantum Gates
Let's move on to gates. In a classical computer the processor transforms bit strings with logical gates.
Any transformation of bit strings can be built from a small universal set of logical gates — for instance, NAND alone, or AND together with NOT — which is what makes universal classical computation possible.
The same is also true for a quantum computer: any unitary operation can be decomposed into gates.
The difference with quantum computation is that a small universal set — commonly three gate types, such as the Hadamard, T, and CNOT gates — is sufficient to approximate any unitary (even though there are infinitely many single-qubit operations, since a qubit state can lie anywhere on the surface of the Bloch sphere).
A few of the most common unitary gates include:
• X gate - the Pauli-X or NOT gate has a matrix of \(\begin{bmatrix}0 & 1\\ 1& 0\end{bmatrix}\)
• Z gate - the Pauli-Z gate has a matrix of \(\begin{bmatrix}1 & 0\\ 0& -1\end{bmatrix}\)
• H gate - the Hadamard gate has a matrix of \(\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1& -1\end{bmatrix}\)
To see more quantum gate examples, check out the Quantum logic gate Wikipedia page.
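As a quick illustration (a minimal NumPy sketch, not code from the guide), we can build the X, Z, and H gates as matrices, apply them to the \(|0\rangle\) state, and check that they are unitary:

```python
import numpy as np

# Single-qubit basis state |0>
ket0 = np.array([1, 0], dtype=complex)

# Common single-qubit gates as 2x2 unitary matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # Pauli-X (NOT)
Z = np.array([[1, 0], [0, -1]], dtype=complex)                # Pauli-Z
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard

for name, gate in [("X", X), ("Z", Z), ("H", H)]:
    # A gate is unitary if U^dagger U = I
    assert np.allclose(gate.conj().T @ gate, np.eye(2))
    print(name, "applied to |0> :", gate @ ket0)
```

Running this shows, for example, that X flips \(|0\rangle\) to \(|1\rangle\) and H produces the equal superposition \((|0\rangle + |1\rangle)/\sqrt{2}\).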
The quantum computers that we have today can only execute gate sequences with ~10-20 gates, but ideally, we want to scale this up so we can execute longer algorithms.
2. Adiabatic Quantum Computing
We now know that the gate model quantum computer is a universal way of transforming quantum states into other quantum states.
Let's now look at a different model called adiabatic quantum computing (AQC).
Adiabatic quantum computers can also achieve universal quantum computation, and they rely on the adiabatic theorem to do calculations.
Adiabatic quantum computing is closely related to quantum annealing (which we'll discuss next) — annealing can be viewed as a relaxed, non-ideal version of the adiabatic protocol — but there are subtle differences between the two.
The Hamiltonian & Schrödinger's Equation
To understand adiabatic quantum computing we need to introduce the Hamiltonian, which is an object describing the energy of a classical or quantum system.
The Hamiltonian also provides a description of a system evolving with time.
Formally, we can express this with the Schrödinger equation:
\(i\hbar \frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle,\)
where \(\hbar\) is the reduced Planck constant.
As discussed earlier, a quantum computation applies a unitary operator to the state. Solving the Schrödinger equation for a time-independent Hamiltonian over a time \(t\) gives exactly such a unitary: \(U = \exp(-i Ht/\hbar)\).
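To make this concrete, here is a small sketch (an illustration, not code from the guide) that builds the time-evolution unitary \(U = \exp(-iHt/\hbar)\) for a toy two-level Hamiltonian using SciPy's matrix exponential, and checks that it is unitary:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                      # work in natural units
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # toy Hamiltonian: Pauli-X (a Rabi-type coupling)
t = 0.3

# Time-evolution operator U = exp(-i H t / hbar)
U = expm(-1j * H * t / hbar)

assert np.allclose(U.conj().T @ U, np.eye(2))    # unitarity check

psi0 = np.array([1.0, 0.0], dtype=complex)       # start in |0>
psi_t = U @ psi0
print("State at time t:", psi_t)
```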
The Adiabatic Theorem
An adiabatic process is when gradually changing conditions allow a system to adapt to its new configuration:
Gradually changing conditions allow the system to adapt its configuration, hence the probability density is modified by the process. If the system starts in an eigenstate of the initial Hamiltonian, it will end in the corresponding eigenstate of the final Hamiltonian.
In simpler terms, in an adiabatic process the conditions change slowly enough for the system to adapt to a new configuration.
For example, we can slowly change a Hamiltonian \(H_0\) into another Hamiltonian \(H_1\); the simplest such interpolation is linear:
\(H(t) = (1-t) H_0 + t H_1,\)
where \(t\) runs from 0 to 1.
Since this Hamiltonian depends on time, the Schrödinger equation becomes significantly more complicated to solve.
But the adiabatic theorem tells us that if the change happens slowly enough, a system prepared in the ground state of the simple Hamiltonian \(H_0\) ends up in the ground state of the complicated Hamiltonian \(H_1\).
As Wikipedia describes:
First, a (potentially complicated) Hamiltonian is found whose ground state describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to the ground state. Finally, the simple Hamiltonian is adiabatically evolved to the desired complicated Hamiltonian.
Adiabatic quantum computers use this phenomenon to perform universal calculations.
This paper showed that adiabatic quantum computation is polynomially equivalent to the conventional gate-model of quantum computing.
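As a rough illustration (a toy two-qubit problem of my own choosing, not an example from the guide), the sketch below interpolates between a simple transverse-field Hamiltonian \(H_0\) and a problem Hamiltonian \(H_1\), and tracks the spectral gap along the pathway — the smaller the minimum gap, the slower the adiabatic evolution must be:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def two_qubit(op1, op2):
    return np.kron(op1, op2)

# Simple initial Hamiltonian: transverse field on both qubits (easy ground state)
H0 = -(two_qubit(sx, I2) + two_qubit(I2, sx))
# "Complicated" problem Hamiltonian: Ising coupling plus a small field so the
# ground state is unique
H1 = -two_qubit(sz, sz) - 0.5 * two_qubit(sz, I2)

gaps = []
for t in np.linspace(0, 1, 101):
    Ht = (1 - t) * H0 + t * H1           # linear interpolation H(t)
    evals = np.linalg.eigvalsh(Ht)
    gaps.append(evals[1] - evals[0])     # gap between ground and first excited state

print("Minimum spectral gap along the pathway:", min(gaps))
```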
Adiabatic quantum computing is somewhat idealized, so let's now move on to a less idealized version: quantum annealing.
3. Quantum Annealing
Quantum annealing is the second most common paradigm for quantum computation.
Unlike the gate model and adiabatic quantum computing, quantum annealing solves a more specific problem, and universality is not a requirement.
One aspect of adiabatic quantum computing that makes it somewhat idealized is that calculating the speed limit for change is not trivial, and can in many cases be more challenging than solving the original problem.
Instead of requiring a speed limit, quantum annealing repeats the transition (or the annealing) over and over again.
After it has collected a number of samples, we can use the spin configuration with the lowest energy as the solution, although there is no guarantee that this is the ground state.
As we can see from the diagram below, quantum annealing also has a different software stack than gate-model quantum computers:
The difference is that instead of using a quantum circuit, we encode the problem into the classical Ising model.
Quantum annealers also suffer from limited connectivity, so next we have to find a graph minor embedding - this combines several physical qubits into a single logical qubit.
Finding a graph minor embedding is itself an NP-hard problem, so most quantum annealers use probabilistic heuristics to find one.
Once we have an embedding, quantum annealing provides an interesting approach to the optimization problem.
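To give a feel for the target representation (a toy sketch with made-up couplings, not an example from the guide), we can write down a tiny classical Ising energy function and find the lowest-energy spin configuration by brute force — the annealer's job is to find this configuration for problems far too large to enumerate:

```python
import itertools

# Toy Ising problem: E(s) = sum_ij J_ij s_i s_j + sum_i h_i s_i, with s_i in {-1, +1}
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.75}   # couplings (made up for illustration)
h = {0: 0.1, 1: -0.2, 2: 0.3}                    # local fields

def energy(spins):
    e = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    e += sum(hi * spins[i] for i, hi in h.items())
    return e

# Brute-force search over all 2^3 spin configurations
best = min(itertools.product([-1, 1], repeat=3), key=energy)
print("Lowest-energy configuration:", best, "energy:", energy(best))
```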
The most prominent company focused on quantum annealing is D-Wave Systems, which holds the record for the number of qubits in a superconducting quantum annealer: 2048.
4. Implementation of Quantum Architectures
Now that we've looked at a few of the common paradigms, let's discuss a few ways that these quantum computers are actually built.
Superconducting Architectures
The most popular approach is with superconducting architectures, which use silicon-based technology.
This is very useful because you can create them with the same facilities that you would use for building a digital circuit for classical computers.
The main difference is that you cool the environment down to 10 millikelvin and then use microwave pulses to control the gates and the interaction between qubits.
One of the issues with silicon-based technology is that it is fundamentally two-dimensional.
This means that the moment that we get to 5 qubits, we can't have full connectivity between every pair of qubits.
This is where the quantum compiler plays an important role.
The compiler inserts swap gates, which exchange the states of two qubits so that qubits that are not physically adjacent can still interact.
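As a small illustration (not from the guide), the SWAP operation can be decomposed into three CNOT gates; the sketch below verifies this identity with NumPy:

```python
import numpy as np

# Two-qubit SWAP gate: exchanges the states of qubits 0 and 1
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# CNOT with qubit 0 as control, qubit 1 as target, and vice versa
CNOT_01 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=complex)
CNOT_10 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]], dtype=complex)

# SWAP = CNOT_01 · CNOT_10 · CNOT_01
assert np.allclose(SWAP, CNOT_01 @ CNOT_10 @ CNOT_01)
print("SWAP decomposes into three CNOTs")
```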
Trapped Ions
Another way we can implement quantum computation is with trapped ions.
We take individual ions, which are charged atomic particles, and line them up and trap them in an electromagnetic field.
We then use laser pulses to control their interaction.
One of the advantages of this is that they have a long coherence time.
One of the disadvantages is that the speed of control is much slower than superconducting architectures.
Scalability of the trapped ion implementation, however, is still unclear.
We currently have systems implemented with ~70 qubits, but it's unclear if it will be able to scale to thousands of qubits.
Photonic Systems
Finally, we have photonic systems which use light.
For example, you can use the polarization of light to encode qubit states - say left-circular and right-circular polarization, plus superpositions of the two.
The advantage of photonic systems is that they can operate at room temperature.
One of the disadvantages is that photonic circuits suffer from photon loss, and a lost photon cannot be recovered.
It's still unclear which one of the architectures will lead to large-scale quantum computing.
5. Variational Circuits & the Quantum Approximate Optimization Algorithm
We've now discussed gate-model quantum computers, quantum annealers, and how to implement them.
We also introduced how quantum computers have imperfections, and these prevent us from running many of the most famous quantum algorithms.
Variational Circuits
In the last few years, however, we have seen a new breed of quantum algorithms develop: variational circuits.
Variational circuits are designed specifically for our current and near-term quantum computers, which are noisy and imperfect.
As discussed in our article on Quantum Systems, current architectures are always interacting with an environment, and this causes things like decoherence.
What we do with variational circuits is run a short sequence of calculations on the Quantum Processing Unit (QPU) and then extract the results to a CPU.
The quantum circuit is parameterized, which allows us to go back, adjust the parameters of the quantum processor, and run another short calculation.
This becomes an iterative loop between QPU and CPU that improves over time.
The Quantum Approximate Optimization Algorithm
One of the most famous variational circuits is the Quantum Approximate Optimization Algorithm (QAOA), which draws inspiration from quantum annealing.
QAOA tries to approximate the adiabatic pathway, but on a gate-model quantum computer.
Recall that we're calculating the ground state of some simple Hamiltonian, and then we follow the adiabatic pathway and can find the ground state for the system we're interested in.
What we do with QAOA is break the adiabatic pathway up into discrete steps, and parameterize the circuit so that adding more steps yields a more and more accurate approximation of the transition.
The transition we want to approximate is evolution under a time-dependent Hamiltonian, and this discretization into a sequence of short evolutions is known as Trotterization.
In the end, you just read the ground state the same way you would with a quantum annealer.
So with QAOA we can formulate an optimization problem using the parameters and find the ground state of the system we're interested in.
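Below is a minimal self-contained sketch of the idea (an illustration under simplifying assumptions — it simulates the state vector directly with NumPy rather than using a quantum SDK, and the two-qubit cost Hamiltonian is made up). It alternates p layers of cost and mixer evolution, exactly the discretized pathway described above, and lets a classical optimizer tune the angles — this is the QPU/CPU loop in miniature:

```python
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy cost Hamiltonian (diagonal): an Ising coupling between two qubits
Hc = -np.kron(sz, sz)
# Mixer Hamiltonian: transverse field on both qubits
Hm = -(np.kron(sx, I2) + np.kron(I2, sx))

def evolve(H, angle):
    # exp(-i * angle * H) via eigendecomposition (fine for tiny matrices)
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * angle * w)) @ v.conj().T

def qaoa_energy(params, p):
    gammas, betas = params[:p], params[p:]
    psi = np.ones(4, dtype=complex) / 2.0          # |++> : uniform superposition
    for gamma, beta in zip(gammas, betas):
        psi = evolve(Hc, gamma) @ psi              # cost layer
        psi = evolve(Hm, beta) @ psi               # mixer layer
    return np.real(psi.conj() @ Hc @ psi)          # expected cost energy

p = 2
result = minimize(qaoa_energy, x0=np.full(2 * p, 0.1), args=(p,), method="Nelder-Mead")
print("Optimized energy:", result.fun, "(exact ground-state energy: -1)")
```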
If you want to learn more about QAOA check out pyQAOA from Rigetti, which is:
a Python module for running the Quantum Approximate Optimization Algorithm on an instance of a quantum abstract machine.
6. Summary of Quantum Computation
In this guide, we discussed several paradigms for quantum computation, including:
• Gate-Model Quantum Computing
• Adiabatic Quantum Computing
• Quantum Annealing
Most academic and commercial efforts focus on the quantum gate model, but quantum annealing also plays an important role in industry today.
We then looked at the physical implementation of quantum computation. It is still unclear which architecture will prove scalable, but a few notable implementations include:
• Superconducting architectures
• Trapped ions
• Photonic systems
Finally, we looked at a family of algorithms called variational circuits, which are designed for the imperfect and noisy quantum computers that we have today.
Variational circuits are classical-quantum hybrid algorithms, as they create an iterative loop between QPUs and CPUs.
One of the most famous variational circuits is the Quantum Approximate Optimization Algorithm, which draws inspiration from quantum annealing.
In the next article in this series on quantum machine learning, we're going to dive into more detail about classical-quantum hybrid algorithms.
Quantum Interference Effects in Hořava-Lifshitz Gravity
The relativistic quantum interference effects in the spacetime of a slowly rotating object in Hořava-Lifshitz gravity, namely the Sagnac effect and the phase shift of interfering particles in a neutron interferometer, are derived. We consider the extension of the Kehagias-Sfetsos (KS) solution in Hořava-Lifshitz gravity to a slowly rotating gravitating object. Using the covariant Klein-Gordon equation in the nonrelativistic approximation, it is shown that the phase shift in the interference of particles includes a gravitational potential term with the KS parameter. It is found that in the case of the Sagnac effect, the influence of the KS parameter becomes important because the angular velocity of the locally non-rotating observer is increased in Hořava gravity. From the results of recent experiments we have obtained a lower limit for the KS coupling constant. Finally, as an example, we apply the obtained results to the calculation of the modification of UCN (ultra-cold neutron) energy levels in the gravitational field of a slowly rotating gravitating object in Hořava-Lifshitz gravity.
Keywords: Hořava gravity; Neutron interferometer; Sagnac effect.
Received (Day Month Year)
Revised (Day Month Year)
PACS Nos.: 04.50.-h, 04.40.Dg, 97.60.Gb.
1 Introduction
One of the biggest difficulties in attempts toward a theory of quantum gravity is the fact that general relativity is non-renormalizable. This would imply a loss of theoretical control and predictability at high energies. In January 2009, Petr Hořava proposed a new theory of quantum gravity with dynamical critical exponent equal to z = 3 in the UV (ultra-violet) in order to evade this difficulty by invoking a Lifshitz-type anisotropic scaling at high energy. This theory, often called Hořava-Lifshitz gravity, is power-counting renormalizable and is expected to be renormalizable and unitary.
Having a new candidate theory for quantum gravity, it is important to investigate its astrophysical and cosmological implications. Thus the Hořava theory has received a great deal of attention and since its formulation various properties and characteristics have been extensively analyzed, ranging from formal developments , cosmology , dark energy , dark matter , and spherically symmetric or axial symmetric solutions .
In the paper Ref. ? the possibility of observationally testing Hořava gravity at the scale of the Solar System has been considered, using the classical tests of general relativity (perihelion precession of the planet Mercury, deflection of light by the Sun, and the radar echo delay) for the Kehagias-Sfetsos asymptotically flat black hole solution of Hořava-Lifshitz gravity. The stability of the Einstein static universe, obtained by considering linear homogeneous perturbations in the context of an infra-red (IR) modification of Hořava gravity, has been studied in the paper . In the paper Ref. ? the author considered potentially observable properties of black holes in the deformed Hořava-Lifshitz gravity with Minkowski vacuum: gravitational lensing and quasinormal modes.
The role of the tidal charge in the orbital resonance model of quasiperiodic oscillations in black hole systems and in neutron star binary systems have been studied intensively. The motion of test particles around black hole immersed in uniform magnetic field in Hořava gravity and influence of parameter on radii of innermost stable circular orbit have been studied in papers Ref. ?, ?.
The experiment to test the effect of the gravitational field of the Earth on the phase shift in a neutron interferometer was first proposed by Overhauser and Colella . This experiment was then successfully performed by Colella, Overhauser and Werner . After that, other effects related to the phase shift of interfering particles were found, among them the effect due to the rotation of the Earth , which is the quantum mechanical analog of the Sagnac effect, and the Lense-Thirring effect, which is a general relativistic effect due to the dragging of reference frames. We do not consider the neutron spin in this paper.
In the paper Ref. ? a unified way of study of the effects of phase shift in neutron interferometer was proposed. Here we extend this formalism to the case of slowly rotating stationary gravitational fields in the framework of Hořava-Lifshitz gravity in order to derive such phase shift due to either existence or nonexistence of the KS parameter .
The Sagnac effect is well known and has been thoroughly studied in the literature, see e.g. the paper Ref. ?. It refers to the fringe shift that arises between light or matter beams counter-propagating along a closed path in a rotating interferometer. This phase shift can be interpreted as a time delay between the two beams and, as can be seen below, does not include the mass or energy of the particles. That is why we may consider the Sagnac effect a "universal" effect of the geometry of space-time, independent of the physical nature of the interfering beams. Here we extend the recent results obtained in the papers Ref. ?, ?, where a way of calculating this effect in analogy with the Aharonov-Bohm effect was shown, to the case of a slowly rotating compact object in Hořava-Lifshitz gravity.
In this paper we study quantum interference effects, in particular the Sagnac effect and the phase shift in a neutron interferometer, in the Hořava model. The paper is organized as follows. In section 2, we start from the covariant Klein-Gordon equation in the Hořava model and consider the terms of the phase difference of the wave function. Recently the GRANIT experiment verified the quantization of the energy levels of ultra-cold neutrons (UCN) in the Earth's gravity field, and new, more precise experiments are planned. Experiments with UCN have high accuracy, and that is the reason to look for verification of gravitational effects in such experiments. In this section, as an example, we investigate the modification of UCN energy levels caused by the existence of the KS (Kehagias and Sfetsos) parameter . In section 3 we consider interference in a Mach-Zehnder interferometer and in section 4 we study the Sagnac effect in the background spacetime of a slowly rotating object in Hořava gravity.
Throughout, we use a space-like signature (-,+,+,+) and a geometrized unit system G = c = 1 (however, for those expressions with an astrophysical application we have written the speed of light explicitly). Greek indices are taken to run from 0 to 3 and Latin indices from 1 to 3; covariant derivatives are denoted with a semicolon and partial derivatives with a comma.
2 The Phase shift
The four-dimensional metric of the spherical-symmetric spacetime written in the ADM formalism has the following form:
where , are the metric functions to be defined.
The IR (Infrared) - modified Horava action is given by (see for more details Ref. ?, ?, ?, ?, ?, ?)
where and are constant parameters, the Cotton tensor is defined as
is the three-dimensional curvature tensor, and the extrinsic curvature is defined as
where dot denotes a derivative with respect to .
Imposing the case , which reduces to the action in IR limit, one can obtain the Kehagias and Sfetsos (KS) asymptotically flat solution for the metric outside the gravitating spherical symmetric object in Horava gravity:
where is the total mass, is the KS parameter and the constant is chosen.
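The explicit formulas were lost in the extraction above, so purely for orientation, here is a sketch of the standard Kehagias-Sfetsos lapse function, f(r) = 1 + ω r^2 (1 − √(1 + 4M/(ω r^3))) in geometrized units, which reduces to the Schwarzschild form 1 − 2M/r for large ω r^3 / M; the parameter values in the sketch are illustrative only and are not taken from this paper:

```python
import numpy as np

def f_ks(r, M=1.0, omega=10.0):
    """Kehagias-Sfetsos lapse function f(r) in geometrized units (G = c = 1).

    M:     total mass of the source
    omega: KS parameter; f(r) approaches the Schwarzschild value as omega grows
    """
    return 1.0 + omega * r**2 * (1.0 - np.sqrt(1.0 + 4.0 * M / (omega * r**3)))

def f_schwarzschild(r, M=1.0):
    return 1.0 - 2.0 * M / r

print("r          f_KS        f_Schw")
for ri in np.linspace(3.0, 20.0, 5):
    print(f"{ri:8.2f} {f_ks(ri):10.6f} {f_schwarzschild(ri):10.6f}")
```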
Up to the second derivative terms in the action, one can easily find the known topological rotating solutions given in Ref. ?, ?. This metric in the slow rotation limit has the form:
here is the specific angular momentum of the gravitating object.
Using the Klein-Gordon equation
for particles with mass one can define the wave function of interfering particles as
where is the nonrelativistic wave function.
In the present situation, both parameters and are sufficiently small and their higher order terms can be neglected. Therefore, to the first order in , and neglecting the terms of , the Klein-Gordon equation in Horava-Lifshitz gravity becomes
where we have used the following notations:
which correspond to the square of the total orbital angular momentum and component of the orbital angular momentum operators of the particle with respect to the center of the Earth, respectively.
After the coordinate transformation , where is the angular velocity of the Earth, we obtain the Schrödinger equation for an observer fixed on the Earth in the following form:
is the Hamiltonian for a freely propagating particle, is the Horava-Lifshitz gravitational potential energy, is concerned to the rotation, is related to the effect of dragging of the inertial frames. The phase shift terms due to and are
where represents the position vector of the instrument from the center of the Earth, , is the area of the interferometer, and is the unit normal vector. If we assume that the Earth is a sphere of radius with uniform density then
if R is perpendicular and parallel to , respectively. Here is the free fall acceleration of Earth.
Now one can easily calculate the phase shift due to the gravitational potential. For the purpose of the present discussion, the quasi-classical approximation is valid and the phase shift
is given by the integration along a classical trajectory. Here is the area of interferometer, is de Broglie wavelength (see the Fig. 1).
Fig. 1: Schematic illustration of alternate paths separated in the vertical direction in a neutron interferometer.
The recently published paper Ref. ? describes the precise measurement of the gravitational redshift by the interference of matter waves in the gravitational field of the Earth. Comparing their experimental results with our theoretical predictions, one can easily obtain a lower limit on the value of the KS parameter .
Astrophysically it is interesting to apply the obtained result for the Hamiltonian of the particle, moving around rotating gravitating object in Hořava gravity, to the calculation of energy level of ultra-cold neutrons (UCN) (as it was done for slowly rotating space-times in the papers Ref. ?, ?). The effect of the angular momentum perturbation of the Hamiltonian on the energy levels of UCN was studied in and subsequent papers. Our purpose is to generalize this correction to the case of the gravitating object (the Earth in particular case) in Hořava model. Denote as the unperturbed non-relativistic stationary state of the 2- spinor (describing UCN) in the field of the rotating gravitating object in Hořava gravity. Then we have
is the Laplacian in the spherical coordinates. By adopting new Cartesian coordinates within and axis being local vertical, when the stationary state is assumed to have the form
one can easily derive from (19) that
where the following notation
has been used.
Following the papers Ref. ?, ?, one can compute the "KS parameter" modification of the energy level as the first-order perturbation:
Assume (where is the latitude angle) and to be equal to 1, that is . Assuming now one can extend (24) as
We remember that is the average value of for the stationary state . For the further calculation we use formula for from
Now one can easily estimate the relative "KS parameter" modification of the energy level of the neutrons as
We numerically estimate the obtained modification using the typical parameters for the Earth: , , , , and ,
From the obtained result (28) one can see that the influence of the parameter will be stronger in the vicinity of compact gravitating objects with small . Recent experiments measuring the energy levels of UCN have an error , which does not allow one to detect the influence of the parameter on the UCN energy levels. Further improvements of the experiments would give either an exact value or a lower limit for the above-mentioned parameter.
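For orientation, the unperturbed (Newtonian) UCN energy levels that the KS term corrects are the well-known "quantum bouncer" levels E_n = -a_n (m g^2 ħ^2 / 2)^(1/3), where a_n are the (negative) zeros of the Airy function Ai. The sketch below evaluates them numerically and reproduces the familiar peV scale probed by GRANIT-type experiments; this is the standard baseline computation only, not the Hořava correction itself:

```python
from scipy.special import ai_zeros

hbar = 1.054571817e-34     # J s
m_n = 1.67492749804e-27    # neutron mass, kg
g = 9.80665                # m/s^2
eV = 1.602176634e-19       # J

# Energy scale of a neutron bouncing on a mirror in the Earth's gravity field
e0 = (m_n * g**2 * hbar**2 / 2.0) ** (1.0 / 3.0)

# First few zeros of the Airy function Ai(x) (they are negative numbers)
airy_zeros = ai_zeros(4)[0]
for n, a_n in enumerate(airy_zeros, start=1):
    E_n = -a_n * e0
    print(f"E_{n} = {E_n / eV * 1e12:.3f} peV")
```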
3 The interference in a Mach-Zehnder-type interferometer
The components of the tetrad frame for the proper observer for metric (6) are
and the acceleration of the Killing trajectories is
and we obtain for the nonvanishing component of the acceleration:
The nonvanishing orthonormal (”hatted”) components of rotation tensor of the stationary congruence in the slowly rotating Hořava-Lifshitz gravity are given by
The vector potential of the electromagnetic field takes the simple form in the Lorentz gauge in the spacetime (6). Here the integration constant , where the gravitational source is immersed in a uniform magnetic field parallel to its axis of rotation (properties of black holes immersed in an external magnetic field have been studied, for example, in Ref. ?, ?, ?, ?, ?, ?), and the other integration constant can be calculated from the asymptotic properties of the spacetime (6) at infinity:
One can write the total energy of the particle in the weak field approximation in the following form:
where is electric charge of the particle. This is interpreted as total conserved energy consisting of gravitationally modified kinetic and rest energy , a modified electrostatic energy .
For the further use note the measured components of the electromagnetic field, which are the electric and magnetic fields:
where is the field tensor, is the pseudo-tensorial expression for the Levi-Civita symbol , .
Now one can obtain the total phase shift as
where , is the angle of the baseline with respect to and is the tilt angle. Therefore one can independently vary the angles and , and extract from the phase shift measurements the following combinations of terms:
Using the above-obtained results one can estimate a lower limit for the KS parameter . Using the results of the Earth-based atom interferometry experiments would give us an estimate .
4 The Sagnac effect in the Horava gravity
It is well known that the Sagnac effect for counter-propagating beams of particles on a round trip in an interferometer rotating in a flat space-time may be obtained by a formal analogy with the Aharonov-Bohm effect. Here we study the interference process of matter or light beams in the spacetime of a slowly rotating compact gravitating object in Hořava gravity in terms of the Aharonov-Bohm effect . The phase shift
is detected at uniformly rotating interferometer and the time difference between the propagation times of the co-rotating and counter-rotating beams is equal to
In the expressions (45) – (46) indicates the mass (or the energy) of the particle of the interfering beams, is the gravito-magnetic vector potential which is obtained from the expression
and is the unit four-velocity of particles:
From (6) and coordinate transformation , where one can see that the unit vector field along the trajectories will be
where we have used the following notation
Now inserting the components of into the equation (47) one can obtain
Integrating vector potential as it is shown in equations (45) and (46) one can get the following expressions for and (here we returned to the physical units):
where is the angular velocity of Lense-Thirring.
Following the paper Ref. ?, one can find a critical angular velocity
which corresponds to zero time delay ; this is the angular velocity of zero angular momentum observers (ZAMO).
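For reference, in the flat-spacetime limit (ignoring the Hořava and Lense-Thirring corrections discussed above) the Sagnac time delay reduces to the classic expression Δt = 4AΩ/c^2 for an interferometer of enclosed area A rotating with angular velocity Ω; the quick sketch below evaluates it for illustrative, made-up numbers:

```python
c = 299_792_458.0            # speed of light, m/s

def sagnac_time_delay(area, omega):
    """Flat-spacetime Sagnac time delay between counter-propagating beams.

    area:  area enclosed by the interferometer loop, m^2
    omega: angular velocity of the interferometer (projected on the loop normal), rad/s
    """
    return 4.0 * area * omega / c**2

# Illustrative numbers: a 1 m^2 loop co-rotating with the Earth
omega_earth = 7.2921159e-5   # rad/s
print(f"Sagnac delay: {sagnac_time_delay(1.0, omega_earth):.3e} s")
```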
5 Conclusion
We have studied quantum interference effects, including the phase shift and the time delay in the Sagnac effect, in the spacetime of rotating gravitational objects in Hořava gravity and found that they can be affected by the KS parameter . We have derived an additional term for the phase shift in a neutron interferometer due to the presence of the KS parameter and studied the feasibility of its detection with the help of a "figure-eight" interferometer. We have also investigated the application of the obtained results to the calculation of the energy levels of UCN and found the modifications to be rather small for the Earth but more relevant for compact astrophysical objects. The results show that the phase shift for a Mach-Zehnder interferometer in the spacetime of a gravitational object in Hořava gravity is influenced by the KS parameter . The obtained results can be further used in laboratory experiments to detect interference effects related to the phenomena of Hořava gravity. Recently the authors of the paper Ref. ?, using Solar system tests, obtained values for the parameter as follows: (from the perihelion precession of Mercury), (light deflection by the Sun), (radar echo delay). Here we have estimated a lower limit for the parameter as , using the experimental results of the recent paper Ref. ? on the precise measurement of the gravitational redshift by the interference of matter waves.
The work was supported by the UzFFR (projects 5-08 and 29-08) and projects FA-F2-F079 and FA-F2-F061 of the UzAS. This work is partially supported by the ICTP through the OEA-PRJ-29 project. Authors gratefully thank Viktoriya Morozova for useful discussions. AB acknowledges the TWAS for associateship grant. AA and AB thank the IUCAA for the hospitality where the research has been completed.
• 1. P. Hořava, JHEP 0903, 020 (2009).
• 2. P. Hořava, Phys. Rev. D 79, 084008 (2009).
• 3. M. Visser, Phys. Rev D 80, 025011 (2009).
• 4. T. P. Sotiriou, M. Visser and S. Weinfurtner, Phys. Rev. Lett. 102, 251601 (2009).
• 5. P. Hořava, Phys. Rev. Lett. 102, 161301 (2009).
• 6. R. G. Cai, Y. Liu and Y. W. Sun, JHEP 0906, 010 (2009).
• 7. D. Orlando and S. Reffert, Class. Quant. Grav. 26, 155021 (2009).
• 8. T. P. Sotiriou, M. Visser and S. Weinfurtner, JHEP 0910, 033 (2009).
• 9. G. Calcagni, Phys. Rev. D, 81, 044006 (2010).
• 10. C. Germani, A. Kehagias and K. Sfetsos, JHEP, Issue 09, 060 (2009).
• 11. T. Takashi and J. Soda, Phys. Rev. Lett. 102, 231301 (2009).
• 12. G. Calcagni, JHEP 09, 112 (2009).
• 13. S. Kalyana Rama, Phys. Rev. D 79, 124031 (2009).
• 14. A. Wang and R. Maartens, Phys. Rev. D, 81, 024009 (2010).
• 15. C. G. Boehmer, L. Hollenstein, F. S. Lobo and S. S. Seahra, [arXiv:gr-qc/1001.1266].
• 16. S. Mukohyama, arXiv:1007.5199 (2010).
• 17. E. N. Saridakis, Eur. Phys. J. C 67 229 (2010).
• 18. M. I. Park, JCAP 1001, 001 (2010).
• 19. S. Mukohyama, Physical Review D 80, 064005 (2009).
• 20. T. Harko, Z. Kovacs and F. S. N. Lobo, arXiv:0908.2874 (2009).
• 21. A. Ghodsi, E. Hatefi, Physical Review D, 81, 044016 (2010).
• 22. R. G. Cai, L. M. Cao and N. Ohta, Phys. Rev. D 80, 024003 (2009).
• 23. R. A. Konoplya, Phys. Lett. B 679, 499 (2009).
• 24. S. Chen and J. Jing, Phys. Rev. D 80, 024036 (2009).
• 25. A. Castillo and A. Larranaga, [arXiv:gr-qc/0906.4380].
• 26. D. Y. Chen, H. Yang and X. T. Zu, Phys. Let B 681, 463 (2009).
• 27. F. S. N. Lobo, T. Harko and Z. Kovacs, arXiv:1001.3517v1 [gr-qc].
• 28. C.G. Böhmer and F.S.N. Lobo, arXiv:0909.3986v2 [gr-qc].
• 29. R. A. Konoplya, Phys. Lett. B 679, 499 (2009).
• 30. Z. Stuchlik and A. Kotrlová, Gen. Rel. Grav., 41, 1305 (2009).
• 31. A. Kotrlová, Z. Stuchlik and G. Török, Class. Quantum Grav. 25, 225016 (2008).
• 32. A.A. Abdujabbarov, A.A.Hakimov and B.J. Ahmedov, submitted (2010).
• 33. B. Gwak and B.-H. Lee, arXiv:1005.2805v2 (2010).
• 34. A.W. Overhauser and R.Colella, Phys. Rev. Lett. 33, 1237 (1974).
• 35. R.Colella, A.W. Overhauser and S.A. Werner, Phys. Rev. Lett. 34, 1472 (1975).
• 36. L.A. Page, Phys. Rev. Lett. 35, 543 (1975).
• 37. S.A. Werner, J.L. Staudenmann and R.Colella, Phys. Rev. Lett. 42, 1103 (1979).
• 38. B.Mashhoon, F.W. Hehl and D.S. Theiss, Gen. Rel. Grav. 16, 711 (1984).
• 39. J. Kuroiwa, M.Kasai and T. Futamase, Phys. Lett. A 182, 330 (1993).
• 40. G. Rizzi and M.L. Ruggiero, gr-qc/0305084 (2004).
• 41. G.Rizzi and M.L. Ruggiero, Gen. Rel. Grav. 35, 1743 (2003).
• 42. M.L. Ruggiero, Gen. Rel. Grav. 37, 1845 (2005).
• 43. V. V. Nesvizhevsky et. al., Phys. Rev. D 67, 102002 (2003).
• 44. A. Kehagias and K. Sfetsos, Phys. Lett. B 678, 123 (2009).
• 45. D. Klemm, V. Moretti and L. Vanzo, Phys. Rev. D 57, 6127 (1998).
• 46. H. Muller, A. Peters, S. Chu, Nature 463, (2010) doi: 10.1038/nature08776.
• 47. M. Arminjon, Phys. Lett. A 372, 2196 (2008).
• 48. V.S. Morozova and B.J. Ahmedov, Int. J. Mod. Phys. D 18, 107 (2009).
• 49. C. Plonka-Spehr, A. Kraft, P. Iaydjiev, J. Klepp, V.V. Nesvizhevsky, P. Geltenbort and Th. Lauer, Nucl. Instr. M. Phys. R. A 618, 239 (2010).
• 50. V. Kagramanova, J. Kunz and C. Lämmerzahl, Class. Quant. Grav. 25, 105023 (2008).
• 51. A. A. Abdujabbarov, B. J. Ahmedov and V. G. Kagramanova, Gen. Rel. Grav. 40, 2515(2008).
• 52. A.A. Abdujabbarov, B.J. Ahmedov, Phys. Rev. D, 81, 044022 (2010).
• 53. R. A. Konoplya, Phys. Lett. B 644, 219 (2007).
• 54. R. A. Konoplya, Phys. Rev. D 74, 124015 (2006).
• 55. R.M. Wald, Phys. Rev. D, 10, 1680 (1974).
• 56. S. Dimopoulos, P.W. Graham, J.M. Hogan and M.E. Kasevich. Phys. Rev. D, 78, 042003 (2008).
• 57. M.L. Ruggiero, Gen. Rel. Grav. 37, 1845 (2005).
• 58. C.G. Böhmer, T. Harko and F. S. N. Lobo, Class. Quant. Grav. 25, 045015 (2008).
• 59. S. Jalalzadeh., M. Mehrnia and H. R. Sepangi, Class. Quant. Grav. 26, 155007 (2009).
Quantum Zeno effect
The quantum Zeno effect (also known as the Turing paradox) is a feature of quantum-mechanical systems allowing a particle's time evolution to be arrested by measuring it frequently enough with respect to some chosen measurement setting.[1]
Sometimes this effect is interpreted as "a system cannot change while you are watching it".[2] One can "freeze" the evolution of the system by measuring it frequently enough in its known initial state. The meaning of the term has since expanded, leading to a more technical definition, in which time evolution can be suppressed not only by measurement: the quantum Zeno effect is the suppression of unitary time evolution in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, among other factors.[3] As an outgrowth of study of the quantum Zeno effect, it has become clear that applying a series of sufficiently strong and fast pulses with appropriate symmetry can also decouple a system from its decohering environment.[4]
The name comes from Zeno's arrow paradox, which states that because an arrow in flight is not seen to move during any single instant, it cannot possibly be moving at all.[note 1] The first rigorous and general derivation of the quantum Zeno effect was presented in 1974 by Degasperis, Fonda, and Ghirardi,[5] although it had previously been described by Alan Turing.[6] The comparison with Zeno's paradox is due to a 1977 article by George Sudarshan and Baidyanath Misra.[1]
According to the reduction postulate, each measurement causes the wavefunction to collapse to an eigenstate of the measurement basis. In the context of this effect, an observation can simply be the absorption of a particle, without the need of an observer in any conventional sense. However, there is controversy over the interpretation of the effect, sometimes referred to as the "measurement problem" in traversing the interface between microscopic and macroscopic objects.[7][8]
Another crucial problem related to the effect is strictly connected to the time–energy indeterminacy relation (part of the indeterminacy principle). If one wants to make the measurement process more and more frequent, one has to correspondingly decrease the time duration of the measurement itself. But the request that the measurement last only a very short time implies that the energy spread of the state in which reduction occurs becomes increasingly large. However, the deviations from the exponential decay law for small times is crucially related to the inverse of the energy spread, so that the region in which the deviations are appreciable shrinks when one makes the measurement process duration shorter and shorter. An explicit evaluation of these two competing requests shows that it is inappropriate, without taking into account this basic fact, to deal with the actual occurrence and emergence of Zeno's effect.[9]
Closely related (and sometimes not distinguished from the quantum Zeno effect) is the watchdog effect, in which the time evolution of a system is affected by its continuous coupling to the environment.[10][11][12][13]
Unstable quantum systems are predicted to exhibit a short-time deviation from the exponential decay law.[14][15] This universal phenomenon has led to the prediction that frequent measurements during this nonexponential period could inhibit decay of the system, one form of the quantum Zeno effect. Subsequently, it was predicted that measurements applied more slowly could also enhance decay rates, a phenomenon known as the quantum anti-Zeno effect.[16]
In quantum mechanics, the interaction mentioned is called "measurement" because its result can be interpreted in terms of classical mechanics. Frequent measurement prohibits the transition. It can be a transition of a particle from one half-space to another (which could be used for an atomic mirror in an atomic nanoscope[17]) as in the time-of-arrival problem,[18][19] a transition of a photon in a waveguide from one mode to another, and it can be a transition of an atom from one quantum state to another. It can be a transition from the subspace without decoherent loss of a qubit to a state with a qubit lost in a quantum computer.[20][21] In this sense, for the qubit correction, it is sufficient to determine whether the decoherence has already occurred or not. All these can be considered as applications of the Zeno effect.[22] By its nature, the effect appears only in systems with distinguishable quantum states, and hence is inapplicable to classical phenomena and macroscopic bodies.
The mathematician Robin Gandy recalled Turing's formulation of the quantum Zeno effect in a letter to fellow mathematician Max Newman, shortly after Turing's death:
[I]t is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion. Alan and I tackled one or two theoretical physicists with this, and they rather pooh-poohed it by saying that continual observation is not possible. But there is nothing in the standard books (e.g., Dirac's) to this effect, so that at least the paradox shows up an inadequacy of Quantum Theory as usually presented.
— Quoted by Andrew Hodges in Mathematical Logic, R. O. Gandy and C. E. M. Yates, eds. (Elsevier, 2001), p. 267.
As a result of Turing's suggestion, the quantum Zeno effect is also sometimes known as the Turing paradox. The idea is implicit in the early work of John von Neumann on the mathematical foundations of quantum mechanics, and in particular the rule sometimes called the reduction postulate.[23] It was later shown that the quantum Zeno effect of a single system is equivalent to the indetermination of the quantum state of a single system.[24][25][26]
Various realizations and general definition[edit]
The treatment of the Zeno effect as a paradox is not limited to the processes of quantum decay. In general, the term Zeno effect is applied to various transitions, and sometimes these transitions may be very different from a mere "decay" (whether exponential or non-exponential).
One realization refers to the observation of an object (Zeno's arrow, or any quantum particle) as it leaves some region of space. In the 20th century, the trapping (confinement) of a particle in some region by its observation outside the region was considered as nonsensical, indicating some non-completeness of quantum mechanics.[27] Even as late as 2001, confinement by absorption was considered as a paradox.[28] Later, similar effects of the suppression of Raman scattering was considered an expected effect,[29][30][31] not a paradox at all. The absorption of a photon at some wavelength, the release of a photon (for example one that has escaped from some mode of a fiber), or even the relaxation of a particle as it enters some region, are all processes that can be interpreted as measurement. Such a measurement suppresses the transition, and is called the Zeno effect in the scientific literature.
In order to cover all of these phenomena (including the original effect of suppression of quantum decay), the Zeno effect can be defined as a class of phenomena in which some transition is suppressed by an interaction – one that allows the interpretation of the resulting state in the terms 'transition did not yet happen' and 'transition has already occurred', or 'The proposition that the evolution of a quantum system is halted' if the state of the system is continuously measured by a macroscopic device to check whether the system is still in its initial state.[32]
Periodic measurement of a quantum system[edit]
Consider a system in a state A, which is an eigenstate of some measurement operator. Say the system under free time evolution will decay with a certain probability into state B. If measurements are made periodically, with some finite interval between each one, at each measurement the wave function collapses to an eigenstate of the measurement operator. Between the measurements, the system evolves away from this eigenstate into a superposition of the states A and B. When the superposition state is measured, it will again collapse, either back into state A as in the first measurement, or away into state B. However, its probability of collapsing into state B after a very short time t is proportional to t², since probabilities are proportional to squared amplitudes, and amplitudes grow linearly in time. Thus, in the limit of a large number of short intervals, with a measurement at the end of every interval, the probability of making the transition to B goes to zero.
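The argument above can be checked numerically. The following sketch (an illustration, not part of the article) considers a two-level system undergoing Rabi-type evolution for a fixed total time, interrupted by N projective measurements onto the initial state; the survival probability approaches 1 as N grows:

```python
import numpy as np

def survival_probability(n_measurements, total_time=np.pi, omega=1.0):
    """Probability of still finding the system in |0> after n projective measurements."""
    dt = total_time / n_measurements
    # Probability of staying in |0> over one interval of Rabi evolution:
    # |<0| exp(-i * omega * sigma_x * dt / 2) |0>|^2 = cos^2(omega * dt / 2)
    p_stay = np.cos(omega * dt / 2.0) ** 2
    # Each measurement projects back onto |0> with probability p_stay (independently)
    return p_stay ** n_measurements

for n in [1, 10, 100, 1000]:
    print(f"N = {n:5d}  survival probability = {survival_probability(n):.4f}")
```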
According to decoherence theory, the collapse of the wave function is not a discrete, instantaneous event. A "measurement" is equivalent to strongly coupling the quantum system to the noisy thermal environment for a brief period of time, and continuous strong coupling is equivalent to frequent "measurement". The time it takes for the wave function to "collapse" is related to the decoherence time of the system when coupled to the environment. The stronger the coupling is, and the shorter the decoherence time, the faster it will collapse. So in the decoherence picture, a perfect implementation of the quantum Zeno effect corresponds to the limit where a quantum system is continuously coupled to the environment, and where that coupling is infinitely strong, and where the "environment" is an infinitely large source of thermal randomness.
Experiments and discussion[edit]
Experimentally, strong suppression of the evolution of a quantum system due to environmental coupling has been observed in a number of microscopic systems.
In 1989, David J. Wineland and his group at NIST[33] observed the quantum Zeno effect for a two-level atomic system that was interrogated during its evolution. Approximately 5,000 9Be+ ions were stored in a cylindrical Penning trap and laser-cooled to below 250 mK (−273 °C; −459 °F). A resonant RF pulse was applied, which, if applied alone, would cause the entire ground-state population to migrate into an excited state. After the pulse was applied, the ions were monitored for photons emitted due to relaxation. The ion trap was then regularly "measured" by applying a sequence of ultraviolet pulses during the RF pulse. As expected, the ultraviolet pulses suppressed the evolution of the system into the excited state. The results were in good agreement with theoretical models. A recent review describes subsequent work in this area.[34]
In 2001, Mark G. Raizen and his group at the University of Texas at Austin observed the quantum Zeno effect for an unstable quantum system,[35] as originally proposed by Sudarshan and Misra.[1] They also observed an anti-Zeno effect. Ultracold sodium atoms were trapped in an accelerating optical lattice, and the loss due to tunneling was measured. The evolution was interrupted by reducing the acceleration, thereby stopping quantum tunneling. The group observed suppression or enhancement of the decay rate, depending on the regime of measurement.
In 2015, Mukund Vengalattore and his group at Cornell University demonstrated a quantum Zeno effect as the modulation of the rate of quantum tunnelling in an ultracold lattice gas by the intensity of light used to image the atoms.[36]
The quantum Zeno effect is used in commercial atomic magnetometers and naturally by birds' magnetic compass sensory mechanism (magnetoreception).[37]
It is still an open question how closely one can approach the limit of an infinite number of interrogations due to the Heisenberg uncertainty involved in shorter measurement times. It has been shown, however, that measurements performed at a finite frequency can yield arbitrarily strong Zeno effects.[38] In 2006, Streed et al. at MIT observed the dependence of the Zeno effect on measurement pulse characteristics.[39]
The interpretation of experiments in terms of the "Zeno effect" helps describe the origin of a phenomenon. Nevertheless, such an interpretation does not bring any principally new features not described with the Schrödinger equation of the quantum system.[40][41]
Even more, the detailed description of experiments with the "Zeno effect", especially at the limit of high frequency of measurements (high efficiency of suppression of transition, or high reflectivity of a ridged mirror) usually do not behave as expected for an idealized measurement.[17]
It was shown that the quantum Zeno effect persists in the many-worlds and relative-states interpretations of quantum mechanics.[42]
See also[edit]
1. ^ The idea depends on the instant of time, a kind of freeze-motion idea that the arrow is "strobed" at each instant and is seemingly stationary, so how can it move in a succession of stationary events?
1. ^ a b c Sudarshan, E. C. G.; Misra, B. (1977). "The Zeno's paradox in quantum theory". Journal of Mathematical Physics. 18 (4): 756–763. Bibcode:1977JMP....18..756M. doi:10.1063/1.523304.
2. ^ https://phys.org/news/2015-10-zeno-effect-verifiedatoms-wont.html.
3. ^ Nakanishi, T.; Yamane, K.; Kitano, M. (2001). "Absorption-free optical control of spin systems: the quantum Zeno effect in optical pumping". Physical Review A. 65 (1): 013404. arXiv:quant-ph/0103034. Bibcode:2002PhRvA..65a3404N. doi:10.1103/PhysRevA.65.013404.
4. ^ Facchi, P.; Lidar, D. A.; Pascazio, S. (2004). "Unification of dynamical decoupling and the quantum Zeno effect". Physical Review A. 69 (3): 032314. arXiv:quant-ph/0303132. Bibcode:2004PhRvA..69c2314F. doi:10.1103/PhysRevA.69.032314.
5. ^ Degasperis, A.; Fonda, L.; Ghirardi, G. C. (1974). "Does the lifetime of an unstable system depend on the measuring apparatus?". Il Nuovo Cimento A. 21 (3): 471–484. Bibcode:1974NCimA..21..471D. doi:10.1007/BF02731351.
6. ^ Hofstadter, D. (2004). Teuscher, C. (ed.). Alan Turing: Life and Legacy of a Great Thinker. Springer. p. 54. ISBN 978-3-540-20020-8.
7. ^ Greenstein, G.; Zajonc, A. (2005). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics. Jones & Bartlett Publishers. p. 237. ISBN 978-0-7637-2470-2.
8. ^ Facchi, P.; Pascazio, S. (2002). "Quantum Zeno subspaces". Physical Review Letters. 89 (8): 080401. arXiv:quant-ph/0201115. Bibcode:2002PhRvL..89h0401F. doi:10.1103/PhysRevLett.89.080401. PMID 12190448.
9. ^ Ghirardi, G. C.; Omero, C.; Rimini, A.; Weber, T. (1979). "Small Time Behaviour of Quantum Nondecay Probability and Zeno's Paradox in Quantum Mechanics". Il Nuovo Cimento A. 52 (4): 421. Bibcode:1979NCimA..52..421G. doi:10.1007/BF02770851.
10. ^ Kraus, K. (1981-08-01). "Measuring processes in quantum mechanics I. Continuous observation and the watchdog effect". Foundations of Physics. 11 (7–8): 547–576. Bibcode:1981FoPh...11..547K. doi:10.1007/bf00726936. ISSN 0015-9018.
11. ^ Belavkin, V.; Staszewski, P. (1992). "Nondemolition observation of a free quantum particle". Phys. Rev. A. 45 (3): 1347–1356. arXiv:quant-ph/0512138. Bibcode:1992PhRvA..45.1347B. doi:10.1103/PhysRevA.45.1347. PMID 9907114.
12. ^ Ghose, P. (1999). Testing Quantum Mechanics on New Ground. Cambridge University Press. p. 114. ISBN 978-0-521-02659-8.
13. ^ Auletta, G. (2000). Foundations and Interpretation of Quantum Mechanics. World Scientific. p. 341. ISBN 978-981-02-4614-3.
14. ^ Khalfin, L. A. (1958). "Contribution to the decay theory of a quasi-stationary state". Soviet Physics JETP. 6: 1053. Bibcode:1958JETP....6.1053K. OSTI 4318804.
15. ^ Raizen, M. G.; Wilkinson, S. R.; Bharucha, C. F.; Fischer, M. C.; Madison, K. W.; Morrow, P. R.; Niu, Q.; Sundaram, B. (1997). "Experimental evidence for non-exponential decay in quantum tunnelling" (PDF). Nature. 387 (6633): 575. Bibcode:1997Natur.387..575W. doi:10.1038/42418. Archived from the original (PDF) on 2010-03-31.
16. ^ Chaudhry, Adam Zaman (2016-07-13). "A general framework for the Quantum Zeno and anti-Zeno effects". Scientific Reports. 6 (1): 29497. arXiv:1604.06561. Bibcode:2016NatSR...629497C. doi:10.1038/srep29497. ISSN 2045-2322. PMC 4942788. PMID 27405268.
17. ^ a b Kouznetsov, D.; Oberst, H.; Neumann, A.; Kuznetsova, Y.; Shimizu, K.; Bisson, J.-F.; Ueda, K.; Brueck, S. R. J. (2006). "Ridged atomic mirrors and atomic nanoscope". Journal of Physics B. 39 (7): 1605–1623. Bibcode:2006JPhB...39.1605K. doi:10.1088/0953-4075/39/7/005.
18. ^ Allcock, J. (1969). "The time of arrival in quantum mechanics I. Formal considerations". Annals of Physics. 53 (2): 253–285. Bibcode:1969AnPhy..53..253A. doi:10.1016/0003-4916(69)90251-6.
19. ^ Echanobe, J.; Del Campo, A.; Muga, J. G. (2008). "Disclosing hidden information in the quantum Zeno effect: Pulsed measurement of the quantum time of arrival". Physical Review A. 77 (3): 032112. arXiv:0712.0670. Bibcode:2008PhRvA..77c2112E. doi:10.1103/PhysRevA.77.032112.
20. ^ Stolze, J.; Suter, D. (2008). Quantum computing: a short course from theory to experiment (2nd ed.). Wiley-VCH. p. 99. ISBN 978-3-527-40787-3.
21. ^ "Quantum computer solves problem, without running". Phys.Org. 22 February 2006. Retrieved 2013-09-21.
22. ^ Franson, J.; Jacobs, B.; Pittman, T. (2006). "Quantum computing using single photons and the Zeno effect". Physical Review A. 70 (6): 062302. arXiv:quant-ph/0408097. Bibcode:2004PhRvA..70f2302F. doi:10.1103/PhysRevA.70.062302.
External links
• Zeno.qcl A computer program written in QCL which demonstrates the Quantum Zeno effect |
7d622fb0e6c52d2e | World Library
Muffin-tin approximation
The muffin-tin approximation is a shape approximation of the potential field in an atomistic environment. It is most commonly employed in quantum mechanical simulations of electronic band structure in solids. The approximation was proposed by John C. Slater. The augmented plane wave (APW) method is one method that uses the muffin-tin approximation; it approximates the energy states of an electron in a crystal lattice. The basic approximation concerns the potential, which is assumed to be spherically symmetric in the muffin-tin region and constant in the interstitial region. Wave functions (the augmented plane waves) are constructed by matching solutions of the Schrödinger equation within each sphere to plane-wave solutions in the interstitial region, and linear combinations of these wave functions are then determined by the variational method.[1][2] Many modern electronic structure methods employ the approximation.[3][4] Among them are the augmented plane wave (APW) method, the linear muffin-tin orbital (LMTO) method and various Green's function methods.[5] One application is found in the variational theory developed by Korringa (1947) and by Kohn and Rostoker (1954), referred to as the KKR method.[6][7][8] This method has been adapted to treat random materials as well, where it is called the KKR coherent potential approximation.[9]
In its simplest form, non-overlapping spheres are centered on the atomic positions. Within these regions, the screened potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
In the interstitial region of constant potential, the single electron wave functions can be expanded in terms of plane waves. In the atom-centered regions, the wave functions can be expanded in terms of spherical harmonics and the eigenfunctions of a radial Schrödinger equation.[2][10] Such use of functions other than plane waves as basis functions is termed the augmented plane-wave approach (of which there are many variations). It allows for an efficient representation of single-particle wave functions in the vicinity of the atomic cores where they can vary rapidly (and where plane waves would be a poor choice on convergence grounds in the absence of a pseudopotential).
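For concreteness, a standard textbook form of the APW basis function implied by this matching (generic notation, not taken from this article) is

$$
\phi_{\mathbf{k}}(\mathbf{r}) =
\begin{cases}
e^{i\mathbf{k}\cdot\mathbf{r}}, & \mathbf{r}\ \text{in the interstitial region},\\
\displaystyle\sum_{\ell m} A_{\ell m}\, u_\ell(r,E)\, Y_{\ell m}(\hat{\mathbf{r}}), & r < r_{\rm MT},
\end{cases}
$$

where $u_\ell(r,E)$ is the regular solution of the radial Schrödinger equation inside the muffin-tin sphere of radius $r_{\rm MT}$, and the coefficients $A_{\ell m}$ are fixed by expanding the plane wave in spherical harmonics and requiring continuity of $\phi_{\mathbf{k}}$ at the sphere boundary.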
|
8859a889e9d6eebf | We gratefully acknowledge support from
the Simons Foundation and member institutions.
Physics > Computational Physics
Title: Higher-order splitting algorithms for solving the nonlinear Schrödinger equation and their instabilities
Authors: Siu A. Chin
Abstract: Since the kinetic and the potential energy terms of the real-time nonlinear Schrödinger equation can each be solved exactly, the entire equation can be solved to any order via splitting algorithms. We verified the fourth-order convergence of some well known algorithms by solving the Gross-Pitaevskii equation numerically. All such splitting algorithms suffer from a latent numerical instability even when the total energy is very well conserved. A detailed error analysis reveals that the noise, or elementary excitations of the nonlinear Schrödinger equation, obeys the Bogoliubov spectrum and the instability is due to the exponential growth of high wave number noises caused by the splitting process. For a continuum wave function, this instability is unavoidable no matter how small the time step. For a discrete wave function, the instability can be avoided only for $\Delta t\, k_{\max}^{2} \lesssim 2\pi$, where $k_{\max}=\pi/\Delta x$.
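As a rough illustration of the splitting idea described in the abstract, here is a minimal second-order Strang step for the 1D Gross–Pitaevskii equation on a periodic grid (illustrative grid size, time step, and interaction strength; this is not one of the fourth-order algorithms analysed in the paper):

```python
import numpy as np

# 1D Gross-Pitaevskii equation: i psi_t = -(1/2) psi_xx + g |psi|^2 psi   (hbar = m = 1)
N, L, g, dt = 256, 2 * np.pi, 1.0, 1e-3
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)           # angular wave numbers
psi = np.exp(-(x - L / 2) ** 2) + 0j                 # arbitrary initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))   # normalize

def strang_step(psi):
    # half step of the nonlinear part (exact, since |psi| is conserved by it)
    psi = psi * np.exp(-1j * g * np.abs(psi) ** 2 * dt / 2)
    # full kinetic step, exact in Fourier space
    psi = np.fft.ifft(np.exp(-1j * k ** 2 * dt / 2) * np.fft.fft(psi))
    # second half step of the nonlinear part
    return psi * np.exp(-1j * g * np.abs(psi) ** 2 * dt / 2)

for _ in range(1000):
    psi = strang_step(psi)
print("norm after 1000 steps:", np.sum(np.abs(psi) ** 2) * (L / N))   # stays ~1
```

Each sub-step is exact on its own piece of the equation; the instability discussed in the abstract concerns how such exact sub-steps interact at high wave numbers.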
Comments: 10 pages, 8 figures, submitted to Phys. Rev. E
Subjects: Computational Physics (physics.comp-ph); Atomic Physics (physics.atom-ph)
DOI: 10.1103/PhysRevE.76.056708
Cite as: arXiv:0710.0396 [physics.comp-ph]
(or arXiv:0710.0396v1 [physics.comp-ph] for this version)
Submission history
From: Siu Chin
[v1] Mon, 1 Oct 2007 22:11:43 GMT (156kb)
a00cc5d873879b67 | DC and AC Josephson Effects
The upper graphic shows a Josephson junction: two superconductors, shown in gray, separated by a thin insulating barrier, all immersed in a liquid-helium cryostat. The junction is subjected to a DC voltage $V_0$ as well as a much lower-amplitude, high-frequency AC voltage $V_1\cos(\omega t)$, produced by microwave radiation. The current through the junction is monitored. By varying the radiation amplitude and frequency, as well as the barrier opacity, both the DC and the AC Josephson effects can be exhibited on the oscilloscope in the lower graphic. The voltage $V_0$ is swept across a range of several μV. The step structure shown is somewhat idealized, and will usually be much more irregular.
Contributed by: S. M. Blinder (January 2013)
Open content licensed under CC BY-NC-SA
Brian Josephson discovered in 1962 (and was awarded the Physics Nobel Prize in 1973) that Cooper pairs of superconducting electrons could tunnel through an insulating barrier, of the order of 10 Å wide, in the absence of any external voltage. This is to be distinguished from the tunneling of single electrons, which does require a finite voltage. This phenomenon is known as the DC Josephson effect.
A superconductor can be described by an order parameter, which is, in effect, the wavefunction describing the collective state of all the Cooper pairs. On the two sides of the barrier the wavefunctions can be represented by $\psi_1=\sqrt{\rho_1}\,e^{i\theta_1}$ and $\psi_2=\sqrt{\rho_2}\,e^{i\theta_2}$, respectively, in which $\rho_1$ and $\rho_2$ are Cooper-pair densities and $\theta_1$ and $\theta_2$ are the phases of the wavefunctions. The wavefunctions obey the coupled time-dependent Schrödinger equations
$i\hbar\,\partial\psi_1/\partial t = U_1\psi_1 + K\psi_2$ and $i\hbar\,\partial\psi_2/\partial t = U_2\psi_2 + K\psi_1$,
where $K$ is a barrier-opacity constant that depends on the width and composition of the insulating barrier and the temperature. For superconductivity to be operative, the entire system is immersed in a liquid-helium cryostat, so $T\approx 4$ K. For simplicity, the two superconductors are assumed to have the same composition; niobium (Nb) is a popular choice. The energy difference is $U_1-U_2=2eV$, where $V$ is the voltage across the junction and $2e$ is the charge of a Cooper pair. The phase difference is given by $\phi=\theta_2-\theta_1$. The current tunneling across the barrier is proportional to the time rate of change of the Cooper-pair densities $\rho_1$ and $\rho_2$.
The preceding can be solved for the two fundamental equations for the Josephson effect:
$I = I_c\sin\phi$ and $\dfrac{d\phi}{dt}=\dfrac{2eV}{\hbar}$, where $I_c$ is the critical current, above which superconductivity is lost. The critical current depends on $K$ and also on any external magnetic field (which we assume absent here).
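A compressed version of that algebra, in the notation used above (with equal pair densities $\rho_1\approx\rho_2$ assumed for the phase equation): substituting $\psi_j=\sqrt{\rho_j}\,e^{i\theta_j}$ into the coupled equations and separating real and imaginary parts gives

$$
\dot{\rho}_1 = -\dot{\rho}_2 = \frac{2K}{\hbar}\sqrt{\rho_1\rho_2}\,\sin\phi,
\qquad
\dot{\phi} = \dot{\theta}_2-\dot{\theta}_1 = \frac{U_1-U_2}{\hbar} = \frac{2eV}{\hbar},
$$

and identifying the tunneling current with $\dot{\rho}_1$ reproduces $I=I_c\sin\phi$, with $I_c$ proportional to $K\sqrt{\rho_1\rho_2}$.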
Suppose first that we apply a DC voltage $V_0$. The current is then given by $I=I_c\sin\!\big(\phi_0 + 2eV_0 t/\hbar\big)$. Since $\hbar$ in the second term is so small, the argument of the sine is immensely large and time-averages to zero. Thus the tunneling supercurrent is zero if $V_0\neq 0$. Only when $V_0=0$ do we observe a current $I=I_c\sin\phi_0$, with a maximum amplitude of $I_c$. The lower graphic shows an oscilloscope trace as $V_0$ is swept over a range of the order of several μV. This is, in essence, the DC Josephson effect, in which current flows only if $V_0=0$, but drops to zero if a DC voltage is applied. The "cross" at $V_0=0$ comes from superposed views of the supercurrent flowing in either direction in the course of each oscilloscope cycle.
With a DC voltage $V_0$ across the junction, the energy difference for a Cooper pair crossing the junction equals $2eV_0$, which would correspond to a photon of frequency $\nu = 2eV_0/h$. Such radiation has been measured with highly sensitive detectors.
The AC Josephson effect is observed if the DC voltage is augmented by a small-amplitude, high-frequency AC contribution, such that $V(t)=V_0+V_1\cos(\omega t)$. When the RF amplitude $V_1$ is zero, the system reverts to the DC effect. This is most readily accomplished by irradiating the junction with low-intensity microwave radiation in the range of 10–20 GHz. The junction current is then given by
$I = I_c\,J_n\!\big(2eV_1/\hbar\omega\big)\sin\phi_0$, for $2eV_0 = n\hbar\omega$, $n = 0, 1, 2, \dots$, again neglecting sinusoidal voltages of unobservably high frequencies.
As the voltage is swept, those points at which the radiation frequency obeys the resonance condition $2eV_0=n\hbar\omega$ exhibit a series of stepwise increases in current. These steps were first observed by S. Shapiro in 1963 and can be considered a definitive validation of the AC Josephson effect. The width of each voltage step is very precisely equal to $\hbar\omega/2e$.
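A minimal numerical illustration of this resonance condition (an illustrative sketch in reduced units with assumed parameters, not the code behind this Demonstration): integrate the phase for $V(t)=V_0+V_1\cos(\omega t)$ and time-average $I_c\sin\phi$; the average supercurrent is appreciable only near $V_0=n\hbar\omega/2e$.

```python
import numpy as np

# Reduced units: hbar = 2e = 1, so dphi/dt = V(t) and the resonance condition is V0 = n*omega.
omega, V1, Ic = 1.0, 0.6, 1.0
t = np.linspace(0, 4000, 400_000)

def avg_supercurrent(V0, phi0=np.pi / 2):
    # Exact integral of dphi/dt = V0 + V1*cos(omega*t)
    phi = phi0 + V0 * t + (V1 / omega) * np.sin(omega * t)
    return Ic * np.mean(np.sin(phi))

for V0 in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"V0 = {V0:3.1f} * omega  ->  <I> = {avg_supercurrent(V0):+.3f}")
# Only V0 = 0, 1, 2 (i.e. V0 = n*omega) leave a DC supercurrent; its size follows
# the Bessel factors J_n(V1/omega), which is the pattern behind the Shapiro steps.
```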
All plots are shown without numerical axis labels, since these magnitudes depend on the specific individual characteristics of instruments and materials. Thus, a qualitative description of these phenomena suffices.
Josephson junctions have important applications in SQUIDs (superconducting quantum interference devices), which can be very sensitive magnetometers, and for RSFQ (rapid single flux quantum) digital electronic devices. They have been suggested in several proposed designs for quantum computers. The Josephson effect provides a highly accurate frequency-to-voltage conversion, as expressed by the Josephson constant, $K_J = 2e/h \approx 483.6$ THz/V. Since frequency can be very precisely defined by the cesium atomic clock, the Josephson effect is now used as the basis of a practical high-precision definition of the volt.
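As a quick order-of-magnitude check using only standard constants (not values quoted in this Demonstration):

$$
\nu = \frac{2e}{h}\,V_0 \approx (483.6~\mathrm{THz/V})\times(10~\mu\mathrm{V}) \approx 4.8~\mathrm{GHz},
$$

so junction voltages of tens of microvolts correspond to the 10–20 GHz microwave range mentioned above.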
Snapshot 1: DC Josephson effect, with zero external voltage
Snapshot 2: no tunneling when barrier is sufficiently opaque
Snapshot 3: oscilloscope trace for typical AC Josephson effect, showing Shapiro steps
[1] R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, Volume III, Reading, MA: Addison-Wesley, 1965 pp. 21–14 ff.
[2] Wikipedia: Josephson effect
d4ace050feab1220 | Quantum theory
Splitting the Universe
Hugh Everett blew up quantum mechanics with his Many-Worlds theory in the 1950s. Physics is only just catching up
Sean Carroll
is a theoretical physicist at the California Institute of Technology. He specialises in quantum mechanics, gravitation, cosmology, statistical mechanics and foundations of physics. His latest book is Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime (2019). He lives in Los Angeles.
One of the most radical and important ideas in the history of physics came from an unknown graduate student who wrote only one paper, got into arguments with physicists across the Atlantic as well as his own advisor, and left academia after graduating without even applying for a job as a professor. Hugh Everett’s story is one of many fascinating tales that add up to the astonishing history of quantum mechanics, the most fundamental physical theory we know of.
Everett’s work happened at Princeton in the 1950s, under the mentorship of John Archibald Wheeler, who in turn had been mentored by Niels Bohr, the godfather of quantum mechanics. More than 20 years earlier, Bohr and his compatriots had established what came to be called the ‘Copenhagen Interpretation’ of quantum theory. It was never a satisfying set of ideas, but Bohr’s personal charisma and the desire on the part of scientists to get on with the fun of understanding atoms and particles quickly established Copenhagen as the only way for right-thinking physicists to understand quantum theory.
In the Copenhagen view, we distinguish between microscopic quantum systems and macroscopic observers. Quantum systems exist in superpositions of different possible measurement outcomes, called ‘wave functions’. A spinning electron, for example, has a wave function describing a superposition of ‘spin-up’ and ‘spin-down’. It’s not merely that we don’t know the spin of the electron, but that the value of the spin does not exist until it is measured. An observer, by contrast, obeys all the rules of familiar classical physics. At the moment that an observer measures a quantum system, that system’s wave function suddenly and unpredictably collapses, revealing some definite spin or whatever has been measured.
There are apparently, therefore, two completely different ways in which quantum systems evolve. When we’re not looking at them, wave functions change smoothly according to the Schrödinger equation, written down by Erwin Schrödinger in 1926. But when we do look at them, wave functions act in a totally different way, collapsing onto some particular outcome.
If this seems unsatisfying, you’re not alone. What exactly counts as a measurement? And what makes observers so special? If I’m made up of atoms that obey the rules of quantum mechanics, shouldn’t I obey the rules of quantum mechanics myself? Nevertheless, the Copenhagen approach became enshrined as conventional wisdom, and by the 1950s it was considered somewhat ill-mannered to question it.
That didn’t bother Everett. The seeds of his visionary idea, now known as the Many-Worlds formulation of quantum mechanics, can be traced to a late-night discussion in 1954 with fellow young physicists Charles Misner (also a student of Wheeler’s) and Aage Peterson (an assistant of Bohr’s, visiting from Copenhagen). All parties agree that copious amounts of sherry were consumed on the occasion.
Under Wheeler’s guidance, Everett had begun thinking about quantum cosmology: the study of the entire Universe as a quantum system. Clearly, he reasoned, if we’re going to talk about the Universe in quantum terms, we can’t carve out a separate classical realm. Every part of the Universe will have to be treated according to the rules of quantum mechanics, including the observers within it. There will be only a single quantum state, described by what Everett called the ‘universal wave function’.
If everything is quantum, and the Universe is described by a single wave function, how is measurement supposed to occur? It must be, Everett reasoned, when one part of the Universe interacts with another part of the Universe in some appropriate way. That is something that’s going to happen automatically, he noticed, simply due to the evolution of the universal wave function according to the Schrödinger equation. We don’t need to invoke any special rules for measurement at all; things bump into each other all the time.
Imagine that we have a spinning electron in some superposition of up and down. We also have a measuring apparatus, which according to Everett is a quantum system in its own right. Imagine that it can be in superpositions of three different possibilities: it can have measured the spin to be up, it can have measured the spin to be down, or it might not yet have measured the spin at all, which we call the ‘ready’ state.
The fact that the measurement apparatus does its job tells us how the quantum state of the combined spin + apparatus system evolves according to the Schrödinger equation. Namely, if we start with the apparatus in its ready state and the electron in a purely spin-up state, we are guaranteed that the apparatus evolves to a pure measured-up state, like so:
$|\!\uparrow\,\rangle \otimes |\text{ready}\rangle \;\longrightarrow\; |\!\uparrow\,\rangle \otimes |\text{measured up}\rangle$
The initial state on the left can be read as ‘the electron is in the up state, and the apparatus is in its ready state’, while the one on the right is ‘the electron is in the up state, and the apparatus has measured it to be up’.
Likewise, the ability to successfully measure a pure-down spin implies that the apparatus must evolve from ‘ready’ to ‘measured down’:
$|\!\downarrow\,\rangle \otimes |\text{ready}\rangle \;\longrightarrow\; |\!\downarrow\,\rangle \otimes |\text{measured down}\rangle$
What we want, of course, is to understand what happens when the initial spin is not in a pure up or down state, but in some superposition of both. The good news is that we already know everything we need. The rules of quantum mechanics are clear: if you know how the system evolves starting from two different states, the evolution of a superposition of both those states will just be a superposition of the two evolutions. In other words, starting from a spin in some superposition and the measurement device in its ready state, we have:
$\big(a\,|\!\uparrow\,\rangle + b\,|\!\downarrow\,\rangle\big)\otimes|\text{ready}\rangle \;\longrightarrow\; a\,|\!\uparrow\,\rangle\otimes|\text{measured up}\rangle + b\,|\!\downarrow\,\rangle\otimes|\text{measured down}\rangle$
The final state now is an entangled superposition: the spin is up and it was measured to be up, plus the spin is down and it was measured to be down. This is the clear, unambiguous, definitive final wave function for the combined spin + apparatus system, if all we do is evolve it according to the Schrödinger equation. The world has ‘branched’ into a superposition of these two possibilities.
Everett’s insight was as simple as it was brilliant: accept the Schrödinger equation. Both of those parts of the final superposition are actually there. But they can’t interact with each other; what happens in one branch has no effect on what happens in the other. They should be thought of as separate, equally real worlds.
This is the secret to Everettian quantum mechanics. We didn’t put the worlds in; they were always there, and the Schrödinger equation inevitably brings them to life. The problem is that we never seem to come across superpositions involving big macroscopic objects in our experience of the world.
The traditional remedy has been to monkey with the fundamental rules of quantum mechanics in one way or another. The Copenhagen approach is to disallow the treatment of the measurement apparatus as a quantum system in the first place, and to treat wave-function collapse as a separate way the quantum state can evolve. As Everett would later put it: ‘The Copenhagen Interpretation is hopelessly incomplete because of its a priori reliance on classical physics … as well as a philosophic monstrosity with a “reality” concept for the macroscopic world and denial of the same for the microcosm.’
The Many-Worlds formulation of quantum mechanics removes once and for all any mystery about the measurement process and collapse of the wave function. We don’t need special rules about making an observation: all that happens is that the wave function keeps chugging along in accordance with the Schrödinger equation. And there’s nothing special about what constitutes ‘a measurement’ or ‘an observer’ – a measurement is any interaction that causes a quantum system to become entangled with the environment, creating a branching into separate worlds, and an observer is any system that brings about such an interaction. Consciousness, in particular, has nothing to do with it. The ‘observer’ could be an earthworm, a microscope or a rock. There’s not even anything special about macroscopic systems, other than the fact that they can’t help but interact and become entangled with the environment. The price we pay for such a powerful and simple unification of quantum dynamics is a large number of separate worlds.
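A toy numerical version of that statement (a minimal sketch in generic qubit notation, not anything from Everett or from the book excerpted here): model the apparatus as a second qubit whose ‘ready’ state is |0⟩, and let the ‘measurement’ be a CNOT-type interaction; unitary Schrödinger evolution alone then produces the branched, entangled state.

```python
import numpy as np

# Basis ordering for |spin, apparatus>: |00>, |01>, |10>, |11>
# spin: 0 = up, 1 = down; apparatus: 0 = ready / "up" pointer, 1 = "down" pointer
spin = np.array([1, 1]) / np.sqrt(2)   # a|up> + b|down>, here a = b = 1/sqrt(2)
apparatus = np.array([1, 0])           # |ready>
state = np.kron(spin, apparatus)

# "Measurement" = an entangling interaction (CNOT: flip the apparatus iff the spin is down)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = cnot @ state

print(np.round(state, 3))
# -> [0.707 0. 0. 0.707]: (|up, measured up> + |down, measured down>) / sqrt(2),
# an entangled superposition produced by unitary evolution alone: the two "branches".
```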
Everett’s theory was a direct assault on Bohr’s picture, and he enjoyed illustrating this assault in vivid language
Even in theoretical physics, people do sometimes get lucky, hitting upon an important idea more because they were in the right place at the right time than because they were particularly brilliant. That’s not the case with Everett; those who knew him testify uniformly to his incredible intellectual gifts, and it’s clear from his writings that he had a thorough understanding of the implications of his ideas. Were he still alive, he would be perfectly at home in modern discussions of the foundations of quantum mechanics.
What was hard was getting others to appreciate those ideas, and that included his advisor. Wheeler was personally very supportive of Everett, but he was also devoted to his own mentor Bohr, and was convinced of the basic soundness of the Copenhagen approach. He simultaneously wanted Everett’s ideas to get a wide hearing, and to ensure that they weren’t interpreted as a direct assault on Bohr’s way of thinking about quantum mechanics.
Yet Everett’s theory was a direct assault on Bohr’s picture. Everett himself knew it, and enjoyed illustrating the nature of this assault in vivid language. In an early draft of his thesis, he used an analogy of an amoeba dividing to illustrate the branching of the wave function:
[O]ne can imagine an intelligent amoeba with a good memory. As time progresses, the amoeba is constantly splitting, each time the resulting amoebas having the same memories as the parent. Our amoeba hence does not have a life line, but a life tree.
Wheeler was put off by the blatantness of this (quite accurate) metaphor, scribbling in the margin of the manuscript: ‘Split? Better words needed.’ Advisor and student were constantly tussling over the best way to express the new theory, with Wheeler advocating caution and prudence while Everett favoured bold clarity.
In 1956, as Everett was working on finishing his dissertation, Wheeler visited Copenhagen and presented the new scenario to Bohr and his colleagues, including Petersen. He attempted to present it, anyway; by this time, the wave-functions-collapse-and-don’t-ask-embarrassing-questions-about-exactly-how school of quantum theory had hardened into conventional wisdom, and those who accepted it weren’t interested in revisiting the foundations when there was so much interesting applied work to be done. Letters from Wheeler, Everett and Petersen flew back and forth across the Atlantic, continuing when Wheeler returned to Princeton and helped Everett to craft the final form of his dissertation. It omitted many of the juicier sections Everett had originally composed, including examinations of the foundations of probability and information theory, and an overview of the quantum measurement problem, focusing instead on applications to quantum cosmology. (No amoebas appear in the published paper, but Everett did manage to insert the word ‘splitting’ in a footnote added in proof while Wheeler wasn’t looking.)
But Everett decided not to continue the academic fight. Before finishing his PhD, he accepted a job at the Weapons Systems Evaluation Group for the US Department of Defense, where he studied the effects of nuclear weapons. He would go on to do research on strategy, game theory and optimisation, and played a role in starting several new companies. It’s unclear to what extent Everett’s conscious decision not to apply for professorial positions was motivated by criticism of his upstart new theory, or simply by impatience with academia in general.
He did, however, maintain an interest in quantum mechanics, even if he never published on it again. After he defended his PhD and was already working for the Pentagon, Wheeler persuaded Everett to visit Copenhagen for himself and talk to Bohr and others. The visit didn’t go well; afterward Everett judged that it had been ‘doomed from the beginning’.
Bryce DeWitt, an American physicist who had edited the journal where Everett’s thesis appeared, wrote a letter to him complaining that the real world obviously didn’t ‘branch’, since we never experience such things. Everett replied with a reference to Copernicus’s similarly daring idea that the Earth moves around the Sun, rather than vice-versa: ‘I can’t resist asking: do you feel the motion of the Earth?’ DeWitt had to admit that was a pretty good response.
After mulling over the matter for a while, by 1970 DeWitt had become an enthusiastic Everettian. He put a great deal of effort into pushing the theory, which had languished in obscurity, toward greater public recognition. His strategies included an influential article in Physics Today in 1970, followed by an essay collection in 1973 that included at last the long version of Everett’s dissertation, as well as a number of commentaries. The collection was called simply The Many-Worlds Interpretation of Quantum Mechanics, a vivid name that has stuck ever since.
Presumably nature doesn’t work like that; it’s just quantum from the start
In 1976, Wheeler retired from Princeton and took up a position at the University of Texas, where DeWitt was also on the faculty. Together they organised a workshop in 1977 on the Many-Worlds theory, and Wheeler coaxed Everett into taking time off from his defence work in order to attend. The conference was a success, and Everett made a significant impression on the assembled physicists in the audience. Wheeler went so far as to propose a new research institute in Santa Barbara where Everett could return to full-time work on quantum mechanics, but ultimately nothing came of it.
Everett died in 1982, aged 51, of a sudden heart attack. He had not lived a healthy lifestyle, over-indulging in eating, smoking and drinking. His son Mark Oliver Everett (who would go on to form the band Eels) has said that he was originally upset with his father for not taking better care of himself. He later changed his mind:
I realise that there is a certain value in my father’s way of life. He ate, smoked and drank as he pleased, and one day he just suddenly and quickly died. Given some of the other choices I’d witnessed, it turns out that enjoying yourself and then dying quickly is not such a hard way to go.
But physics hasn’t forgotten him; if anything, Everett’s ideas are more relevant than ever. His attempts to understand quantum cosmology were ahead of their time, but modern physics has made slow but steady progress on appreciating how to reconcile gravity with quantum theory. And Everett was right; once the whole Universe is your subject of study, it doesn’t make much sense to carve out a special place for a classical observer.
In my own research, I’ve gone even farther, arguing that the quest for quantum gravity is being held back by physicists’ traditional strategy of taking a classical theory (such as Albert Einstein’s general relativity) and ‘quantising’ it. Presumably nature doesn’t work like that; it’s just quantum from the start. What we should do, instead, is start from a purely quantum wave function, and ask whether we can pinpoint individual ‘worlds’ within it that look like the curved spacetime of general relativity. Preliminary results are promising, with emergent geometry being defined by the amount of quantum entanglement between different parts of the wave function. Don’t quantise gravity; find gravity within quantum mechanics.
That approach fits very naturally into the Many-Worlds perspective, while not making much sense in other approaches to quantum foundations. Niels Bohr might have won the public-relations race in the 20th century, but Hugh Everett appears ready to pull ahead in the 21st.
This is an edited extract from ‘Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime’ by Sean Carroll, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Sean Carroll.
803822777b7277b9 | Normal Incident Long Wave Infrared Quantum Dash Quantum Cascade Photodetector
We demonstrate a quantum dash quantum cascade photodetector (QDash-QCD) by incorporating self-assembled InAs quantum dashes into the active region of a long-wave infrared QCD. A sensitive photoresponse to normal-incident light at 10 μm was observed, which is attributed to intersubband (ISB) transitions in the quantum well/quantum dash (QW/QDash) hybrid absorption region and the subsequent transfer of excited electrons along the stair-like extraction levels separated by the LO-phonon energy. The high-density InAs quantum dashes were formed in the Stranski-Krastanow mode, and the stair-like levels were formed by a lattice-matched InGaAs/InAlAs superlattice. A stable responsivity from 5 mA/W at 77 K to 3 mA/W at as high as 190 K was observed, which makes the QDash-QCD promising for high-temperature operation.
The quantum cascade photodetector (QCD) is a kind of ISB photodetector based on electron transitions between quantized subbands in the conduction band of semiconductor heterostructures. As a photovoltaic detector, the QCD works without an external bias voltage owing to its asymmetric conduction-band profile. This asymmetry derives from the stair-like subbands separated by the LO-phonon energy, obtained by choosing appropriate layer thicknesses of the superlattice in the extraction region. This design guarantees a negligible dark current, which makes QCDs promising for large focal plane array and small pixel applications [1, 2]. QCDs have been studied extensively from short wavelengths to THz wavelengths through the entire infrared spectrum [3–10]. However, the absorption of normal-incident light is limited by the polarization selection rule for ISB transitions in quantum wells, which restricts the possible applications of QCDs. This leads to strong interest in exploring the possibility of using the intersubband transitions in quantum dots (QDs) [11–13], quantum wires [14, 15], and also the dot-in-a-well structure [16, 17], instead of in a QW, to relax the polarization selection rule. Effective results have been achieved, because the in-plane confinement of the carriers allows the absorption of photons at normal incidence. Nevertheless, these devices sensitive to normal-incident light usually work in a photoconductive scheme. The strong dark current derived from the photoconductive working scheme seems to be an unavoidable weakness. Lately, quantum dot quantum cascade detectors (QD-QCDs) were demonstrated on both GaAs-based [18] and InP-based [19] material systems. The hybrid QW/QD absorption region and the stair-like extraction region allow the detector to respond to normal-incident light and work in a photovoltaic scheme at the same time.
Inspired by the concept of the QD-QCD, we incorporated quantum dashes into the absorption well of a long-wave infrared (LWIR) QCD [6] to form the QDash-QCD. In this letter, the high-density InAs dashes were formed in the Stranski-Krastanow mode on an unstrained InAlAs layer. This device shares the advantages of low dark current and the semi-3D confinement derived from quantum dashes [20, 21]. Operating at zero bias, the device responded to normal-incident radiation with negligible dark current.
The QDash-QCD structures were grown by molecular beam epitaxy on semi-insulating InP (001) substrates. Nineteen periods of the active region—consisting of a 10-nm-wide QW/QDash hybrid region followed by an extraction In0.52Al0.48As/In0.53Ga0.47As chirped superlattice—were inserted between a 500-nm-thick n-doped (1 × 1018 cm−3) In0.53Ga0.47As bottom contact layer and a 300-nm-thick n-In0.53Ga0.47As top contact layer. The active region has two components: the active infrared absorption hybrid region and the extraction region, which can be seen in Fig. 1. The absorption hybrid region A consisted of an InAs quantum dash layer and an InGaAs quantum well layer with a thin GaAs barrier layer between them. This region was n-doped with Si to about 4 × 1017 cm−3. The following extraction region from B to E was formed by a chirped In0.53Ga0.47As/In0.52Al0.48As superlattice. The thickness of the layer sequence of a whole period starting from the QDash-layer was as follows (in angstroms): 9(QDash)/8(GaAs)/83(QW)/47/39/25/43/19/54/16/66/17, with InAlAs layers in bold and InGaAs layers in regular. A control QW-QCD structure with a 10-nm InGaAs quantum well instead of the hybrid region and 30 periods of the active region was also grown. After growth, QDash- and QW-quantum cascade detectors with mesa of 200 μm × 200 μm were fabricated by a standard photolithography, wet chemical etching, metal deposition, and lift-off process.
Fig. 1
Energy band scheme of one period of the QDash-QCD
The InAs QDashes were obtained by self-assembly in the Stranski-Krastanow epitaxial growth mode, with a nominal growth rate of about 0.4 ML/s. After the quantum dash layer was deposited, 20 s of ripening time was given under As4 protection. The InAs QDash in this study is a kind of elongated nanostructure, as depicted in Fig. 2, whose cross section is similar to that of the quantum dot presented in ref. [22], with dimensions of ~17 nm along the [110] axis and ~2.3 nm along the [001] axis. The dimension along the $[1\bar{1}0]$ axis is about a hundred nanometers on average. Figure 2 shows an atomic force microscopy (AFM) image of a non-overgrown sample with a QDash layer on top of the device structure, where the QDashes assemble in a rather dense and parallel manner.
Fig. 2
AFM of an uncapped InAs self-assembled QDashes layer on top of the QDash-QCD structure
Measured and simulated x-ray diffraction rocking curves of the QDash-QCD are shown in Fig. 3, and the measured QW-QCD curve is also shown. As can be seen from Fig. 3, the dynamic simulation curve matches the experimental curve well, showing good control over the growth parameters across the entire epitaxy sequence. Besides, the incorporation of QDashes did not weaken the quality of the superlattice. The clear satellite peaks with good periodicity and narrow linewidths (full width at half maximum ~40 arcsec) demonstrate a high interfacial quality.
Fig. 3
XRD curves of the measured and simulated QDash-QCD structure along with the XRD curve of a QW-QCD structure
Results and Discussion
The conduction-band energy scheme of one period of the QDash-QCD at zero bias is shown in Fig. 1. The computation was based on a simplified model: the one-dimensional Schrödinger equation was solved under the envelope-function approximation, without considering the quantum confinement of the QDashes in the growth plane. The ground-state energy of the QW/QDash hybrid region was determined from the photoluminescence measurements. The InAs layer was simplified into a quantum well whose thickness was adjusted to match the measured ground-state energy. The calculation includes the energy dependence of the effective mass and the effect of the strain of the InAs layer (with respect to the InP substrate) on the band offset. The active absorption region is a "W-shaped" hybrid structure, consisting of InAs (QDash)/GaAs/InGaAs (QW). The dominant transition is between the hybrid levels A1 and A2, similar to the QW/QD mixed mode reported in refs. [18, 19, 23, 24], leading to a detection wavelength of 10 μm. To ensure an efficient escape process from the absorption region to the cascade extraction region and, at the same time, a considerable resistance to suppress dark current, a resonant tunneling process was designed between level A2 and level B1. Once the carriers tunnel to level B1, they transfer rapidly through a set of quantum stairs (from B1 to E1) separated by the LO-phonon energy in the cascade region. Finally, the excited electrons are transferred from level E1 toward the fundamental level A'1 of the next absorption region by emission of an LO phonon.
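As a rough sketch of that kind of one-band envelope-function calculation, the following solves the one-dimensional effective-mass Schrödinger equation for a single square well by finite differences (the well width, barrier height, and effective mass here are illustrative placeholders, not the paper's layer stack or material parameters):

```python
import numpy as np

# 1D effective-mass Schrodinger equation: -(hbar^2 / 2m*) d2F/dz2 + V(z) F = E F
hbar, m0, q = 1.055e-34, 9.109e-31, 1.602e-19
mstar = 0.043 * m0                      # illustrative InGaAs-like effective mass
N, Lz = 400, 40e-9                      # grid points, simulation box (m)
z = np.linspace(0, Lz, N)
dz = z[1] - z[0]

V = np.full(N, 0.52 * q)                # illustrative conduction-band offset (~0.52 eV)
V[np.abs(z - Lz / 2) < 5e-9] = 0.0      # a single 10-nm-wide well

# Finite-difference Hamiltonian with hard walls at the box edges
diag = hbar**2 / (mstar * dz**2) + V
off = -hbar**2 / (2 * mstar * dz**2) * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, F = np.linalg.eigh(H)
print("lowest confined energies (eV):", np.round(E[:3] / q, 3))
# The spacing between the first two subbands sets the intersubband detection
# wavelength via lambda = h*c / (E2 - E1).
```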
To figure out the impact of the InAs QDashes on the device, we measured the polarization-dependent response. Figure 4a shows the schematic diagram of the polarized spectral response measurements. The infrared light is incident normally on the 45° polished facet of the substrate. When the incident infrared light is s-polarized, the E-field is parallel to the growth plane, along the [110] direction. In the case of p-polarized light, the E-field has a 50% component along the growth direction, i.e., the [001] direction, and the other 50% component parallel to the growth plane along the $[1\bar{1}0]$ direction. Figure 4b shows the 45° facet configuration of the experimental setup in a cryostat. The whole device was mounted on a 45° facet holder which adhered to the cold finger of a cryostat. Figure 4c shows the dependence of the spectral response on polarization. The polarized response spectra were measured with a Nicolet 8700 Fourier transform infrared spectrometer (FTIR) in the 45° configuration under p-polarized and s-polarized light, respectively. As a control sample, a QW-QCD was also measured. It is to be noted that a red shift of the s-polarized response (1002 cm−1) of the QDash-QCD compared to the p-polarized response (1029 cm−1) was observed. That is because the dimension of a dash along the $[1\bar{1}0]$ axis is about a hundred nanometers, which gives weaker quantum confinement; in the case of p-polarized light, the component of the E-field in this direction can hardly induce ISB transitions. The dominant ISB transitions derive from the other 50% E-field component in the [001] direction, owing to the quantum confinement from the QDash and the QW at the same time. In the case of s-polarized light, the E-field is along the [110] direction, and the quantum confinement originating from the QDash can overcome the limitation of the polarization selection rule and induce ISB transitions. That is to say, in the case of the s-polarized response, the QW/QDash hybrid subbands participating in the transitions are more QDash-like than under p-polarized light. The incorporation of QDashes into the quantum well leads to a downward shift of both the ground state A1 and the excited state A2 and a tiny decrease of the spacing between these two states, which leads to a redshift of the response spectrum under s-polarized light. The peak of the s-polarized response spectrum lies exactly at the normal-incidence response peak position, showing that the QDashes work dominantly for normal-incidence absorption, as can be seen in Fig. 5a. Besides, the broadening of the spectra compared to the control sample shows obvious evidence of QDashes, due to the size distribution. What is more, the enhancement of the s/p ratio from 4.2% to 12.6% also indicates that the incorporation of quantum dashes into the quantum well enhances normal-incidence absorption. The enhanced s/p ratio of 12.6% is comparable to that of a traditional GaAs-based quantum dot infrared photodetector (QDIP), whose s/p ratio is about 13% [25].
Fig. 4
a The schematic diagram of the polarized spectral response measurements. b The schematic diagram of the 45° facet configuration experimental setup in a cryostat. c The s- and p-polarized infrared spectral responses of the QDash-QCD and the control QW-QCD
Fig. 5
a Response spectra of the QDash-QCD at 77 and 190 K. Left inset: responsivity of the QDash-QCD from 77 to 190 K. Right inset: responsivity of the QDash-QCD at 10.26 μm from 77 to 300 K in every 10 K. b Dependence of detectivity and R 0 A (product of resistance at zero bias by area of the mesa) of the QDash-QCD on temperature
The normal-incidence spectral response was also measured with the Nicolet 8700 FTIR, and the responsivity was calibrated with a circularly polarized CO2 laser at a wavelength of 10.26 μm. Figure 5a shows the spectra measured at 77 and 190 K at zero bias voltage; the main peak of the photoresponse lies at 10 μm, within the atmospheric window of 8–12 μm. The left inset shows the responsivity of the QDash-QCD versus temperature from 77 to 190 K. The QDash-QCD shows a stable responsivity from 5 mA/W at 77 K to 3 mA/W at 190 K, a decrease of 40%. As a comparison, a decrease of 83%, from 28 mA/W at 77 K to 4.2 mA/W at 190 K, was observed in a control QW-QCD with the same doping. This characteristic is similar to that of the QD-QCD in ref. [22]: the incorporation of QDs into the QCD leads to a more stable responsivity than the control sample, but with a lower value. That is because the photoresponse is based on ISB transitions in the hybrid region, and the in-plane quantum confinement of the nanostructures leads to the stable responsivity. The factors that lead to the low responsivity, such as the size dispersion of the nanostructures, are now under study. To improve the responsivity of the QDash-QCD, we may increase the doping density, as we did in ref. [22]. However, for a QCD, the Johnson-noise-limited detectivity can be obtained from
$$ D_{\mathrm{J}}^{*}=R_{\mathrm{p}}\sqrt{\frac{R_0A}{4k_{\mathrm{B}}T}}, \qquad (1) $$
where $R_{\rm p}$ is the peak responsivity, $R_0A$ is the product of the resistance at zero bias and the area of the mesa, $k_{\rm B}$ is the Boltzmann constant, and $T$ is the temperature. To optimize the performance, it is necessary to have a high spectral responsivity and a high resistance at the same time. But increasing the doping density in LWIR detectors seems to produce unavoidable noise. In this situation, improving the quantum confinement of the nanostructures to obtain an enhanced s/p ratio is a more efficient way to improve the responsivity than increasing the doping density. We can move from the semi-3D confinement of QDashes to the 3D confinement of QDs, or even to submonolayer QDs, to further improve the quantum confinement [25], provided a suitable growth method can be designed, as in refs. [19, 24]. The right inset of Fig. 5a shows the response of the QDash-QCD at 10.26 μm from 77 K to 300 K. A responsivity of 0.59 mA/W was observed at 300 K, indicating the possibility of uncooled operation. In addition to the main peak at 1022 cm−1, a side peak at 750 cm−1 was observed. It originates from transitions from level A1 to level C1 in the extraction region; this is also a thermally excited channel, as can be deduced from the dark current measurements.
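For orientation, plugging representative numbers into Eq. (1) (the 77 K responsivity below is the value quoted in this paper; the $R_0A$ value is only an assumed illustrative number, not one reported by the authors):

```python
import numpy as np

kB = 1.381e-23      # J/K
T = 77.0            # K
Rp = 5e-3           # A/W, peak responsivity at 77 K (from the text)
R0A = 7.0           # ohm*cm^2, assumed illustrative value (not from the paper)

D_star = Rp * np.sqrt(R0A / (4 * kB * T))     # in cm Hz^(1/2) / W (Jones)
print(f"D* ~ {D_star:.1e} cm Hz^1/2 W^-1")    # ~2e8 Jones for these inputs
```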
Dark currents at different temperatures were measured using a Keithley 2635B source meter in pulsed current measurement mode, with the sample optically and thermally shielded. The current density at zero bias is about $10^{-5}$ A/cm$^2$, which is a desirable value for a long-wave infrared photodetector. The low dark current originates mainly from the photovoltaic working scheme and partially from the semi-3D confinement of the QDashes. The $R_0A$ values (the product of the resistance at zero bias and the area of the mesa) at different temperatures were obtained from the dark I–V curves and are plotted in Fig. 5b as a function of the inverse temperature. The activation energy of 62 meV was then deduced from the slope of the Arrhenius plot of $R_0A$ versus reciprocal temperature. Together with the Fermi energy of 21 meV from the Si doping, this indicates a leakage current channel between levels A1 and C1 with an energy spacing of 93 meV, which means that many carriers participate in this transition and contribute to the side peak at 750 cm−1. The Johnson-noise-limited detectivity can then be obtained according to Eq. (1). Figure 5b shows the detectivity versus temperature from 77 K to 190 K. A detectivity of $2\times10^{8}$ cm Hz$^{1/2}$ W$^{-1}$ was achieved at 77 K, and this value decreased to $4.6\times10^{6}$ cm Hz$^{1/2}$ W$^{-1}$ at 190 K.
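A sketch of how such an activation energy is typically extracted from $R_0A(T)$ (synthetic data for illustration only; the 62 meV value and the measured curves are the paper's):

```python
import numpy as np

kB = 8.617e-5                              # Boltzmann constant, eV/K
T = np.array([77., 90., 110., 130., 150., 170., 190.])
Ea_true = 0.062                            # eV, used only to generate synthetic data
R0A = 1e-2 * np.exp(Ea_true / (kB * T))    # thermally activated behaviour, R0A ~ exp(Ea / kB T)

# Arrhenius plot: ln(R0A) is linear in 1/T with slope Ea/kB
slope, _ = np.polyfit(1.0 / T, np.log(R0A), 1)
print(f"activation energy ~ {slope * kB * 1e3:.1f} meV")   # recovers ~62 meV
```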
Normal-incidence response has been demonstrated in the QDash-QCD by incorporating quantum dashes into the absorption region of an LWIR QCD. With a detection wavelength of 10 μm, the QDash-QCD possessed a detectivity of $2\times10^{8}$ cm Hz$^{1/2}$ W$^{-1}$ along with a responsivity of 5 mA/W at liquid-nitrogen temperature. It is noteworthy that a more stable responsivity than the control QW-QCD has been achieved, and the QDash-QCD presented here can work at up to 190 K, indicating the potential for high-temperature operation.
Abbreviations
QCD: Quantum cascade photodetector
QDash: Quantum dash
QW: Quantum well
QD: Quantum dot
LO: Longitudinal optical
3D: Three-dimensional
AFM: Atomic force microscopy
FTIR: Fourier transform infrared spectrometer
QDIP: Quantum dot infrared photodetector
R0A: The product of the resistance at zero bias and the area of the mesa
LWIR: Long-wave infrared
References
1. Gendron L, Koeniguer C, Berger V, Marcadet X (2005) High resistance narrow band quantum cascade photodetectors. Appl Phys Lett 86:121116
2. Harrer A, Schwarz B, Schuler S, Reininger P, Wirthmuller A, Detz H et al (2016) 4.3 μm quantum cascade detector in pixel configuration. Opt Express 24:15
3. Andresen BF, Buffaz A, Carras M, Doyennette L, Nedelcu A, Bois P et al (2010) State of the art of quantum cascade photodetectors. Proc SPIE 7660:76603Q
4. Giorgetta FR, Baumann E, Graf M, Yang QK, Manz C, Köhler K et al (2009) Quantum cascade detectors. IEEE J Quantum Electron 45:8
5. Sakr S, Giraud E, Dussaigne A, Tchernycheva M, Grandjean N, Julien FH (2012) Two-color GaN/AlGaN quantum cascade detector at short infrared wavelengths of 1 and 1.7 μm. Appl Phys Lett 100:181103
6. Graf M, Hoyler N, Giovannini M, Faist J, Hofstetter D (2006) InP-based quantum cascade detectors in the mid-infrared. Appl Phys Lett 88:241118
7. Kong N, Liu JQ, Li L, Liu FQ, Wang LJ, Wang ZG et al (2010) A 10.7 μm InGaAs/InAlAs quantum cascade detector. Chin Phys Lett 27:128503
8. Giorgetta FR, Baumann E, Graf M, Ajili L, Hoyler N, Giovannini M et al (2007) 16.5 μm quantum cascade detector using miniband transport. Appl Phys Lett 90:231111
9. Zhai SQ, Liu JQ, Wang XJ, Zhuo N, Liu FQ, Wang ZG et al (2013) 19 μm quantum cascade infrared photodetectors. Appl Phys Lett 102:191120
10. Graf M, Scalari G, Hofstetter D, Faist J, Beere H, Linfield E et al (2004) Terahertz range quantum well infrared photodetector. Appl Phys Lett 84:475
11. Ryzhii V (1996) The theory of quantum-dot infrared phototransistors. Semicond Sci Technol 11:759–765
12. Pan D, Towe E, Kennerly S (1998) Normal-incidence intersubband (In, Ga)As/GaAs quantum dot infrared photodetectors. Appl Phys Lett 73:14
13. Martyniuk P, Rogalski A (2008) Quantum-dot infrared photodetectors: status and outlook. Prog Quantum Electron 32:89–120
14. Tsai CL, Cheng KY, Chou ST, Lin SY (2007) InGaAs quantum wire infrared photodetector. Appl Phys Lett 91:181105
15. Das B, Singaraju P (2005) Novel quantum wire infrared photodetectors. Infrared Physics & Technology 46:209–218
16. Krishna S (2005) Quantum dots-in-a-well infrared photodetectors. J Phys D: Appl Phys 38:2142–2150
17. Raghavan S, Rotella P, Stintz A, Fuchs B, Krishna S, Morath C et al (2002) High-responsivity, normal-incidence long-wave infrared (λ 7.2 μm) InAs/In0.15Ga0.85As dots-in-a-well detector. Appl Phys Lett 81:1369
18. Barve AV, Krishna S (2012) Photovoltaic quantum dot quantum cascade infrared photodetector. Appl Phys Lett 100:021105
19. Wang XJ, Zhai SQ, Zhuo N, Liu JQ, Liu FQ, Liu SM et al (2014) Quantum dot quantum cascade infrared photodetector. Appl Phys Lett 104:171108
20. Liverini V, Bismuto A, Nevou L, Beck M, Gramm F, Müller E et al (2011) InAs/AlInAs quantum-dash cascade structures with electroluminescence in the mid-infrared. J Cryst Growth 323:491–495
21. Miska P, Even J, Platz C, Salem B, Benyattou T, Bru-Chevalier C et al (2004) Experimental and theoretical investigation of carrier confinement in InAs quantum dashes grown on InP(001). J Appl Phys 95:1074
22. Wang FJ, Zhuo N, Liu SM, Ren F, Ning ZD, Ye XL et al (2016) Temperature independent infrared responsivity of a quantum dot quantum cascade photodetector. Appl Phys Lett 108:251103
23. Chou ST, Tseng CC, Chen CN, Lin WH, Lin SY, Wu MC (2008) Quantum-dot/quantum-well mixed-mode infrared photodetectors for multicolor detection. Appl Phys Lett 92:253510
24. Zhuo N, Liu FQ, Zhang JC, Wang LJ, Liu JQ, Zhai SQ et al (2014) Quantum dot cascade laser. Nanoscale Res Lett 9:144
25. Kim JO, Sengupta S, Barve AV, Sharma YD, Adhikary S, Lee SJ et al (2013) Multi-stack InAs/InGaAs sub-monolayer quantum dots infrared photodetectors. Appl Phys Lett 102:011131
This work was supported by the National Basic Research Program of China (Grant Nos. 2013CB632804/02 and 2014CB643903) and the National Natural Science Foundation of China (Grant Nos. 11274301 and 61376501). We thank Ping Liang and Ying Hu for their help in device processing.
Authors’ contributions
FJW designed the structure, fabricated the device, performed the testing, and wrote the paper. FQL and SML designed the structure, provided the concept, wrote the paper, and supervised the project. FR and JQL supervised the testing. NZ and SQZ completed the MBE growth. ZGW supervised the project. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Author information
Correspondence to Shu-Man Liu or Feng-Qi Liu.
Cite this article
Wang, F., Ren, F., Liu, S. et al. Normal Incident Long Wave Infrared Quantum Dash Quantum Cascade Photodetector. Nanoscale Res Lett 11, 392 (2016).
• Quantum dash
• Quantum cascade
• Infrared detector |
afc806ddf58aa1d3 | Current Issue
Volume 34 Issue 6
From Nothing to Something II: Nonlinear Systems via Consistent Correlated Bang
Sen-Yue Lou
Chin. Phys. Lett. 2017, 34 (6): 060201 . DOI: 10.1088/0256-307X/34/6/060201
The Chinese ancient sage Laozi said that everything comes from 'nothing'. In the work [Chin. Phys. Lett. 30 (2013) 080202], infinitely many discrete integrable systems have been obtained from nothing via simple principles (Dao). In this study, a new idea, the consistent correlated bang, is introduced to obtain nonlinear dynamic systems including some integrable ones such as the continuous nonlinear Schrödinger equation, the (potential) Korteweg de Vries equation, the (potential) Kadomtsev–Petviashvili equation and the sine-Gordon equation. These nonlinear systems are derived from nothing via suitable 'Dao', the shifted parity, the charge conjugate, the delayed time reversal, the shifted exchange, the shifted-parity-rotation and so on.
A High-Order Conservative Numerical Method for Gross–Pitaevskii Equation with Time-Varying Coefficients in Modeling BEC
Xiang Li, Xu Qian, Ling-Yan Tang, Song-He Song
Chin. Phys. Lett. 2017, 34 (6): 060202 . DOI: 10.1088/0256-307X/34/6/060202
We propose a high-order conservative method for the nonlinear Schrödinger/Gross–Pitaevskii equation with time-varying coefficients in modeling Bose–Einstein condensation (BEC). This scheme, which combines the sixth-order compact finite difference method and the fourth-order average vector field method, finely describes the condensate wave function and physical characteristics in some small potential wells. Numerical experiments are presented to demonstrate that our numerical scheme is efficient by comparison with the Fourier pseudo-spectral method. Moreover, it preserves several conservation laws well, and even exactly under some specific conditions.
Stability of Dirac Equation in Four-Dimensional Gravity
F. Safari, H. Jafari, J. Sadeghi, S. J. Johnston, D. Baleanu
Chin. Phys. Lett. 2017, 34 (6): 060301 . DOI: 10.1088/0256-307X/34/6/060301
We introduce the Dirac equation in four-dimensional gravity in a generally covariant form. We choose a suitable variable and solve the corresponding equation. To solve this equation and obtain the corresponding bispinor, we employ the factorization method, which introduces the associated Laguerre polynomials. The associated Laguerre polynomials help us to write the Dirac equation of four-dimensional gravity in the form of a shape invariance equation. Thus we write the shape invariance condition with respect to the secondary quantum number. Finally, we obtain the spinor wave function and the corresponding stability condition for the four-dimensional gravity system.
Observation of Topological Links Associated with Hopf Insulators in a Solid-State Quantum Simulator
X.-X. Yuan, L. He, S.-T. Wang, D.-L. Deng, F. Wang, W.-Q. Lian, X. Wang, C.-H. Zhang, H.-L. Zhang, X.-Y. Chang, L.-M. Duan
Chin. Phys. Lett. 2017, 34 (6): 060302 . DOI: 10.1088/0256-307X/34/6/060302
Hopf insulators are intriguing three-dimensional topological insulators characterized by an integer topological invariant. They originate from the mathematical theory of Hopf fibration and epitomize the deep connection between knot theory and topological phases of matter, which distinguishes them from other classes of topological insulators. Here, we implement a model Hamiltonian for Hopf insulators in a solid-state quantum simulator and report the first experimental observation of their topological properties, including nontrivial topological links associated with the Hopf fibration and the integer-valued topological invariant obtained from a direct tomographic measurement. Our observation of topological links and Hopf fibration in a quantum simulator opens the door to probe rich topological properties of Hopf insulators in experiments. The quantum simulation and probing methods are also applicable to the study of other intricate three-dimensional topological model Hamiltonians.
Spherically Symmetric Wormhole Gravitational Lens Deflection Angle Signifying Braneworld Cosmology
Anuar Alias, Wan Ahmad Tajuddin Wan Abdullah
Chin. Phys. Lett. 2017, 34 (6): 060401 . DOI: 10.1088/0256-307X/34/6/060401
Previously, the gravitational lens of a wormhole was introduced by various researchers. Their treatment focused mainly on the lens signature that describes the wormhole's geometrical character, such as the differences from a black hole or between various types of wormhole models. The braneworld scenario provides the idea of spacetime with underlying extra dimensions. The inclusion of extra-dimensional terms in the spacetime line element of the lens object will result in some variation in the expression for its gravitational lens deflection angle. Thus in this paper we investigate such variation by deriving this deflection angle expression. As such, this paper not only shows the existence of such variation but also suggests the potential use of gravitational lensing to prove the existence of extra dimensions by studying the deflection angle characteristic in accordance with the spacetime expansion rate of the universe.
Localization of Vector Field on Pure Geometrical Thick Brane
Tao-Tao Sui, Li Zhao
Chin. Phys. Lett. 2017, 34 (6): 061101 . DOI: 10.1088/0256-307X/34/6/061101
We investigate the localization of a five-dimensional vector field on a pure geometrical thick brane. By introducing two types of interactions between the vector field and the background scalar field, we obtain a typical volcano potential for the first type of coupling and a Pöschl–Teller potential for the second one. These two types of couplings guarantee that the vector zero mode can be localized on the pure geometrical thick brane under certain conditions.
The $B-L+xY$ Model and the Higgs-strahlung Processes at a Collider
Wen-Qi Cao, Chong-Xing Yue
Chin. Phys. Lett. 2017, 34 (6): 061401 . DOI: 10.1088/0256-307X/34/6/061401
The gauge extension of the standard model with the $U(1)_{B-L+xY}$ symmetry predicts the existence of a light gauge boson $Z'$ with small couplings to ordinary fermions. We discuss its contributions to the muon anomalous magnetic moment $a_{\mu}$. Taking account of the constraints on the relevant free parameters, we further calculate the contributions of the light gauge boson $Z'$ to the Higgs-strahlung processes $e^{+}e^{-}\rightarrow ZH$ and $e^{+}e^{-}\rightarrow Z'H$.
Elliptic Flow Splitting between Particles and their Antiparticles in Au+Au Collisions from a Multiphase Transport Model
Zhen-Yu Xu, Jian-Li Liu, Pan-Pan Zhang, Jing-Bo Zhang, Lei Huo
Chin. Phys. Lett. 2017, 34 (6): 062501 . DOI: 10.1088/0256-307X/34/6/062501
The elliptic flow $v_2$ for $\pi^\pm$, $K^\pm$, $p$ and $\bar {p}$ in Au+Au collisions at center-of-mass energies $\sqrt {s_{_{\rm NN}}}=7.7$, 11.5, 14.5 and 19.6 GeV is analyzed using a multiphase transport model. A significant difference in the $v_2$ values for $p$ and $\bar {p}$ is observed, and the $v_2$ splitting is larger than that between $\pi^+$ and $\pi^-$ or between $K^+$ and $K^-$. The difference increases with decreasing center-of-mass energy. The effect of the quark coalescence mechanism in the multiphase transport model on the elliptic flow difference $\Delta v_2$ between $p$ and $\bar {p}$ is discussed. The simulation of Au+Au collisions at 14.5 GeV shows that the effect of the hadron cascade on $\Delta v_2$ is not obvious, while a larger parton-scattering cross section can lead to a larger $\Delta v_2$.
Angle-Resolved Electron Spectra of F$^{-}$ Ions by Few-Cycle Laser Pulses
Jian-Hong Chen, Song-Feng Zhao, Guo-Li Wang, Xiao-Ping Zheng, Zheng-Rong Zhang
Chin. Phys. Lett. 2017, 34 (6): 063201 . DOI: 10.1088/0256-307X/34/6/063201
The above-threshold detachment of F$^{-}$ ions induced by a linearly polarized few-cycle laser pulse is investigated theoretically using the strong-field approximation model without considering the rescattering mechanism. We first derive an analytical form of transition amplitude for describing the strong-field photodetachment of F$^{-}$ ions. The integration over time in transition amplitude can be performed using the numerical integration method or the saddle-point (SP) method of Shearer et al. [Phys. Rev. A 88 (2013) 033415]. The validity of the SP method is carefully examined by comparing the energy spectra and photoelectron angular distributions (PADs) with those obtained from the numerical integration method. By considering the volume effect of a focused laser beam, both the energy spectra and the low-energy PADs calculated by the numerical integration method agree very well with the experimental results.
Measurement of Heating Rates in a Microscopic Surface-Electrode Ion Trap
Jiu-Zhou He, Lei-Lei Yan, Liang Chen, Ji Li, Mang Feng
Chin. Phys. Lett. 2017, 34 (6): 063701 . DOI: 10.1088/0256-307X/34/6/063701
We report measurements of the heating rates of $^{40}$Ca$^{+}$ ions confined in our home-made microscopic surface-electrode trap, obtained by a Doppler recooling method. The ions are trapped approximately 800 μm above the surface and are subjected to heating due to various noise sources in the trap. Three to five ions are used to measure the heating rates precisely and efficiently. We show how the heating rates vary with the number and position of the ions as well as with the radio-frequency power, which is helpful for understanding the trap imperfections.
Dick Effect in the Integrating Sphere Cold Atom Clock
Xiu-Mei Wang, Yan-Ling Meng, Ya-Ning Wang, Jin-Yin Wan, Ming-Yuan Yu, Xin Wang, Ling Xiao, Tang Li, Hua-Dong Cheng, Liang Liu
Chin. Phys. Lett. 2017, 34 (6): 063702 . DOI: 10.1088/0256-307X/34/6/063702
The Dick effect is an important factor limiting the frequency stability of sequentially-operating atomic frequency standards. Here we study the impact of the Dick effect in the integrating sphere cold atom clock (ISCAC). To reduce the impact of the Dick effect, a 5 MHz local oscillator with ultra-low phase noise is selected and a new microwave synthesizer is built in-house. Consequently, the phase noise of microwave signal is optimized. The contribution of the Dick effect is reduced to $2.5\times 10^{-13}\tau ^{-1/2}$ ($\tau $ is the integrating time). The frequency stability of $4.6\times 10^{-13}\tau ^{-1/2}$ is achieved. The development of this optimization can promote the space applications of the compact ISCAC.
Compact Optical Add-Drop De-Multiplexers with Cascaded Micro-Ring Resonators on SOI
Huan Guan, Zhi-Yong Li, Hai-Hua Shen, Yu-De Yu
Chin. Phys. Lett. 2017, 34 (6): 064201 . DOI: 10.1088/0256-307X/34/6/064201
A four-channel integrated optical wavelength de-multiplexer is experimentally illustrated on a silicon-on-insulator (SOI) substrate. With the aid of cascaded micro-ring resonators, the whole performance of the wavelength de-multiplexer is improved, such as 3 dB bandwidth and channel crosstalk. Based on the transform matrix theory, a four-channel wavelength de-multiplexer with average channel spacing 4.5$\pm$0.5 nm (3 dB bandwidth $\sim 2\pm 0.5$ nm) is demonstrated at telecommunication bands. For each channel, the extinction at the adjacent channel is below $-$39 dB and the out-of-band rejection ratio is up to 40 dB. The channel dropping loss is below 5 dB in the five FSR spectral response periods (near 100 nm).
The 5.2kW Nd:YAG Slab Amplifier Chain Seeded by Nd:YVO$_{4}$ Innoslab Laser
Lei Liu, Shou-Huan Zhou, Yang Liu, Zhe Wang, Gang Wang, Hong Zhao
Chin. Phys. Lett. 2017, 34 (6): 064202 . DOI: 10.1088/0256-307X/34/6/064202
A high power Nd:YAG end-pumped slab amplifier chain with a Nd:YVO$_{4}$ innoslab laser as the master oscillator is demonstrated. A chain output power of 5210 W with beam quality of 4 times the diffraction limit is achieved by double-passing the first amplifier stage and single-passing the second stage with an optical efficiency of 29% while working at a frequency of 1 kHz and pulse width of 200 μs.
High-Stability High-Energy Picosecond Optical Parametric Chirped Pulse Amplifier as a Preamplifier in Nd:Glass Petawatt System for Contrast Enhancement
Ting-Rui Huang, Xue Pan, Peng Zhang, Jiang-Feng Wang, Xiao-Ping Ouyang, Xue-Chun Li
Chin. Phys. Lett. 2017, 34 (6): 064203 . DOI: 10.1088/0256-307X/34/6/064203
We demonstrate a novel picosecond optical parametric preamplification to generate high-stability, high-energy and high-contrast seed pulses. The 5 ps seed pulse is amplified from 60 pJ to 300 $\mu$J with an 8.6 ps/ 3 mJ pump laser in a signal stage of short pulse non-collinear optical parametric chirped pulse amplification. The total gain is more than 10$^{6}$ and the rms energy stability is under 1.35%. The contrast ratio is higher than 10$^{8}$ within a scale of 20 ps before the main pulse. Consequently, the improvement factor of the signal contrast is approximately equal to the gain 10$^{6}$ outside the pump window.
Temperature and Pressure inside Sonoluminescencing Bubbles Based on Asymmetric Overlapping Sodium Doublet
Tai-Yang Zhao, Wei-Zhong Chen, Sheng-De Liang, Xun Wang, Qi Wang
Chin. Phys. Lett. 2017, 34 (6): 064301 . DOI: 10.1088/0256-307X/34/6/064301
We experimentally measure the sodium $D$-lines from the multibubble sonoluminescence in sodium hydroxide aqueous solution. The asymmetric overlapping $D$-lines are successfully decomposed based on the Fourier transform analysis. The line broadening of the decomposed sodium $D$-lines shows the effective temperature of 3600–4500 K and the pressure of 560–1000 atm during sonoluminescence.
Influence of Change in Inner Layer Thickness of Composite Circular Tube on Second-Harmonic Generation by Primary Circumferential Ultrasonic Guided Wave Propagation
Ming-Liang Li, Ming-Xi Deng, Guang-Jian Gao, Han Chen, Yan-Xun Xiang
Chin. Phys. Lett. 2017, 34 (6): 064302 . DOI: 10.1088/0256-307X/34/6/064302
The influence of change in inner layer thickness of a composite circular tube is investigated on second-harmonic generation (SHG) by primary circumferential ultrasonic guided wave (CUGW) propagation. Within a second-order perturbation approximation, the nonlinear effect of primary CUGW propagation is treated as a second-order perturbation to its linear response. It is found that change in inner layer thickness of the composite circular tube will influence the efficiency of SHG by primary CUGW propagation in several aspects. In particular, with change in inner layer thickness, the phase velocity matching condition that is originally satisfied for the primary and double-frequency CUGW mode pair selected may no longer be satisfied. This will remarkably influence the efficiency of SHG by primary CUGW propagation. Theoretical analyses and numerical results show that the effect of SHG by primary CUGW propagation is very sensitive to change in inner layer thickness, and it can be used to accurately monitor a minor change in inner layer thickness of the composite circular tube.
Valid Regions of Formulas of Sound Speed in Bubbly Liquids
Yu-Ning Zhang, Zhong-Yu Guo, Yu-Hang Gao, Xiao-Ze Du
Chin. Phys. Lett. 2017, 34 (6): 064701 . DOI: 10.1088/0256-307X/34/6/064701
There are numerous formulae relating to the predictions of sound wave in the cavitating and bubbly flows. However, the valid regions of those formulae are rather unclear from the view point of physics. In this work, the validity of the existing formulae is discussed in terms of three regions by employing the analysis of three typical lengths involved (viscous length, thermal diffusion length and bubble radius). In our discussions, viscosity and thermal diffusion are both considered together with the effects of relative motion between bubbles and liquids. The importance of relative motion and thermal diffusion are quantitatively discussed in a wide range of parameter zones (including bubble radius and acoustic frequency). The results show that for large bubbles, the effects of relative motion will be prominent in a wide region.
Nonlinear Propagation of Positron-Acoustic Periodic Travelling Waves in a Magnetoplasma with Superthermal Electrons and Positrons
E. F. EL-Shamy
Chin. Phys. Lett. 2017, 34 (6): 065201 . DOI: 10.1088/0256-307X/34/6/065201
The nonlinear propagation of positron acoustic periodic (PAP) travelling waves in a magnetoplasma composed of dynamic cold positrons, superthermal kappa distributed hot positrons and electrons, and stationary positive ions is examined. The reductive perturbation technique is employed to derive a nonlinear Zakharov–Kuznetsov equation that governs the essential features of nonlinear PAP travelling waves. Moreover, the bifurcation theory is used to investigate the propagation of nonlinear PAP periodic travelling wave solutions. It is found that kappa distributed hot positrons and electrons provide only the possibility of existence of nonlinear compressive PAP travelling waves. It is observed that the superthermality of hot positrons, the concentrations of superthermal electrons and positrons, the positron cyclotron frequency, the direction cosines of wave vector $k$ along the $z$-axis, and the concentration of ions play pivotal roles in the nonlinear propagation of PAP travelling waves. The present investigation may be used to understand the formation of PAP structures in the space and laboratory plasmas with superthermal hot positrons and electrons.
Compressive Behavior of TATB Grains inside TATB-Based PBX Revealed by In-Situ Neutron Diffraction
Yi Tian, Hong Wang, Chang-Sheng Zhang, Qiang Tian, Wei-Bin Zhang, Hong-Jia Li, Jian Li, Ben-De Liu, Guang-Ai Sun, Tai-Ping Peng, Yao Xu, Jian Gong
Chin. Phys. Lett. 2017, 34 (6): 066101 . DOI: 10.1088/0256-307X/34/6/066101
We investigate the (002) lattice strain evolution of triaminotrinitrobenzene (TATB) grains inside one TATB-based plastic bonded explosive (PBX) through the in-situ neutron diffraction. By comparing the untreated specimen with the thermal-treated one, it is found that the volume-average response of measured TATB grains remains nearly elastic during quasi-static uniaxial compression. The observed changes in TATB (002) lattice strains correlate tightly with the evolution of damage. A damage parameter defined by the macroscopically determined residual strain is further used to describe the damage degree of PBX, which suggests that the compressive behavior of TATB-based PBX is significantly influenced by the damage evolution.
Local Heating in a Normal-Metal–Quantum-Dot–Superconductor System without Electric Voltage Bias
Li-Ling Zhou, Xue-Yun Zhou, Rong Cheng, Cui-Ling Hou, Hong Shen
Chin. Phys. Lett. 2017, 34 (6): 067101 . DOI: 10.1088/0256-307X/34/6/067101
We investigate the heat generation $Q$ in a quantum dot (QD), coupled to a normal metal and a superconductor, without electric bias voltage. It is found that $Q$ is quite sensitive to the lead temperatures $T_{\rm L,R}$ and the superconductor gap magnitude ${\it \Delta}$. At $T_{\rm L,R}\ll \omega_0$ ($\omega_0$ is the phonon frequency), the superconductor affects $Q$ only at ${\it \Delta} < \omega_0$, and the maximum magnitude of negative $Q$ appears at some ${\it \Delta}$ slightly smaller than $\omega_0$. At elevated lead temperature, contribution to $Q$ from the superconductor arises at ${\it \Delta}$, ranging from less than to much larger than $\omega_0$. However, the peak value of $Q$ is several times smaller than that in the case of $T_{\rm L,R}\ll \omega_0$. Interchanging lead temperatures $T_{\rm L}$ and $T_{\rm R}$ leads to quite different $Q$ behaviors, while this makes no difference for a normal-metal–quantum-dot–normal-metal system, and the QD can be cooled much more efficiently when the superconductor is colder.
The Nonlinear Electronic Transport in Multilayer Graphene on Silicon-on-Insulator Substrates
Yu-Bing Wang, Wei-Hong Yin, Qin Han, Xiao-Hong Yang, Han Ye, Shuai Wang, Qian-Qian Lv, Dong-Dong Yin
Chin. Phys. Lett. 2017, 34 (6): 067201 . DOI: 10.1088/0256-307X/34/6/067201
We conduct a study on the superlinear transport of multilayer graphene channels that partially or completely locate on silicon which is pre-etched by inductively coupled plasma (ICP). By fabricating a multilayer-graphene field-effect transistor on a Si/SiO$_{2}$ substrate, we obtain that the superlinearity results from the interaction between the multilayer graphene sheet and the ICP-etched silicon. In addition, the observed superlinear transport of the device is found to be consistent with the prediction of Schwinger's mechanism. In the high bias regime, the values of $\alpha$ increase dramatically from 1.02 to 1.40. The strength of the electric field corresponding to the on-start of electron–hole pair production is calculated to be $5\times10^{4}$ V/m. Our work provides an experimental observation of the nonlinear transport of the multilayer graphene.
Mechanisms of Spin-Dependent Heat Generation in Spin Valves
Xiao-Xue Zhang, Yao-Hui Zhu, Pei-Song He, Bao-He Li
Chin. Phys. Lett. 2017, 34 (6): 067202 . DOI: 10.1088/0256-307X/34/6/067202
The extra heat generation in spin transport is usually interpreted in terms of the spin relaxation. Reformulating the heat generation rate, we find alternative current-force pairs without cross effects, which enable us to interpret the product of each pair as a distinct mechanism of heat generation. The results show that the spin-dependent part of the heat generation includes two terms. One is proportional to the square of the spin accumulation and arises from the spin relaxation. However, the other is proportional to the square of the spin-accumulation gradient and should be attributed to another mechanism, the spin diffusion. We illustrate the characteristics of the two mechanisms in a typical spin valve with a finite nonmagnetic spacer layer.
Coulomb-Dominated Oscillations in Fabry–Perot Quantum Hall Interferometers
Yu-Ying Zhu, Meng-Meng Bai, Shu-Yu Zheng, Jie Fan, Xiu-Nian Jing, Zhong-Qing Ji, Chang-Li Yang, Guang-Tong Liu, Li Lu
Chin. Phys. Lett. 2017, 34 (6): 067301 . DOI: 10.1088/0256-307X/34/6/067301
Periodic resistance oscillations in Fabry–Perot quantum Hall interferometers are observed at integer filling factors of the constrictions, $f_{\rm c}=1$, 2, 3, 4, 5 and 6. Rather than the Aharonov–Bohm interference, these oscillations are attributed to the Coulomb interactions between interfering edge states and localized states in the central island of an interferometer, as confirmed by the observation of a positive slope for the lines of constant oscillation phase in the image plot of resistance in the $B$–$V_{\rm S}$ plane. Similar resistance oscillations are also observed when the area $A$ of the center regime and the backscattering probability of interfering edge states are varied, by changing the side-gate voltages and the configuration of the quantum point contacts, respectively. The oscillation amplitudes decay exponentially with temperature in the range of 40 mK$ < T\leq 130$ mK, with a characteristic temperature $T_{\rm 0}\sim 25$ mK, consistent with recent theoretical and experimental works.
Controlling Fusion of Majorana Fermions in One-Dimensional Systems by Zeeman Field
Lu-Bing Shao, Zi-Dan Wang, Rui Shen, Li Sheng, Bo-Gen Wang, Ding-Yu Xing
Chin. Phys. Lett. 2017, 34 (6): 067401 . DOI: 10.1088/0256-307X/34/6/067401
We propose the realization of Majorana fermions (MFs) on the edges of a two-dimensional topological insulator in the proximity with s-wave superconductors and in the presence of transverse exchange field $h$. It is shown that there appear a pair of MFs localized at two junctions and that a reverse in the direction of $h$ can lead to permutation of two MFs. With decreasing $h$, the MF states can either be fused or form one Dirac fermion on the $\pi$-junctions, exhibiting a topological phase transition. This characteristic can be used to detect physical states of MFs when they are transformed into Dirac fermions localized on the $\pi$-junction. A condition of decoupling two MFs is also given.
Synchrotron X-Ray Diffraction Studies on the New Generation Ferromagnetic Semiconductor Li(Zn,Mn)As under High Pressure
Fei Sun, Cong Xu, Shuang Yu, Bi-Juan Chen, Guo-Qiang Zhao, Zheng Deng, Wen-Ge Yang, Chang-Qing Jin
Chin. Phys. Lett. 2017, 34 (6): 067501 . DOI: 10.1088/0256-307X/34/6/067501
The pressure effect on the crystalline structure of the I–II–V semiconductor Li(Zn,Mn)As ferromagnet is studied using in situ high-pressure x-ray diffraction and diamond anvil cell techniques. A phase transition starting at $\sim$11.6 GPa is found. The space group of the high-pressure new phase is proposed as $Pmca$. Fitting with the Birch–Murnaghan equation of state, the bulk modulus $B_{0}$ and its pressure derivative $B'_0$ of the ambient pressure structure with space group of $F\bar{4}3m$ are $B_{0}=75.4$ GPa and $B'_0=4.3$, respectively.
Fast Measurement of Dielectric Conductivity for Space Application by Surface Potential Decay Method
Rong-Hui Quan, Kai Zhou, Mei-Hua Fang, Wei-Ying Chi, Zhen-Long Zhang
Chin. Phys. Lett. 2017, 34 (6): 067901 . DOI: 10.1088/0256-307X/34/6/067901
Surface potential decay of polymers for electrical insulation can help to determine the dark conductivity for spacecraft charging analysis. Due to the existence of radiation-induced conductivity, it decays fast in the first few hours after irradiation and exponentially slowly for the remaining time. The measurement of dark conductivity with this method usually takes the slow part and needs a couple of days. Integrating the Fowler formula into the deep dielectric charging equations, we obtain a new expression for the fast decay part. The experimental data of different materials, dose rates and temperatures are fitted by the new expression. Both the dark conductivity and the radiation-induced conductivity are derived and compared with other methods. The result shows a good estimation of dark conductivity and radiation-induced conductivity in high-resistivity polymers, which enables a fast measurement of dielectric conductivity within about 600 min after irradiation.
Radio-Frequency Characteristics of Partial Dielectric Removal HR-SOI and TR-SOI Substrates
Shi Cheng, Yong-Wei Chang, Nan Gao, Ye-Min Dong, Lu Fei, Xing Wei, Xi Wang
Chin. Phys. Lett. 2017, 34 (6): 068101 . DOI: 10.1088/0256-307X/34/6/068101
High-resistivity silicon-on-insulator (HR-SOI) and trap-rich high-resistivity silicon-on-insulator (TR-SOI) substrates have been widely adopted for high-performance rf integrated circuits. Radio-frequency loss and non-linearity characteristics are measured from coplanar waveguide (CPW) transmission lines fabricated on HR-SOI and TR-SOI substrates. The patterned insulator structure is introduced to reduce loss and non-linearity characteristics. A metal-oxide-semiconductor (MOS) CPW circuit model is established to expound the mechanism of reducing the parasitic surface conductance (PSC) effect by combining the semiconductor characteristic analysis (pseudo-MOS and $C$–$V$ test). The rf performance of the CPW transmission lines under dc bias supply is also compared. The TR-SOI substrate with the patterned oxide structure sample has the minimum rf loss ($ < $0.2 dB/mm up to 10 GHz), the best non-linearity performance, and reductions of 4 dB and 10 dB are compared with the state-of-the-art TR-SOI sample's, HD2 and HD3, respectively. It shows the potential application for integrating the two schemes to further suppress the PSC effect.
Efficient Visible Photoluminescence from Self-Assembled Ge QDs Embedded in Silica Matrix
Alireza Samavati, Zahra Samavati, A. F. Ismail, M. H. D. Othman, M. A. Rahman, A. K. Zulhairun
Chin. Phys. Lett. 2017, 34 (6): 068102 . DOI: 10.1088/0256-307X/34/6/068102
Measuring the growth parameters of Ge quantum dots (QDs) embedded in SiO$_{2}$/Si hetero-structure is pre-requisite for developing the optoelectronic devices such as photovoltaics and sensors. Their optical properties can be tuned by tailoring the growth morphology and structures, where the growth parameters' optimizations still need to be explored. We determine the effect of annealing temperature on surface morphology, structures and optical properties of Ge/SiO$_{2}$/Si hetero-structure. Samples are grown via rf magnetron sputtering and subsequent characterizations are made using imaging and spectroscopic techniques.
A Simple Deposition Method for Self-Assembling Single Crystalline Hybrid Perovskite Nanostructures
Wen-Rong Xie, Bin Liu, Tao Tao, Guo-Gang Zhang, Bao-Hua Zhang, Zi-Li Xie, Peng Chen, Dun-Jun Chen, Rong Zhang
Chin. Phys. Lett. 2017, 34 (6): 068103 . DOI: 10.1088/0256-307X/34/6/068103
A sequential deposition method is developed, where the hybrid organic–inorganic halide perovskite (CH$_{3}$NH$_{3}$Pb (I$_{1-x}$Br$_{x}$)$_{3}$) is synthesized using precursor solutions containing CH$_{3}$NH$_{3}$I and PbBr$_{2}$ with different mole ratios and reaction times. The perovskite achieved here is quite stable in the atmosphere for a relatively long time without noticeable degradation, and the perovskite nanowires are proved to be single crystalline structure, based on transmission electron microscopy. Furthermore, strong red photoluminescence from perovskite is observed in the wavelength range from 746 nm to 770 nm with the increase of the reaction time, on account of the exchanges between I$^{-}$ ions and Br$^{-}$ ions in the perovskite crystal. Lastly, the influences of concentration and reaction time of the precursor solutions are discussed, which are important for evolution of hybrid perovskite from nanocuboid to nanowire and nanosheet.
Allosteric Mechanism of Calmodulin Revealed by Targeted Molecular Dynamics Simulation
Qian-Yun Liang, Chun-Li Pang, Jun-Wei Li, Su-Hua Zhang, Hui Liu, Yong Zhan, Hai-Long An
Chin. Phys. Lett. 2017, 34 (6): 068701 . DOI: 10.1088/0256-307X/34/6/068701
Calmodulin (CaM) is involved in the regulation of a variety of cellular signaling pathways. To accomplish its physiological functions, CaM binds with Ca$^{2+}$ at its EF-hand Ca$^{2+}$ binding sites which induce the conformational switching of CaM. However, the molecular mechanism by which Ca$^{2+}$ binds with CaM and induces conformational switching is still obscure. Here we combine molecular dynamics with targeted molecular dynamics simulation and achieve the state-transition pathway of CaM. Our data show that Ca$^{2+}$ binding speeds up the conformational transition of CaM by weakening the interactions which stabilize the closed state. It spends about 6.5 ns and 5.25 ns for transition from closed state to open state for apo and holo CaM, respectively. Regarding the contribution of two EF-hands, our data indicate that the first EF-hand triggers the conformational transition and is followed by the second one. We determine that there are two interaction networks which contribute to stabilize the closed and open states, respectively.
Enhanced Efficiency of Metamorphic Triple Junction Solar Cells for Space Applications
Du-Xiang Wang, Ming-Hui Song, Jing-Feng Bi, Wen-Jun Chen, Sen-Lin Li, Guan-Zhou Liu, Ming-Yang Li, Chao-Yu Wu
Chin. Phys. Lett. 2017, 34 (6): 068801 . DOI: 10.1088/0256-307X/34/6/068801
Metamorphic In$_{0.55}$Ga$_{0.45}$P/In$_{0.06}$Ga$_{0.94}$As/Ge triple-junction (3J-MM) solar cells are grown on Ge (100) substrates via metal organic chemical vapor deposition. Epi-structural analyses such as high resolution x-ray diffraction, photoluminence, cathodoluminescence and HRTEM are employed and the results show that the high crystal quality of 3J-MM solar cells is obtained with low threading dislocation density of graded buffer (an average value of 6.8$\times$10$^{4}$/cm$^{2})$. Benefitting from the optimized bandgap combination, under one sun, AM0 spectrum, 25$^{\circ}\!$C conditions, the conversion efficiency is achieved about 32%, 5% higher compared with the lattice-matched In$_{0.49}$Ga$_{0.51}$P/In$_{0.01}$Ga$_{0.99}$As/Ge triple junction (3J-LM) solar cell. Under 1-MeV electron irradiation test, the degradation of the EQE and $I$–$V$ characteristics of 3J-MM solar cells is at the same level as the 3J-LM solar cell. The end-of-life efficiency is $\sim$27.1%. Therefore, the metamorphic triple-junction solar cell may be a promising candidate for next-generation space multi-junction solar cells.
Modeling Information Popularity Dynamics via Branching Process on Micro-Blog Network
Jin-Jie Li, Lian-Ren Wu, Jia-Yin Qi, Qi-Ming Sun
Chin. Phys. Lett. 2017, 34 (6): 068901 . DOI: 10.1088/0256-307X/34/6/068901
Predicting and modeling of items popularity on web 2.0 have attracted great attention of many scholars. From the perspective of information competition, we propose a probabilistic model using the branching process to characterize the process in which micro-blogging gains its popularity. The model is analytically tractable and can reproduce several characteristics of empirical micro-blogging data on Sina micro-blog, the most popular micro-blogging system in China. We find that the information competition on micro-blog network leads to the decay of information popularity obeying power law distribution with exponent about 1.5, and the value is similar to the exponent of degree distribution of micro-blog network. Furthermore, the mean popularity is decided by the probability of innovating a new message. Our work presents evidence supporting the idea that two distinct factors affect information popularity: information competition and social network structure.
User Heterogeneity and Individualized Recommender
Qing-Xian Wang, Jun-Jie Zhang, Xiao-Yu Shi, Ming-Sheng Shang
Chin. Phys. Lett. 2017, 34 (6): 068902 . DOI: 10.1088/0256-307X/34/6/068902
Previous works on personalized recommendation mostly emphasize modeling peoples' diversity in potential favorites into a uniform recommender. However, these recommenders always ignore the heterogeneity of users at an individual level. In this study, we propose an individualized recommender that can satisfy every user with a customized parameter. Experimental results on four benchmark datasets demonstrate that the individualized recommender can significantly improve the accuracy of recommendation. The work highlights the importance of the user heterogeneity in recommender design.
Viscous Modified Chaplygin Gas in Classical and Loop Quantum Cosmology
D. Aberkane, N. Mebarki, S. Benchikh
Chin. Phys. Lett. 2017, 34 (6): 069801 . DOI: 10.1088/0256-307X/34/6/069801
We investigate the cosmological model of viscous modified Chaplygin gas (VMCG) in classical and loop quantum cosmology (LQC). Firstly, we constrain its equation of state parameters in the framework of standard cosmology from Union 2.1 SNe Ia data. Then, we probe the dynamical stability of this model in a universe filled with VMCG and baryonic fluid in the LQC background. It is found that the model fits the data well ($\chi^{2}/\mathrm{d.o.f.}=0.974$) and gives good predictions of the current values of the deceleration parameter $q_{0}\in(-0.60,-0.57)$ and the effective state parameter $\omega_{\rm eff}\in(-0.76,-0.74)$, which are consistent with the recent observational data. The model can also predict the crossing time when $\rho_{\rm DE}\approx\rho_{\rm matter}$ at $z=0.75$ and can solve the coincidence problem. In the LQC background, the Big Bang singularity found in classical cosmology ceases to exist and is replaced by a bounce when the Hubble parameter vanishes at $\rho_{\rm tot}\approx \rho_{\rm c}$.
34 articles |
75850deb65dd0608 |
The calculation of the electronic structure of chemical systems necessitates computationally expensive approximations to the time-independent electronic Schrödinger equation in order to yield static properties in good agreement with experimental results. These methods can also be coupled with molecular dynamics, to provide a first principles description of thermodynamic properties, dynamics and chemical reactions. Evidently, the cost of the underlying electronic structure method limits the time frame over which a system can be studied, and hence, certain chemical processes may be out of reach using a particular method. Furthermore, when one is interested in designing new molecules with interesting properties, a systematic enumeration of an inordinately large chemical space is typically required. The combination of both expensive electronic structure calculations and large chemical spaces results in an insurmountable barrier in computational cost. The application of artificial intelligence (AI) in computational chemistry has, over the past 20 years, seen an explosion in interest and scope with respect to these two issues. Intelligent algorithms capable of efficiently sampling chemical spaces, coupled with machine learning (ML) techniques to cheapen the calculation of electronic structure evaluations, enable both rapid throughput to search for new molecules with particular properties and, in the case of ML, an increase in the timescales that can be simulated via molecular dynamics. In this thesis, computer programs have been developed that enable the application of AI algorithms to chemical and biological problems. In particular, a versatile evolutionary algorithm toolbox called EVOLVE has been developed. As a first test-case study, genetic algorithms were used to efficiently sample the vast chemical sequence space of an isolated α-helical peptide, from which insights are gained to rationalise the stability of particular genetically optimised peptides in a variety of implicit solvent environments. Genetic algorithms were then applied to the compositional optimisation of training sets used in machine learning models of molecular properties. The resulting optimal training sets are shown to significantly reduce out-of-sample errors on all thermodynamic and electronic properties considered. Furthermore, they reveal that there are systematic trends in the distribution of these optimally-representative molecules. Inspired by the success of machine learning, an ML-enhanced multiple time step approach for performing accurate ab initio molecular dynamics was developed. Two schemes representing different force partitioning were investigated. In the first scheme, the ML method provides an estimation of the slow (high level) force components acting on a system, while in the second, the ML forces are added to the fast (low level) components and a high level ab initio method is used to correct the error induced by the ML model. In both schemes significant overall speedups are obtained with respect to standard Velocity-Verlet integration, all the while maintaining the accuracy of the high level ab initio method. |
e6be2e38b64251db | Heat equation
In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region.
Animated plot of the evolution of the temperature in a square metal plate as predicted by the heat equation. The height and redness indicate the temperature at each point. The initial state has a uniformly hot hoof-shaped region (red) surrounded by uniformly cold region (yellow). As time passes the heat diffuses into the cold region.
As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem.[1]
The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The infamous Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of "artificial viscosity" methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr.
Statement of the equation
In mathematics, if given an open subset U of $\mathbb{R}^n$ and a subinterval I of $\mathbb{R}$, one says that a function u : U × I → ℝ is a solution of the heat equation if
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x_1^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2},$$
where (x1, ..., xn, t) denotes a general point of the domain. It is typical to refer to t as "time" and x1, ..., xn as "spatial variables," even in abstract contexts where these phrases fail to have their intuitive meaning. The collection of spatial variables is often referred to simply as x. For any given value of t, the right-hand side of the equation is the Laplacian of the function u(⋅, t) : U → ℝ. As such, the heat equation is often written more compactly as
$$\frac{\partial u}{\partial t} = \Delta u.$$
In physics and engineering contexts, especially in the context of diffusion through a medium, it is more common to fix a Cartesian coordinate system and then to consider the specific case of a function u(x, y, z, t) of three spatial variables (x, y, z) and time variable t. One then says that u is a solution of the heat equation if
$$\frac{\partial u}{\partial t} = \alpha\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),$$
in which α is a positive coefficient called the diffusivity of the medium. In addition to other physical phenomena, this equation describes the flow of heat in a homogeneous and isotropic medium, with u(x, y, z, t) being the temperature at the point (x, y, z) and time t. If the medium is not homogeneous and isotropic, then α would not be a fixed coefficient, and would instead depend on (x, y, z); the equation would also have a slightly different form. In the physics and engineering literature, it is common to use $\nabla^2$ to denote the Laplacian, rather than $\Delta$.
In mathematics as well as in physics and engineering, it is common to use Newton's notation for time derivatives, so that $\dot u$ is used to denote $\partial u/\partial t$. Note also that the ability to use either $\Delta$ or $\nabla^2$ to denote the Laplacian, without explicit reference to the spatial variables, is a reflection of the fact that the Laplacian is independent of the choice of coordinate system. In mathematical terms, one would say that the Laplacian is "translationally and rotationally invariant." In fact, it is (loosely speaking) the simplest differential operator which has these symmetries. This can be taken as a significant (and purely mathematical) justification of the use of the Laplacian and of the heat equation in modeling any physical phenomena which are homogeneous and isotropic, of which heat diffusion is a principal example.
The "diffusivity constant" α is often not present in mathematical studies of the heat equation, while its value can be very important in engineering. This is not a major difference, for the following reason. Let u be a function with
$$\frac{\partial u}{\partial t} = \alpha\,\Delta u.$$
Define a new function $v(x, t) = u(x, t/\alpha)$. Then, according to the chain rule, one has
$$\frac{\partial v}{\partial t}(x, t) = \frac{1}{\alpha}\,\frac{\partial u}{\partial t}\!\left(x, \frac{t}{\alpha}\right) = \Delta u\!\left(x, \frac{t}{\alpha}\right) = \Delta v(x, t).$$
Thus, there is a straightforward way of translating between solutions of the heat equation with a general value of α and solutions of the heat equation with α = 1. As such, for the sake of mathematical analysis, it is often sufficient to only consider the case α = 1.
Since $\alpha > 0$, there is another option to define a function $v$ satisfying $\partial v/\partial t = \Delta v$ as above, by setting $v(x, t) = u(\sqrt{\alpha}\,x, t)$. Note that the two possible means of defining the new function $v$ discussed here amount, in physical terms, to changing the unit of measure of time or the unit of measure of length.
Physical interpretation of the equation
Informally, the Laplacian operator gives the difference between the average value of a function in the neighborhood of a point, and its value at that point. Thus, if u is the temperature, $\Delta u$ tells whether (and by how much) the material surrounding each point is hotter or colder, on the average, than the material at that point.
By the second law of thermodynamics, heat will flow from hotter bodies to adjacent colder bodies, in proportion to the difference of temperature and of the thermal conductivity of the material between them. When heat flows into (respectively, out of) a material, its temperature increases (respectively, decreases), in proportion to the amount of heat divided by the amount (mass) of material, with a proportionality factor called the specific heat capacity of the material.
By the combination of these observations, the heat equation says that the rate at which the material at a point will heat up (or cool down) is proportional to how much hotter (or cooler) the surrounding material is. The coefficient α in the equation takes into account the thermal conductivity, the specific heat, and the density of the material.
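A discrete analogue makes this reading concrete. The sketch below (illustrative only: the grid, time step, diffusivity value, and initial profile are arbitrary choices, not taken from the text) nudges each grid point toward the average of its neighbours at a rate set by α, which is exactly an explicit finite-difference step for the one-dimensional heat equation.

import numpy as np

# Illustrative explicit finite-difference step for du/dt = alpha * d2u/dx2.
alpha = 1.0           # diffusivity (arbitrary value for the demo)
dx, dt = 0.01, 4e-5   # grid spacing and time step; dt <= dx**2 / (2 * alpha) keeps it stable

x = np.arange(0.0, 1.0 + dx, dx)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # hot block in the middle of a cold rod

def step(u):
    # Each interior point moves toward the mean of its two neighbours.
    u_new = u.copy()
    u_new[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new

for _ in range(2000):
    u = step(u)

# The sharp hot block has spread out and its peak has dropped: exactly the
# smoothing behaviour described above.
print(round(float(u.max()), 3))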
Mathematical interpretation of the equation
The first half of the above physical thinking can be put into a mathematical form. The key is that, for any fixed x, one has
$$\Delta u(x) = \lim_{r \to 0} \frac{2n}{r^2}\left(u^{(x)}(r) - u(x)\right),$$
where u^{(x)}(r) is the single-variable function denoting the average value of u over the surface of the sphere of radius r centered at x; it can be defined by
$$u^{(x)}(r) = \frac{1}{\omega_{n-1} r^{n-1}} \int_{\partial B(x, r)} u \, d\sigma,$$
in which $\omega_{n-1}$ denotes the surface area of the unit ball in n-dimensional Euclidean space. This formalizes the above statement that the value of $\Delta u$ at a point x measures the difference between the value of u(x) and the value of u at points nearby to x, in the sense that the latter is encoded by the values of u^{(x)}(r) for small positive values of r.
Following this observation, one may interpret the heat equation as imposing an infinitesimal averaging of a function. Given a solution of the heat equation, the value of u(x, t + τ) for a small positive value of τ may be approximated as 1/2n times the average value of the function u(⋅, t) over a sphere of very small radius centered at x.
Character of the solutions
Solution of a 1D heat partial differential equation. The temperature (u) is initially distributed over a one-dimensional, one-unit-long interval (x = [0,1]) with insulated endpoints. The distribution approaches equilibrium over time.
The behavior of temperature when the sides of a 1D rod are at fixed temperatures (in this case, 0.8 and 0 with initial Gaussian distribution). The temperature approaches a linear function because that is the stable solution of the equation: wherever temperature has a nonzero second spatial derivative, the time derivative is nonzero as well.
The heat equation implies that peaks (local maxima) of u will be gradually eroded down, while depressions (local minima) will be filled in. The value at some point will remain stable only as long as it is equal to the average value in its immediate surroundings. In particular, if the values in a neighborhood are very close to a linear function, then the value at the center of that neighborhood will not be changing at that time (that is, the time derivative will be zero).
A more subtle consequence is the maximum principle, which says that the maximum value of u in any region R of the medium will not exceed the maximum value that previously occurred in R, unless it is on the boundary of R. That is, the maximum temperature in a region R can increase only if heat comes in from outside R. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below).
Another interesting property is that even if u initially has a sharp jump (discontinuity) of value across some surface inside the medium, the jump is immediately smoothed out by a momentary, infinitesimally short but infinitely large rate of flow of heat through that surface. For example, if two isolated bodies, initially at uniform but different temperatures $u_0$ and $u_1$, are made to touch each other, the temperature at the point of contact will immediately assume some intermediate value, and a zone will develop around that point where u will gradually vary between $u_0$ and $u_1$.
If a certain amount of heat is suddenly applied to a point in the medium, it will spread out in all directions in the form of a diffusion wave. Unlike the elastic and electromagnetic waves, the speed of a diffusion wave drops with time: as it spreads over a larger region, the temperature gradient decreases, and therefore the heat flow decreases too.
Specific examples
Heat flow in a uniform rod
For heat flow, the heat equation follows from the physical laws of conduction of heat and conservation of energy (Cannon 1984).
By Fourier's law for an isotropic medium, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across it:
$$\mathbf{q} = -k\,\nabla u,$$
where k is the thermal conductivity of the material, u is the temperature, and $\mathbf{q}$ is a vector field that represents the magnitude and direction of the heat flow at the point x of space and time t.
If the medium is a thin rod of uniform section and material, the position is a single coordinate x, the heat flow towards increasing x is a scalar field q = q(x, t), and the gradient is an ordinary derivative with respect to x. The equation becomes
$$q = -k\,\frac{\partial u}{\partial x}.$$
Let Q = Q(x, t) be the internal heat energy per unit volume of the bar at each point and time. In the absence of heat energy generation from external or internal sources, the rate of change in internal heat energy per unit volume in the material, $\partial Q/\partial t$, is proportional to the rate of change of its temperature, $\partial u/\partial t$. That is,
$$\frac{\partial Q}{\partial t} = c\,\rho\,\frac{\partial u}{\partial t},$$
where c is the specific heat capacity (at constant pressure, in case of a gas) and ρ is the density (mass per unit volume) of the material. This derivation assumes that the material has constant mass density and heat capacity through space as well as time.
Applying the law of conservation of energy to a small element of the medium centered at x, one concludes that the rate at which heat accumulates at a given point x is equal to the derivative of the heat flow at that point, negated. That is,
$$\frac{\partial Q}{\partial t} = -\frac{\partial q}{\partial x}.$$
From the above equations it follows that
$$\frac{\partial u}{\partial t} = -\frac{1}{c\rho}\,\frac{\partial q}{\partial x} = \frac{k}{c\rho}\,\frac{\partial^2 u}{\partial x^2} = \alpha\,\frac{\partial^2 u}{\partial x^2},$$
which is the heat equation in one dimension, with diffusivity coefficient
$$\alpha = \frac{k}{c\rho}.$$
This quantity is called the thermal diffusivity of the medium.
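To get a feel for typical magnitudes, α can be evaluated directly from the defining formula; the property values below are rough room-temperature figures assumed for the sake of the example, not taken from the text.

# Thermal diffusivity alpha = k / (c * rho), evaluated for two example materials.
materials = {
    # name: (k [W/(m K)], c [J/(kg K)], rho [kg/m^3]) -- approximate handbook-style values
    "copper": (400.0, 385.0, 8960.0),
    "water": (0.6, 4180.0, 1000.0),
}

for name, (k, c, rho) in materials.items():
    alpha = k / (c * rho)   # units: m^2/s
    print(f"{name}: alpha ~ {alpha:.2e} m^2/s")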
Accounting for radiative loss
An additional term may be introduced into the equation to account for radiative loss of heat. According to the Stefan–Boltzmann law, this term is $\mu\left(u^4 - v^4\right)$, where v is the temperature of the surroundings, and μ is a coefficient that depends on physical properties of the material. The rate of change in internal energy becomes
$$\frac{\partial Q}{\partial t} = -\frac{\partial q}{\partial x} - \mu\left(u^4 - v^4\right),$$
and the equation for the evolution of u becomes
$$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2} - \frac{\mu}{c\rho}\left(u^4 - v^4\right).$$
Non-uniform isotropic medium
where is the volumetric heat source.
Three-dimensional problem
In the special cases of propagation of heat in an isotropic and homogeneous medium in a 3-dimensional space, this equation is
$$\frac{\partial u}{\partial t} = \alpha\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),$$
where:
• u = u(x, y, z, t) is temperature as a function of space and time;
• $u_{xx}$, $u_{yy}$, and $u_{zz}$ are the second spatial derivatives (thermal conductions) of temperature in the x, y, and z directions, respectively;
• $\alpha = k/(c_p \rho)$ is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity k, the specific heat capacity $c_p$, and the mass density ρ.
If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume additional conditions, for example an exponential bound on the growth of solutions[2] or a sign condition (nonnegative solutions are unique by a result of David Widder).[3]
Internal heat generation
Solving the heat equation using Fourier series
The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is
$$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2}, \qquad (1)$$
where u = u(x, t) is a function of two variables x and t. Here:
• x is the space variable, so x ∈ [0, L], where L is the length of the rod;
• t is the time variable, so t ≥ 0.
We assume the initial condition
$$u(x, 0) = f(x) \qquad \text{for all } x \in [0, L], \qquad (2)$$
where the function f is given, and the boundary conditions
$$u(0, t) = 0 = u(L, t) \qquad \text{for all } t > 0. \qquad (3)$$
This solves the heat equation in the special case that the dependence of u has the special form (4), i.e. the separated form $u(x, t) = X(x)\,T(t)$. In general, the sum of solutions to (1) that satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by
$$u(x, t) = \sum_{n=1}^{\infty} D_n \sin\!\left(\frac{n\pi x}{L}\right) \exp\!\left(-\frac{n^2 \pi^2 \alpha t}{L^2}\right), \qquad \text{where} \quad D_n = \frac{2}{L}\int_0^L f(x)\,\sin\!\left(\frac{n\pi x}{L}\right) dx.$$
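A short numerical sketch of this series solution follows; the rod length, diffusivity, initial profile, and truncation order are arbitrary choices made for illustration.

import numpy as np

L_rod, alpha, n_terms = 1.0, 1.0, 50
x = np.linspace(0.0, L_rod, 201)
dx = x[1] - x[0]
f = x * (L_rod - x)   # example initial profile with f(0) = f(L) = 0

def coeff(n):
    # D_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, via the trapezoid rule.
    g = f * np.sin(n * np.pi * x / L_rod)
    return 2.0 / L_rod * float(np.sum(0.5 * (g[1:] + g[:-1])) * dx)

def u(t):
    # Truncated series: sum_n D_n sin(n pi x / L) exp(-n^2 pi^2 alpha t / L^2).
    total = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        decay = np.exp(-(n * np.pi / L_rod) ** 2 * alpha * t)
        total += coeff(n) * np.sin(n * np.pi * x / L_rod) * decay
    return total

print(float(np.abs(u(0.0) - f).max()))   # near zero: the series reproduces f at t = 0
print(float(u(0.1).max()))               # the profile decays as t grows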
Generalizing the solution technique
The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator $u_{xx}$ with the zero boundary conditions can be represented in terms of its eigenfunctions. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators.
The functions $\sin\!\left(\frac{n\pi x}{L}\right)$ for n ≥ 1 are eigenfunctions of Δ. Indeed,
$$\Delta \sin\!\left(\frac{n\pi x}{L}\right) = -\frac{n^2 \pi^2}{L^2}\,\sin\!\left(\frac{n\pi x}{L}\right).$$
Heat conduction in non-homogeneous anisotropic media
Putting these equations together gives the general equation of heat flow:
$$c\,\rho\,\frac{\partial u}{\partial t} = \nabla \cdot \left(\mathbf{A}\,\nabla u\right),$$
• In the case of an isotropic medium, the matrix A is a scalar matrix equal to thermal conductivity k.
Fundamental solutions
In one variable, the Green's function is a solution of the initial value problem (by Duhamel's principle, equivalent to the definition of Green's function as one with a delta function as solution to the first equation)
$$\begin{cases} \dfrac{\partial u}{\partial t}(x, t) = \alpha\,\dfrac{\partial^2 u}{\partial x^2}(x, t), & -\infty < x < \infty,\; 0 < t < \infty, \\ u(x, 0) = \delta(x), \end{cases}$$
where δ is the Dirac delta function. The solution to this problem is the fundamental solution (heat kernel)
$$\Phi(x, t) = \frac{1}{\sqrt{4\pi\alpha t}}\,\exp\!\left(-\frac{x^2}{4\alpha t}\right).$$
In several spatial variables, the fundamental solution solves the analogous problem with $u(\mathbf{x}, 0) = \delta(\mathbf{x})$, and is given by
$$\Phi(\mathbf{x}, t) = \frac{1}{(4\pi\alpha t)^{n/2}}\,\exp\!\left(-\frac{|\mathbf{x}|^2}{4\alpha t}\right).$$
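As an illustration of how the kernel is used (the grid, time, and initial profile are arbitrary choices made for the example), the solution of the initial value problem on the whole line can be approximated by convolving the initial data with Φ:

import numpy as np

alpha = 1.0
x = np.linspace(-5.0, 5.0, 801)
dx = x[1] - x[0]
u0 = np.where(np.abs(x) < 0.5, 1.0, 0.0)   # example initial temperature: a hot block

def heat_kernel(x, t, alpha=1.0):
    # Fundamental solution of the 1D heat equation.
    return np.exp(-x**2 / (4.0 * alpha * t)) / np.sqrt(4.0 * np.pi * alpha * t)

def solve(t):
    # u(x, t) = integral of Phi(x - y, t) * u0(y) dy, approximated on the grid.
    return np.array([np.sum(heat_kernel(xi - x, t, alpha) * u0) * dx for xi in x])

u = solve(0.05)
print(round(float(np.sum(u) * dx), 3))   # total heat is conserved (close to sum(u0) * dx = 1.0)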
The general problem on a domain Ω in $\mathbb{R}^n$ is
Some Green's function solutions in 1D
A variety of elementary Green's function solutions in one dimension are recorded here; many others are available elsewhere.[7] In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation
$$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2} + f(x, t),$$
where f is some given function of x and t.
Homogeneous heat equation
Initial value problem on (−∞,∞)
Fundamental solution of the one-dimensional heat equation. Red: time course of . Blue: time courses of for two selected points x0 = 0.2 and x0 = 1. Note the different rise times/delays and amplitudes.
Depicted is a numerical solution of the nonhomogeneous heat equation. The equation has been solved with 0 initial and boundary conditions and a source term representing a stove top burner.
Inhomogeneous heat equation
Problem on (-∞,∞) homogeneous initial conditions
which expressed in the language of distributions becomes
For example, to solve
Similarly, to solve
Mean-value property for the heat equation
Solutions of the heat equations
though a bit more complicated. Precisely, if u solves
Notice that
as λ → ∞ so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough.[8] This can be shown by an argument similar to the analogous one for harmonic functions.
Steady-state heat equation
Steady-state condition:
$$\frac{\partial u}{\partial t} = 0.$$
Particle diffusion
One can model particle diffusion by an equation involving either:
In either case, one uses the heat equation
Brownian motion
Let the stochastic process be the solution of the stochastic differential equation
which is the solution of the initial value problem
where $\delta$ is the Dirac delta function.
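The connection between Brownian motion and the heat equation can be illustrated numerically: the empirical density of many independent Brownian paths started at the origin approaches a Gaussian heat kernel. The sketch below is a rough simulation with arbitrary step counts and sample sizes; with the scaling used here the density solves ∂p/∂t = ½ ∂²p/∂x².

import numpy as np

rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 200, 100_000
dt = t / n_steps

# Simulate standard Brownian motion started at 0 by summing Gaussian increments.
X = np.zeros(n_paths)
for _ in range(n_steps):
    X += rng.normal(0.0, np.sqrt(dt), size=n_paths)

# Compare the empirical density at time t with the corresponding Gaussian kernel.
hist, edges = np.histogram(X, bins=np.linspace(-3.0, 3.0, 61), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
kernel = np.exp(-centers**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
print(round(float(np.abs(hist - kernel).max()), 3))   # small, and shrinks as n_paths grows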
Schrödinger equation for a free particle
Thermal diffusivity in polymers
Further applications
See also
1. ^ Berline, Nicole; Getzler, Ezra; Vergne, Michèle. Heat kernels and Dirac operators. Grundlehren der Mathematischen Wissenschaften, 298. Springer-Verlag, Berlin, 1992. viii+369 pp. ISBN 3-540-53340-0
3. ^ John, Fritz (1991-11-20). Partial Differential Equations. Springer Science & Business Media. p. 222. ISBN 978-0-387-90609-6.
5. ^ Juan Luis Vazquez (2006-12-28), The Porous Medium Equation: Mathematical Theory, Oxford University Press, USA, ISBN 978-0-19-856903-9
• Cannon, John Rozier (1984), The one–dimensional heat equation, Encyclopedia of Mathematics and its Applications, 23, Reading, MA: Addison-Wesley Publishing Company, Advanced Book Program, ISBN 0-201-13522-1, MR 0747979, Zbl 0567.35001
• Carslaw, H.S.; Jaeger, J.C. (1988), Conduction of heat in solids, Oxford Science Publications (2nd ed.), New York: The Clarendon Press, Oxford University Press, ISBN 978-0-19-853368-9
• Cole, Kevin D.; Beck, James V.; Haji-Sheikh, A.; Litkouhi, Bahan (2011), Heat conduction using Green's functions, Series in Computational and Physical Processes in Mechanics and Thermal Sciences (2nd ed.), Boca Raton, FL: CRC Press, ISBN 978-1-43-981354-6
• Evans, Lawrence C. (2010), Partial Differential Equations, Graduate Studies in Mathematics, 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3
• Friedman, Avner (1964), Partial differential equations of parabolic type, Englewood Cliffs, N.J.: Prentice-Hall, Inc.
• Widder, D.V. (1975), The heat equation, Pure and Applied Mathematics, 67, New York-London: Academic Press [Harcourt Brace Jovanovich, Publishers]
• Wilmott, Paul; Howison, Sam; Dewynne, Jeff (1995), The mathematics of financial derivatives. A student introduction, Cambridge: Cambridge University Press, ISBN 0-521-49699-3
External links |
205d1a8e871e5b36 | 10.1 Creating a GUI tool step by step
10.2 Further GUI design considerations
Let's suppose we have developed a model for a system we want to study. The system could be a physical system modeled by the relevant physical laws, e.g., Newton's second law of motion, the Schrödinger equation, Kirchhoff's circuit laws, Maxwell's equations, or chemical reaction-rate equations. It could also be a financial, sociological, or economic system model. In any case, the model needs to be expressed in a mathematically precise form. The model will consist of mathematical relations—these could be algebraic equations, differential equations, matrix algebra, and so forth. The model will also have some parameters that characterize a particular system, e.g., masses, voltages, concentrations, spring constants, or initial velocities.
The first task is to capture the mathematical model in a computational model expressed as a MATLAB program. This can often be a MATLAB script or function, which may call other user-written functions. Going from a model expressed mathematically to a computational model is itself challenging and involves some careful thinking about what one knows at the outset, what one doesn’t know, and how the relevant mathematics can be used to connect the two. The mathematical model must be transformed into a computational algorithm—a series of well-defined ...
|
a707605f38a31e56 | I am trying to understand the working principle of basis sets in quantum chemistry. As far as I understand in the Hartree-Fock method, the basis set is used to calculate the matrix representation of Fock operator in each iteration in the SCF procedure. Usually each basis function in a chosen basis set is represented as a linear combination of several primitive Gaussian type orbital (GTO). I want to use cc-PVDZ basis for C atom which I can take from the EMSL Basis Set Exchange Library. For example in the Gaussian format:
! cc-pVDZ EMSL Basis Set Exchange Library 10/19/17 2:40 AM
! Elements References
! H : T.H. Dunning, Jr. J. Chem. Phys. 90, 1007 (1989).
! He : D.E. Woon and T.H. Dunning, Jr. J. Chem. Phys. 100, 2975 (1994).
! Li - Ne: T.H. Dunning, Jr. J. Chem. Phys. 90, 1007 (1989).
! Na - Mg: D.E. Woon and T.H. Dunning, Jr. (to be published)
! Al - Ar: D.E. Woon and T.H. Dunning, Jr. J. Chem. Phys. 98, 1358 (1993).
! Sc - Zn: N.B. Balabanov and K.A. Peterson, J. Chem. Phys. 123, 064107 (2005),
! N.B. Balabanov and K.A. Peterson, J. Chem. Phys. 125, 074110 (2006)
! Ca : J. Koput and K.A. Peterson, J. Phys. Chem. A, 106, 9595 (2002).
C 0
S 8 1.00
6665.0000000 0.0006920
1000.0000000 0.0053290
228.0000000 0.0270770
64.7100000 0.1017180
21.0600000 0.2747400
7.4950000 0.4485640
2.7970000 0.2850740
0.5215000 0.0152040
S 8 1.00
6665.0000000 -0.0001460
1000.0000000 -0.0011540
228.0000000 -0.0057250
64.7100000 -0.0233120
21.0600000 -0.0639550
7.4950000 -0.1499810
2.7970000 -0.1272620
0.5215000 0.5445290
S 1 1.00
0.1596000 1.0000000
P 3 1.00
9.4390000 0.0381090
2.0020000 0.2094800
0.5456000 0.5085570
P 1 1.00
0.1517000 1.0000000
D 1 1.00
0.5500000 1.0000000
1. In the lines such as S 8 1.00 I know that the letter S is the orbital type, 8 is the number of GTO in this orbital's expansion, but what is 1.00 for?
2. The GTO for a p-type orbital has the form $x^l y^m z^n \exp(-ar^2)$ where the powers satisfy $l+m+n=1$ and thus there are three possible sets of $\{l,m,n\}$. But for instance in that link for lines with p-type orbitals, which possibilities shall I pick? The same question for d-type orbitals.
• The link is broken. I have used my crystal ball to guess that you looked at a Gaussian94-style listing. In this case, the 1.0 refers to a sort of global scaling factor. In general, see the reference of the program whose format you use for a description of the basis set listing format. In terms of which p-orbitals to pick: all of them. You never want just one - imagine your molecule rotates. – TAR86 Oct 19 '17 at 8:50
• Yes you are right it's Gaussian94 style for cc-PVDZ basis set for C atom. Using example in that link, there are 8 listed orbitals, 3 of them p, one of them d, the rest is s. Does this mean there are in total 4+(3x3)+(1x6)=19 orbitals to be used to represent the Fock operator? – nougako Oct 19 '17 at 9:01
• Your calculation is correct for "cartesian" gaussians (noted as 6D in Gaussian03 etc.). Many programs do linear combinations of the cartesian gaussians to yield "spherical" gaussians, that is, 5 d functions and one s function (which I imagine has nodal planes). The s function is then effectively discarded to save computational time. – TAR86 Oct 19 '17 at 12:15
• @nougako I think you confused Dunning's cc-pCVDZ basis set with the cc-pVDZ basis set. Martin has posted the cc-pVDZ basis set as you said, but from your comment I assume you were looking at the core-polarized double zeta basis set (which has an additional sp shell). Could you state which basis set you intended to use? – awvwgk Nov 9 '17 at 19:01
• $\begingroup$ I intended to use cc-pVDZ for C atom. It's been a while now and I don't remember why I said there are 8 orbitals there while it seems like there are only 6 in my previous comment. $\endgroup$ – nougako Nov 11 '17 at 2:16
Have a look in the Gaussian manual if something is unclear.
If you want to use an external basis set in Gaussian you have to respect the Gaussian input conventions for the basis set, which you can find here (gaussian.com/gen/). Since you are using the EMSL Basis Set Exchange, the server gives you a file that satisfies these conventions. Like:
Type NGauss Sc
$\alpha_1\quad d_{1\mu}$
$\alpha_2\quad d_{2\mu}$
$\vdots$
$\alpha_N\quad d_{N\mu}$
Type stands for the shell, then you have the number of primitives NGauss and the scaling factor Sc. Afterwards you put the exponents and coefficients of the primitive Gaussian functions, which will then be contracted to the respective contracted Gaussian functions. You might notice that the scaling factor is set to unity for almost all basis sets; I checked some Dunning, Pople and Ahlrichs basis sets and always found it set to unity, which seems reasonable. In the TURBOMOLE format the scaling factor is omitted entirely, so it seems to be a Gaussian-specific hack.
Now to your second question. You are specifying the shell, not a single function; a shell always contains all spherical harmonics belonging to the given quantum number, or all cartesian Gaussian functions. The spherical harmonics are the solutions of the Schrödinger equation for an electron on a spherical surface. You generally have $2l+1$ spherical harmonics, where $l$ is the azimuthal quantum number specified as 0/s, 1/p, 2/d, 3/f, 4/g, 5/h and so on. There also exists the possibility to use cartesian Gaussians with $(l+1)(l+2)/2$ degenerate functions.
Why am I telling you that? You always choose a whole shell with a given azimuthal quantum number; how many functions belong to this shell depends on the quantum chemistry code you are using. AFAIK, Gaussian supports both.
In case of a p-shell it does not matter, both expressions evaluate to three p-functions (as expected). So you get three p-functions with one p-shell.
The basis set you have chosen would be evaluated as:
(9s4p1d) → [3s2p1d]
Having nine primitive s-type Gaussians (note that Dunning basis sets use a general contraction scheme), four primitive p-type Gaussians and one primitive d-type Gaussian, which form a basis set of three s shells, two p shells and one d shell with 14 contracted spherical-harmonic Gaussian functions or 15 contracted cartesian Gaussian functions for carbon.
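To make the counting explicit, here is a small illustrative Python sketch (not tied to Gaussian or any other program; the shell list is simply the [3s2p1d] contraction discussed above) that reproduces the 14 spherical vs. 15 cartesian function count:

```python
# Illustrative sketch only: count contracted basis functions for a shell list.
shells = ["S", "S", "S", "P", "P", "D"]   # [3s2p1d] contraction of cc-pVDZ for C
ANGULAR_MOMENTUM = {"S": 0, "P": 1, "D": 2, "F": 3}

def count_functions(shells, spherical=True):
    total = 0
    for shell in shells:
        l = ANGULAR_MOMENTUM[shell]
        # 2l+1 spherical harmonics per shell, or (l+1)(l+2)/2 cartesian components
        total += 2 * l + 1 if spherical else (l + 1) * (l + 2) // 2
    return total

print(count_functions(shells, spherical=True))   # 14 spherical functions
print(count_functions(shells, spherical=False))  # 15 cartesian functions
```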
|
73a04d35b509da6e | Imaginary and Complex Numbers
Let’s start with the following calculation: $-2 = (-8)^{1/3} = (-8)^{2/6} = ((-8)^2)^{1/6} = 64^{1/6} = 2$.
This must be wrong… but I don’t see why!
The brutal answer would be that you should never write $a^p$ when $a$ is negative… But, this would conceal some wonderful underlying mathematics, including and especially the weirdness and awesomeness of imaginary and complex numbers! Let’s discover these mathematical objects in this article! Our approach will be highly geometric and, I think, much more insightful than the one you have learned (or will learn) at school.
The Geometry of $n$-th Root
To understand what just happened, let’s focus on the first equality, namely $-2 = \sqrt[3]{-8}$. It reads “$-2$ is the cube root of $-8$”. But what does that mean?
Doesn’t it mean that $(-2)^3 = -8$?
Yes! But, let’s have a geometrical understanding of what you’ve just said! To do so, notice that if we multiply both sides of the equality by any number, then the equality still holds. Indeed, if we multiply by $1$ both sides we obtain $1 \times (-2)^3 = 1 \times (-8)$, and if we multiply by $241$, we have $241 \times (-2)^3 = 241 \times (-8)$. More generally, if $x$ represents any number, we have $x \times (-2)^3 = x \times (-2) \times (-2) \times (-2) = x \times (-8)$. Thus, we can now see the operation “$\times (-2)$” as an operation on numbers, which, when applied three times, is equivalent to the operation “$\times (-8)$”!
I’m still not sure where you’re going with it…
Here’s the awesome part. These operations correspond to geometrical transformations made to the number line (they’re symmetries)! For instance, multiplying by $(-2)$ corresponds to inverting it, and stretching it by a factor 2. This is what’s done below three times!
Multiplication by -2
It might look nice, but I don’t see the point of the geometrical approach…
Be patient! Let’s simplify the problem a little bit and consider the equality $-1 = (-1)^{1/3}$. What does it mean geometrically?
Well, I guess that “$\times (-1)$” corresponds to just inverting the number line, doesn’t it?
Yes! Now, this means that we can translate the algebraic relation $-1=(-1)^{1/3}$ by the geometrical phrase: Inverting the number line three times is equivalent to inverting it once. Sweet, isn’t it?
Multiplication by -1
Yes! But I still don’t see the point…
Hehe! The key idea of complex numbers lies in the next question… Is $(-1)$ the only one cube root of $(-1)$?
I think so… If you take a positive number, its cube will be positive… So the only number that works is $-1$…
Don’t think numerically! The whole point of my construction was to consider the problem geometrically!
In other words, is there a geometrical operation on the number line, which, when applied three times, corresponds to inverting it?
Come on! You can find it!
I know! How about rotating the number line by a 6th of a turn?
In fact, there are two such 6th-of-a-turn operations, depending on whether the turn is clockwise or anti-clockwise. Below are described these two operations, each applied three times to the number line.
Sixth of a Turn
In addition to the “$\times (-1)$” operation, this gives us a total of three cube roots of $(-1)$!
Does $(-8)$ have several cube roots too? What about the 6th roots of $64$?
Great questions! In fact, you should try to answer by yourself!
Humm… I guess a cube root of $(-8)$ can be obtained by a 6th of a turn (like $(-1)$), combined with a stretching of the number line, can’t it?
Exactly! Similarly, 6th roots of $64$ include dilations by a factor 2 combined with a rotation of one or two 6ths of a turn, clockwise or anti-clockwise. Plus, there are also the operations “$\times (-2)$” and “$\times 2$”. This gives us six 6th roots of $64$. And as you can guess (or prove!), more generally, any nonzero number has $n$ $n$-th roots!
I’ve heard about the square root of $(-1)$… Is it obtained similarly to what we’ve done??
Once again, you should be the one who gives me the answer!
In other words, is there a geometrical transformation which, when applied twice to the number line, is equivalent to simply inverting it?
I know! Rotations of a quarter of a turn!
There you go! By convention, we refer to the anti-clockwise quarter-of-a-turn rotation as $i$. This $i$ is so important that we have given it different names… which I all dislike! It’s known as the imaginary number (imaginary? number?), the square root of $(-1)$, or, worst of all, $\sqrt{-1}$.
What’s wrong with $\sqrt{-1}$?
What’s very wrong is that $i$ is not the only square root of $(-1)$. The clockwise quarter-of-a-turn rotation is a square root of $(-1)$ too! Plus, if you can’t write $\sqrt[3]{-8}$, then you definitely can’t write $\sqrt{-1}$!
OK… That’s cool but I don’t see how this fixes the paradox of the introduction!
Hehe… We can now answer that elegantly!
Solution to the Paradox
The major flaw lies in the non-uniqueness of the $n$-th roots. This is what 19th-century French mathematician Évariste Galois called the ambiguity of $n$-th roots. More precisely, there's no actual uniqueness of the cube root of $(-8)$, nor is there uniqueness of the 6th root of $64$. In particular, $(-2)$ is a 6th root of $64$, but it's just not the one we refer to by $\sqrt[6]{64}$.
Still, there’s a strong relation between cube roots of $(-8)$ and 6th roots of $64$, isn’t there?
Yes. And the other flaw of the formula of the introduction lies in the relation $(-8)^{1/3} = ((-8)^2)^{1/6}$. Literally, it says that a cube root of $(-8)$ is a sixth root of the square of $(-8)$.
Humm… I’m not sure I understand…
Once again, our salvation will come from geometry! Geometrically, this says that an operation which is equivalent to $\times (-8)$ when applied three times is equal to an operation which, when applied six times, is equivalent to applying $\times (-8)$ twice. Below is a figure which illustrates this statement.
Cube Root of -8 and 6th Root of 64
As you can deduce from the figure above, any cube root of $(-8)$ is also a 6th root of $64$. Indeed, applying the green operation six times will necessarily be equivalent to applying “$\times (-8)$” twice. But some of the 6th roots of $64$ aren't cube roots of $(-8)$! It's not too hard to prove that these are the cube roots of $8$, the other square root of $64$. I invite you to do that as an exercise!
Now, by denoting $\sqrt[3]{-8}$ the set of all cube roots of $-8$ and $\sqrt[6]{64}$ the set of all 6th roots of $64$, we can elegantly correct the paradox! These notations are highly non-conventional and I have been blamed for using them. But I believe they provide an insightful and beautiful solution to the paradox. Also, if you can tell the difference between $n$-th roots and the classical notation $\sqrt[n]{x}$ for $x \geq 0$, then you'll have made a huge breakthrough in the understanding of $n$-th roots.
Granted, this could have been proved without involving geometrical operations, as the key aspect is the property of groups in pure algebra, which the geometrical operations form. But I think it’s much more insightful when you consider this equation for geometrical operations.
Using our notations of $n$-th roots as defining the set of all $n$-th roots, note that, if $n/m$ is not in irreducible form, we have $(a^{n})^{1/m} \neq (a^{1/m})^n$. For instance, when $a=1$, $n=m=2$, we have $(1^2)^{1/2} = \{1, -1\}$, while $(1^{1/2})^2 = \{(-1)^2, 1^2\} = \{1\}$. One natural way to define $a^{n/m}$ would then be $a^{n/m} = (a^{1/m})^n$. I'll leave it to you as an exercise to prove that $a^q$ would then be well-defined for any $q \in \mathbb Q$.
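If you like to check such things numerically, here is a small Python sketch (the helper name nth_roots is mine, purely for illustration) that lists all $n$-th roots of a number from its modulus and argument:

```python
import cmath
import math

def nth_roots(z, n):
    # All n distinct n-th roots of a nonzero complex number z,
    # built from its modulus rho and argument theta.
    rho, theta = cmath.polar(z)
    return [cmath.rect(rho ** (1.0 / n), (theta + 2 * math.pi * k) / n)
            for k in range(n)]

print(nth_roots(-8, 3))  # three cube roots of -8, among them -2
print(nth_roots(64, 6))  # six 6th roots of 64, among them 2 and -2
```

Running it shows $-2$ among the three cube roots of $-8$, and both $2$ and $-2$ among the six 6th roots of $64$, together with four non-real roots.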
Homotheties and Rotations
Although insightful, the descriptions of complex numbers we have given so far aren’t very rigorous.
So what’s the rigorous definition of complex numbers?
From a geometrical perspective, complex numbers should actually be regarded as a certain collection of transformations of a plane rather than of a line. This plane is called the complex plane. It is infinite in all directions and has a unique center, called the origin. Plus, one of its axes, which goes through the origin, is known as the real number line. The axis perpendicular to the real number line is known as the imaginary number line.
So what kind of transformations of the plane are we talking about?
The transformations which correspond to complex numbers are those we have been using so far: homotheties and rotations centered on the origin. These two operations are the symmetries described below:
Rotation and Homothety
The key aspect of these operations is that any number of rotations and homotheties can be combined, and that the order in which they are combined does not matter. In technical terms we say that all these geometrical transformations are associative and commutative. Another important fact is that all usual numbers can be matched uniquely with one such geometrical operation. For instance, what’s the operation corresponding to the number $2$?
And I guess that it corresponds to a homothety by a factor 2…
Yes, which is also known as “$\times 2$”! What about the number $(-1)$?
Multiplication by -2 of the Complex Plane
The operation “$\times (-1)$” inverted the number line… So I guess it’s a symmetry along the imaginary axis!
Nope… Keep in mind that we can only use homotheties and rotations!
Arg… Humm… I know! It’s a half-turn rotation!
Excellent! Let me give you one last example: $(-2)$ is a homothety of factor 2 combined with a rotation of a half turn. Now, more generally, any combination of a homothety and a rotation forms a complex number.
Wait… A complex number?
Yes! Now, the homothety is defined by a positive factor called the modulus, commonly denoted $\rho$. By convention, the rotation is defined by an angle of anti-clockwise turn called the argument, often denoted $\theta$. Since these two parameters uniquely define the combination of a homothety and a rotation, each complex number can be represented by the couple $(\rho, \theta)$.
In terms of pure algebra, we are here defining the set of complex numbers by $\mathbb C = (\mathbb R_+^* \times SO(2)) \cup \{0\} = (\mathbb R_+^* \times \mathbb R/\tau \mathbb Z) \cup \{0\}$. As a topological group, $\mathbb C^* = \mathbb C \setminus \{0\}$ is then trivially isomorphic to $\mathbb R_+^* \times \mathbb S^1$, where $\mathbb S^1 = \mathbb R/\mathbb Z$ is the circle. A clever extension of this approach can then be defined to construct the set of quaternions $\mathbb H$, which nearly equals $(\mathbb R_+^* \times SO(3)) \cup \{0\}$ (technically, the “rotations” of $\mathbb H$ form a double covering of $SO(3)$). Quaternions are a bit more complicated though, as they are not commutative. Indeed, as you can see if you play with a Rubik's cube, two rotations in space do not commute in general. If you can, please write about quaternions!
And any number is a complex number?
Well, as we’ve said, any positive number is just a homothety. This includes a rotation of angle $0$. Thus, any positive number $x$ is the complex number $(x, 0)$. Now, if $x$ is a positive number, then $(-x)$ corresponds to a homothety of factor $x$, and of a half turn. Since a half turn corresponds to angle $\pi$, the number $(-x)$ is thus the complex number $(x, \pi)$.
What? A half turn is $\pi$? Shouldn’t we rather give the full turn a name like $\tau$, and call the half turn $\tau/2$?
I know! Some mathematicians even think that $\pi$ should be withdrawn from all equations and replaced using $\pi = \tau/2$. There's even a manifesto supporting that… as you can see in the following awesome video by ViHart:
I personally much prefer $\tau$ over $\pi$… Since you’ve probably learned $\pi$, I’ll try to insert it in this article, but I’ll be doing most of it with $\tau$. In particular, note that if $x > 0$, then $(-x)$ is the complex number $(x, \tau/2)$.
What about the number zero?
Humm… Good remark. We need a new transformation which corresponds to zero! This transformation consists in collapsing the whole complex plane onto its origin.
One thing troubles me… You’ve been saying that complex numbers are geometrical transformations? In what possible sense are they numbers?
Not in an obvious one for sure! But, for one thing, we can multiply complex numbers. This corresponds to performing successively the geometric operations associated to the complex numbers. And what’s beautiful is that this has an algebraic translation! What I mean by that is that multiplying $(\rho_1, \theta_1)$ by $(\rho_2, \theta_2)$ corresponds to two homotheties by factors $\rho_1$ and $\rho_2$ and two rotations of angles $\theta_1$ and $\theta_2$. Now, two homotheties of factors $\rho_1$ and $\rho_2$ combine into a homothety of factor $\rho_1 \times \rho_2$, while two rotations of angles $\theta_1$ and $\theta_2$ result in a rotation of angle $\theta_1+\theta_2$. Thus, we have the product $(\rho_1, \theta_1) \times (\rho_2, \theta_2) = (\rho_1 \times \rho_2, \theta_1 + \theta_2)$. How sweet is that?
Note that angles are defined up to $\tau$, which means that the angle $\theta+\tau$ is the same as $\theta$. But let’s not dwell too much on modular algebra. If you can though, please write about it! Now, if you are familiar with modular algebra, then note that, in pure algebra terms, what we’ve done here is defining the group $(\mathbb C^*, \times)$ as the product group $(\mathbb R_+^*, \times) \times (\mathbb R /\tau \mathbb Z, +)$.
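As a quick numerical illustration of the product rule $(\rho_1, \theta_1) \times (\rho_2, \theta_2) = (\rho_1 \rho_2, \theta_1 + \theta_2)$, here is a tiny Python sketch (the helper multiply_polar is hypothetical, written just for this example):

```python
import cmath
import math

def multiply_polar(z1, z2):
    # (rho1, theta1) x (rho2, theta2) = (rho1*rho2, theta1 + theta2), angle modulo tau
    (rho1, theta1), (rho2, theta2) = z1, z2
    tau = 2 * math.pi
    return (rho1 * rho2, (theta1 + theta2) % tau)

minus_two = (2.0, math.pi)          # "x(-2)": homothety of factor 2 plus a half turn
square = multiply_polar(minus_two, minus_two)
print(square)                       # (4.0, 0.0)  -> the number 4
print(cmath.rect(*square))          # (4+0j)
```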
Euler's Formula (bis)
For reasons I won’t be dwelling on here, Leonhard Euler showed that it made sense to write the complex number $(\rho, \theta)$ as $\rho e^{i\theta}$. Given this writing, the multiplication of two complex numbers follows the usual laws of algebra, as $(\rho_1 e^{i\theta_1}) \times (\rho_2 e^{i\theta_2}) = (\rho_1 \rho_2) e^{i(\theta_1+\theta_2)}$. Also, plugging in $\rho=1$ and $\theta=\pi$, we obtain the equality $e^{i\pi} = (1, \pi) = -1$, which is usually rewritten as $e^{i\pi} + 1=0$. This beautiful formula is known as Euler’s identity! But, in spite of probably angering the Swiss scholar, I’d rather have it written with $\tau$ as $e^{i\tau} = 1$. This last formula is much more insightful, as it says that $\tau$ represents a full turn.
To understand why complex numbers can be written $\rho e^{i\theta}$, check my article on Euler’s identity!
I guess this is a nice construction, but I don’t see the point…
The ability complex numbers have to describe rotations is the reason why they are so much used in oscillating problems in physics and engineering. Instead of involving ugly trigonometry, the function $f(t)=e^{it}$ provides an elegant description of these motions, which greatly facilitates computations! But that’s still just the tip of the iceberg. To unveil the true magic of complex numbers, we’ll need to dig deeper!
Points in the Complex Plane
In the 19th century, German mathematician Carl Friedrich Gauss, the Prince of mathematics, provided a powerful visualization of complex numbers. To get there, notice the incredible fact that $1 \times x=x$ when $x$ is a number. Thus, if I show you a geometrical transformation “$\times x$”, then you can easily find which $x$ I chose by looking at what point the number $1$ is sent to. Similarly, if I give you a geometrical transformation $(\rho, \theta)$, then you can find out the values of $\rho$ and $\theta$, as they’ll be the polar coordinates of the point $1$ is sent to! This point is called the image of 1.
The polar coordinates? Can you give an example?
Sure! Below is the combination of a homothety by a factor 2 and a rotation by an angle $2\tau/3$ (2 thirds of a turn).
Follow 1
The factor of homothety $\rho$ is the distance between the image of 1 and the origin, while the angle of rotation $\theta$ is the (anti-clockwise) angle from $1$ to its image.
These look like the polar coordinates!
Exactly! This shows that any geometrical transformation can be translated into a point in the complex plane whose polar coordinates are given by the factor of homothety and the angle of rotation! And this is a one-to-one correspondence between geometrical transformations and points! Thus, we can identify geometrical transformations with points in the complex plane.
In pure algebra terms, what we’ve unveiled here is a natural bijection between $\mathbb R_+^* \times SO(2)$ and $\mathbb R^2-\{0\}$. This bijection is a homeomorphism! Plus, it is naturally extended to a homeomorphism $(\mathbb R_+^* \times SO(2)) \cup \{0\} \rightarrow \mathbb R^2$. Thus, we can identify both sets, and we call them both $\mathbb C$.
Could we translate these coordinates back in classical Cartesian ones?
Yes! But before doing that, let's first look at which point of the complex plane the geometrical transformation $i$ is associated with:
i in the complex plane
So, $i$ is the point right above the origin? That’s funny…
I know! What’s also particularly interesting is that we can now describe Euclidean planar geometry with complex numbers!
Vectors in the Complex Plane
To complete the construction of complex numbers, we need to associate any point in the complex plane with a vector.
A vector? What the hell is that?
A vector is a motion in the complex plane. This motion is often represented by an arrow from an initial point to a final point. But what’s important to keep in mind is that the vector corresponds to the motion, not the arrow. Two arrows may correspond to the same motion even though they don’t start at the same initial points, as displayed by the arrows of same colors in the figure on the right.
So how do we associate any point in the complex plane with a vector?
Given a point in the complex plane, we can draw the arrow from the origin to that point. The vector associated to this arrow will then be the vector associated to the point in the complex plane. For instance, $i$ is associated to the arrow from $0$ to $i$, which corresponds to the green arrows in the figure on the right.
I think I get it… But what’s the point in mapping points to vectors?
We can now define the addition of complex numbers!
How do we do that?
By combining the motions associated to the vectors! For instance, combining the purple and green motions is a motion of one unit to the right and by two units upwards. This is equivalent to the blue motion only! This means that $purple + green = blue$. And this can be visualized geometrically by having the purple and blue arrows starting at the same point, while the green arrow is put at the end of the purple arrow. The purple, green and blue arrows must then form a triangle, as done below:
Addition of Vectors
And since all vectors correspond to a complex number, we can now do additions of complex numbers by adding their corresponding vectors!
Now here comes the key part. All vectors can be decomposed uniquely as a combination of $1$ and $i$. For instance, the purple vector can be obtained by a combination of the vector associated to $1$ and a vector associated to $i$. Thus, $purple = 1+i$. Similarly, the blue vector is a combination of $1$ and two times $i$. Hence, $blue = 1+2i$.
So all vectors are a certain number of times $1$ plus a certain number of times $i$?
Exactly! And since all vectors correspond to a complex number, all complex numbers can thus be written $a+bi$, where $a$ and $b$ are usual numbers. This decomposition enables simple computations of additions of complex numbers! Indeed, if you consider any two complex numbers $z_1$ and $z_2$, then we know by now that each can be decomposed as $z_1 = a_1+b_1i$ and $z_2 = a_2 + b_2 i$. The sum of $z_1$ and $z_2$ is then given by $z_1 + z_2 = (a_1+b_1i) + (a_2+b_2i) = (a_1+a_2) + (b_1+b_2)i$.
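Python's built-in complex type is exactly this $a+bi$ decomposition, so the vector addition above can be checked directly (the purple, green and blue labels refer to the arrows described earlier):

```python
purple = 1 + 1j   # the vector 1 + i
green = 1j        # the vector i
blue = purple + green
print(blue)       # (1+2j): component-wise addition, i.e. (a1+a2) + (b1+b2)i
```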
I should also mention that each complex number $z$ is also associated with an operation “$+z$” on the points of the complex plane. Geometrically, this operation consists in a translation of a vector which is the one that $z$ corresponds to. In fact, this mapping of $z$ to “$+z$” is an isomorphism of Euclidean vector space between the space of complex numbers and the set of translations of the complex plane.
This Cartesian description of complex numbers would have been quite useful if it hadn’t been for the more general approach of linear algebra to define vectors.
In pure algebra terms, what we’ve done here is unveiling a natural isomorphism of Euclidean vector space between $\mathbb C$ and $\mathbb R^2$. In particular, this gives the space $(\mathbb C, +)$ a structure of commutative group. This isomorphism is trivial for the classical construction of complex numbers, but it’s quite impressive in the construction of this article! Recall that we introduced $\mathbb C$ as combinations of homotheties and rotations!
The Field of Complex Numbers
Let's sum up what we've discussed so far. The awesomeness of complex numbers is that they can be identified with several different mathematical objects. They can be seen as combinations of homotheties and rotations of the complex plane, as points in the complex plane, and as vectors in the complex plane. The first understanding of complex numbers describes the multiplication, while the third describes the addition. Thinking about each of the meanings of complex numbers separately is already quite mesmerizing, but the truly mind-blowing property of complex numbers occurs when we mix them!
What do you mean?
Let’s see what happens when we both have a multiplication and an addition! In particular, let’s focus on the simplest possible case, namely $(x+y) \times z$, where $x$, $y$ and $z$ are all complex numbers.
Humm… I don’t know where to start!
Well, the expression starts with the addition of $x$ and $y$…
OK… So to do the addition, we need to think of these complex numbers as vectors, right?
Exactly! Let’s draw $x$, $y$, and their sum $x+y$. But then, we need to multiply these terms by $z$. How do we do that?
I know! We need to think of $z$ as $\times z$, which is a combination of a rotation and a homothety!
Very good! This means that $xz$, $yz$ and $(x+y)z$ will be the image by the geometrical transformation “$\times z$” of $x$, $y$ and $z$. This is what’s drawn below:
Now, the magic occurs when we notice that any geometrical transformation $\times z$ preserves the shapes of triangles. As a result, the triangle $x$, $y$, $x+y$ gets transformed into the triangle $xz$, $yz$, $(x+y)z$. And this means that the sum of the sides $xz$ and $yz$ equals the last side $(x+y)z$! In other terms, $xz+yz=(x+y)z$. This is the essential distributivity property which binds the two operations we have defined on complex numbers! It says that the structure of complex numbers is much richer than the structures of geometrical operations and vectors alone.
I’m not sure I see what’s so great about that…
What’s awesome about that is that all the algebraic manipulations you could do with usual numbers still hold for complex numbers. In particular, operations like $(x+y)^2 = x^2+2xy+y^2$ are still valid for complex numbers! This strong resemblance of manipulations is what led mathematicians to call complex numbers… numbers. In pure algebra terms, we say that complex numbers form a field.
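A tiny numerical spot-check of this distributivity, using nothing more than floating-point arithmetic on Python's complex type:

```python
import random

# spot-check (x + y)z = xz + yz for a few random complex numbers
for _ in range(3):
    x = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    y = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    print(abs((x + y) * z - (x * z + y * z)))  # ~0 up to rounding error
```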
To recapitulate, a complex number is a very complicated mathematical object, which can be seen from diverse angles. Mainly, it can be seen as a combination of a homothety and a rotation, as a point in the complex plane, or as a vector of dimension 2. These three interpretations are displayed below.
Complex Numbers
But what makes complex numbers so special isn’t the different angles through which they can be seen, but the combination of them all, in sort of the same way that quantum objects aren’t simply classical waves nor classical particles. Namely, the full nature of complex numbers is unveiled as they are considered as a field. In particular, it is to that field that the fundamental theorem of algebra gets applied.
What’s that theorem?
This theorem, which was first proven by Carl Friedrich Gauss, states that every non-constant polynomial equation with complex coefficients has a complex solution. It's as simple as that. This property is also known as the fact that the complex numbers form the algebraic closure of the real numbers. In fact, as mentioned briefly in my article on the construction of numbers, the most insightful and natural (but also terribly abstract) way to construct complex numbers is precisely by defining them as the algebraic closure of the real numbers.
In pure algebra terms, let’s denote $\mathbb R[X]$ the ring of polynomials. Then, it can be shown that $(X^2+1)\mathbb R[X]$ is a maximal ideal. The space of complex numbers is then defined as $\mathbb C = \mathbb R[X]/(X^2+1)\mathbb R[X]$, and it is thus a field. This is so beautiful that I nearly cried when I first saw that!
What’s the point of this theorem?
Many more equations can now be solved! And I’m not only talking about the polynomial equations. More importantly, natural and simple solutions appear in differential equations, electromagnetism, eigen-value search, Fourier transform and number theory among many other fields. In particular, complex numbers have turned out to be the right structure to describe particle physics! Check my article on the dynamics of the wave function in quantum mechanics to see the complex numbers in action!
One comment to “Imaginary and Complex Numbers”
1. Inspired introduction to complex numbers! Introducing it that way will nicely pave the way for geometric algebra.
Will certainly use this when teaching my kids this concept – only quibble when you write that an “anti-clockwise quarter-of-a-turn rotation [is represented] as i” you illustrate this with the horizontal and then rotated vertical number lines. So far so good, but because the transition arrow is curved, I at first thought it indicated the wrong orientation of the rotation, took me a moment to realize that this is of course not the assigned meaning. May be better to use a straight arrow for that graphic.
|
b4faddd1668d2529 | Intrinsic Spin
Tuesday, March 8, 2022
Magnetic Dipole Moments
The separate components of the angular momentum vector $\vec{L}$, and the quantum number $l$, of an atom can be determined from the interactions between an external magnetic field and the atom's magnetic dipole moment. However, performing this experiment reveals an unexpected property of the electron, called intrinsic spin.
Orbital magnetic dipole moments
A classical magnetic dipole moment can be produced by a current loop or the orbital motion of a charged object. The magnetic dipole moment, denoted $\vec{\mu}$, is a vector whose magnitude is equal to the product of the circulating current, $i$, and the area enclosed by the orbital loop. The direction of the vector is perpendicular to the orbital plane, using the right-hand rule with the direction of conventional current.
Relation to the angular momentum vector, $\vec{L}$
Since quantum mechanics forbids exact knowledge about the angular momentum vector, it also forbids exact knowledge of the magnetic dipole moment vector. Only the $z$ components of these vectors can be known exactly. Note: since the electron has negative charge, $\vec{L}$ and $\vec{\mu}$ point in opposite directions.
Using the Bohr circular orbit model (which happens to be consistent with the quantum mechanical reality), we have a loop of current $i = dq/dt = q/T$, where $q$ is the charge of the particle ($-e$ for an electron) and $T$ is the time for one orbit. Assuming the electron moves with $v = p/m$ around a loop of radius $r$, its $T$ value will be $T = 2\pi r/v = 2\pi r m/p$. This means
$\mu = iA = \frac{q}{2\pi rm/p}\pi r^2 = \frac{q}{2m}rp = \frac{q}{2m}\left|\vec{L}\right|$
since $\left|\vec{L}\right| = rp$. In vector notation, we get
$\vec{\mu}_{\text{L}} = \frac{q}{2m}\vec{L} = -\frac{e}{2m}\vec{L}$
Note: the subscript on the dipole vector is a reminder that the vector comes from the orbital angular momentum $\vec{L}$. The $z$ component of the vector is
$\mu_{\text{L},z} = -\frac{e}{2m}L_z = -\frac{e\hbar}{2m}m_l = -\mu_{\text{B}}\,m_l$
This quantity, $\mu_{\text{B}} = e\hbar/2m$, is called the Bohr magneton and is equal to about $9.274\times10^{-24}\,\mathrm{J/T}$.
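As a quick numerical check, the Bohr magneton follows directly from the constants $e$, $\hbar$ and $m$; a minimal Python sketch with CODATA-style values:

```python
e = 1.602176634e-19      # elementary charge (C)
hbar = 1.054571817e-34   # reduced Planck constant (J*s)
m_e = 9.1093837015e-31   # electron mass (kg)

mu_B = e * hbar / (2 * m_e)
print(f"{mu_B:.4e} J/T")  # ~9.274e-24 J/T
```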
Dipoles in External Fields
An electric dipole is two equal but opposite charges $q$ separated by a distance $r$. The electric dipole moment, denoted $\vec{p}$, has a magnitude equal to $qr$ and points from the negative charge to the positive one.
In a uniform external electric field, vertical forces $\vec{F}_+$ and $\vec{F}_-$ act on the charges. While the net force on the dipole is $0$, there is a torque applied to the system that makes the dipole align with the field.
If the field is not uniform, the forces will not be equal, meaning there will be a net force on the dipole in addition to the torque. Another way of describing this is that if the dipole has $p_z > 0$ (assuming the electric field is pointing in the $z$ direction), its force will be in the negative $z$ direction, and vice versa.
The same is true for a magnetic field and a magnetic dipole moment. A nonuniform magnetic field acting on the magnetic moments gives an unbalanced force that causes a displacement. In fact, if $\mu_z$ is positive, the force on the dipole is negative, and if $\mu_z$ is negative, the force on the dipole is positive.
The Stern-Gerlach Experiment
The setup
Imagine a beam of hydrogen atoms in the $n=2$, $l=1$ state ($m_l$ can be $-1$, $0$, or $+1$) incident on a screen. They pass through a slit and then a nonuniform external magnetic field before hitting the screen. Assuming the experiment can be done before the atoms decay to the $n=1$ state, we would expect three lines to appear on the screen.
There should be one undeflected line for the $m_l = 0$ atoms since they do not have a magnetic moment $\mu$. The atoms with $m_l = +1$ have $\mu_{\text{L},z} = -\mu_{\text{B}}$, so they should be deflected upwards, while the atoms with $m_l = -1$ have $\mu_{\text{L},z} = \mu_{\text{B}}$, so they should be deflected downwards.
In short, we should always expect an odd number of lines on the screen (representing different $m_l$ values), since there are always $2l+1$ values $m_l$ can take on.
Experimental results
Actually performing the experiment with hydrogen atoms in the $l=1$ state results in six lines appearing on the screen! Perhaps more confusingly, doing the experiment with atoms in the $l=0$ state results in two lines rather than the predicted one. In the $l=0$ state, $\vec{L}$ has length $0$, so there should be no magnetic moment. However, this cannot be true since the atoms are deflected!
What is going on here?
Following from the Schrödinger equation, atoms would need to have $l = 1/2$ in order to produce two images. This is not true from how we have defined angular momentum thus far, but we can define another contributor to resolve the issue.
The intrinsic angular momentum of an electron is the hidden factor producing this oddity! An electron therefore has two kinds of angular momentum: its orbital angular momentum, $\vec{L}$, and its intrinsic angular momentum, $\vec{S}$. This new term, $\vec{S}$, is typically called the spin of the electron.
To resolve the strange results of the Stern-Gerlach experiment, we assign the spin quantum number, $s$, of an electron to be $1/2$. With this spin, we also have the angular momentum vector $\vec{S}$, a $z$ component, $S_z$, an associated magnetic moment, $\vec{\mu}_S$, and a spin magnetic quantum number, $m_s$.
| | Orbital | Spin |
|---|---|---|
| Quantum number | $l = 0, 1, 2, \ldots$ | $s = 1/2$ |
| Length of vector | $\sqrt{l(l+1)}\,\hbar$ | $\sqrt{s(s+1)}\,\hbar = \sqrt{3/4}\,\hbar$ |
| $z$-component | $L_z = m_l\hbar$ | $S_z = m_s\hbar$ |
| Magnetic quantum number | $m_l = 0, \pm 1, \pm 2, \ldots, \pm l$ | $m_s = \pm 1/2$ |
| Magnetic moment | $\vec{\mu}_{\text{L}} = -\left(e/2m\right)\vec{L}$ | $\vec{\mu}_{\text{S}} = -\left(e/m\right)\vec{S}$ |
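The table can be turned into a small computation: for any angular momentum quantum number $j$ (orbital $l$ or spin $s$), the vector length is $\sqrt{j(j+1)}\,\hbar$ and the allowed $z$ components are $m_j\hbar$. A short illustrative Python sketch:

```python
import math

hbar = 1.054571817e-34  # J*s

def vector_length(j):
    # |J| = sqrt(j(j+1)) * hbar for any angular momentum quantum number j
    return math.sqrt(j * (j + 1)) * hbar

def z_components(j):
    # allowed projections m_j * hbar with m_j = -j, -j+1, ..., +j
    n = int(round(2 * j)) + 1
    return [(-j + k) * hbar for k in range(n)]

for label, j in [("orbital, l = 1", 1.0), ("spin, s = 1/2", 0.5)]:
    print(label, vector_length(j), z_components(j))
```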
Using spin, we have an explanation for the results of the Stern-Gerlach and similar experiments. Every fundamental particle has a characteristic intrinsic spin and corresponding spin magnetic moment. The proton and neutron also have spin $1/2$, and the photon has a spin of $1$. Other particles, such as pions (pi mesons), have spin $0$. |
218dd82779df71e2 | Fundamentals and Mechanisms of Vacuum Photoionization
Johannes Passig1,2, Ralf Zimmermann1,2, and Thomas Fennel3
1Universität Rostock, Institut für Chemie, and Joint Mass Spectrometry Centre (JMSC), Dr.-Lorenz-Weg 2, D-18059 Rostock, Germany
2Helmholtz Zentrum München, German Research Center for Environmental Health GmbH, Research Unit Comprehensive Molecular Analytics (CMA) and Joint Mass Spectrometry Centre, Gmunder Str. 37, D-81379 München, Germany
3Institute of Physics, University of Rostock, Albert-Einstein-Straße 23, D-18059 Rostock, Germany
1.1 Preface
Within the past decades, the progress of laser-based sources has opened various new research directions in the area of light-matter interactions. The photoionization of free molecules with the purpose of their detection and mass-based identification may appear as an easy task in this context. However, experience has shown that already the simplest approach, the fragment-free ionization with single photons of sufficient energy, remains technically challenging. Beyond practical issues and applications, advanced photoionization techniques are an important field of study in applied and fundamental research. As an example, resonance-enhanced multiphoton ionization (REMPI, see chapters 2 and 4 of this book) bridges the gap between mass spectrometry (MS) and molecular spectroscopy, offering two-dimensional selectivity both in mass and in structure. In general, the absorption of a single photon by a molecule can lead to its ionization if the photon energy $E = h\nu = \hbar\omega$ is equal to or larger than the ionization potential IP, where $h = 2\pi\hbar$ is the Planck constant, $\nu = c/\lambda$ is the photon frequency, $\lambda$ is its wavelength, and $c$ is the speed of light. It is worthwhile to have a look at the typical energy- and timescales that constitute the framework of photoionization processes, as illustrated in Figure 1.1. Electron motion and electronic transitions are much faster than atomic motion on molecular scales. This is the basis for important approximations in atomic and molecular physics and spectroscopy. Current tabletop laser systems are available from the infrared (IR) to the ultraviolet (UV), with ultrashort pulses that allow one to analyze molecular processes on their physical or natural timescale. Laser intensities for (multi-)photoionization MS span the range from the onset of REMPI ($\approx 10^6\,\mathrm{W/cm^2}$) to the strong field regime ($> 10^{14}\,\mathrm{W/cm^2}$) and beyond for complex laser sources. Tuning the photon energy remains rather complicated both for lamp- and laser-based sources.
Figure 1.1 Typical energy- and timescales of processes related to photoionization (energy scale from XUV to microwave, about $10^3$ to $10^{-6}$ eV, covering valence-electron states, molecular vibrations and rotations; timescale from attosecond laser pulses, $10^{-18}$ s, to pulsed TOF-MS cycles, $10^{-3}$ s).
However, the latter implicates further parameters such as phase or chirp, which are linked to the coherence of laser pulses and are likely to increase the selectivity in future applications. When dealing with photoionization MS, natural questions about the underlying mechanisms of photoabsorption and photoionization arise at some point. A brief standard answer might be as follows: Assume two eigenstates of a molecular system with different electronic charge distributions. Upon excitation from the lower to the upper state and assuming a single active electron, the system corresponds to a quantum superposition of those two states, and the electronic charge oscillates with an amplitude that reflects the transition dipole moment. Now, assume a light field acting on it. Thus, the electron "feels" an oscillating electric field from the light. If the frequency, direction (polarization), and shape of the charge distribution associated with a transition match the light field, the molecule and the light couple, and the electron will be resonantly driven into the upper state while the light wave is damped. If the excited state is a continuum state, the electron is ejected. This description provides a basic, phenomenological understanding, and some readers may now consider skipping the following pages with complicated formulas. However, it is a problematic simplification, mixing different concepts and principles. The following section provides an introductory survey of photoabsorption, which is the physical basis of photoionization. The scope is not to treat the full complexity of quantum mechanics and spectroscopy but to give an outline of some fundamental principles useful for its conceptual understanding.
1.2 Light
The physics of classical light propagation, optics, and electromagnetism is based on the Maxwell equations, a set of partial differential equations that describe the behavior of the electric field $\mathbf{E}(\mathbf{r}, t)$ and the magnetic field $\mathbf{B}(\mathbf{r}, t)$ with respect to charges $\rho$ and currents $\mathbf{j}$:
$\nabla \cdot \mathbf{E}(\mathbf{r}, t) = \rho(\mathbf{r}, t)/\epsilon_0$ (1.1)
$\nabla \cdot \mathbf{B}(\mathbf{r}, t) = 0$ (1.2)
$\nabla \times \mathbf{E}(\mathbf{r}, t) = -\frac{\partial}{\partial t}\mathbf{B}(\mathbf{r}, t)$ (1.3)
$\nabla \times \mathbf{B}(\mathbf{r}, t) = \mu_0\mathbf{j} + \mu_0\epsilon_0\frac{\partial}{\partial t}\mathbf{E}(\mathbf{r}, t)$ (1.4)
The divergence $\nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$ of a vector field $\mathbf{F}$ produces a scalar field, giving the quantity of $\mathbf{F}$'s source (outward flux) at each point. An expanding vector field (e.g. heated air) yields positive divergence values, whereas a contracting one (e.g. cooled air) yields negative values. The curl $\nabla \times \mathbf{F}$ yields a vector field that describes the infinitesimal rotation of $\mathbf{F}$ at each point, e.g. the circulation density of a flow.
Fundamental properties of light can be derived from this set of equations. Later, we will have to construct a Hamiltonian to describe photoabsorption. Because it represents the system's total energy in quantum mechanics, it will be natural to use potentials rather than fields. In electrostatics, the field is related to the electrostatic potential through
$\mathbf{E}(\mathbf{r}) = -\nabla\Phi(\mathbf{r})$ (1.5)
However, for a field that varies in time and in space, the electrodynamic potential must be expressed in terms of both the time-dependent scalar potential $\phi(\mathbf{r}, t)$ and the vector potential $\mathbf{A}(\mathbf{r}, t)$. According to Eq. (1.5), the fields $\mathbf{E}(\mathbf{r}, t)$ and $\mathbf{B}(\mathbf{r}, t)$ can be expressed as
$\mathbf{E}(\mathbf{r}, t) = -\nabla\phi(\mathbf{r}, t) - \frac{\partial}{\partial t}\mathbf{A}(\mathbf{r}, t)$ (1.6)
$\mathbf{B}(\mathbf{r}, t) = \nabla \times \mathbf{A}(\mathbf{r}, t)$ (1.7)
This definition automatically fulfills Eqs. (1.2) and (1.3). Furthermore, it allows a transformation of the potentials into Coulomb gauge, where $\mathbf{A}$ is divergence-free ($\nabla \cdot \mathbf{A} = 0$), whereas the physical observables $\mathbf{E}(\mathbf{r}, t)$ and $\mathbf{B}(\mathbf{r}, t)$ remain unchanged (not shown here). In Coulomb gauge, the scalar potential is identical with the Coulomb potential, yielding $\nabla^2\phi = -\rho/\epsilon_0$ for Eq. (1.1). Furthermore, if charges and currents are absent, only Eq. (1.4) determines $\mathbf{A}$ (and thus $\mathbf{E}$ and $\mathbf{B}$) in the following form of a wave equation:
$\nabla^2\mathbf{A}(\mathbf{r}, t) - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\mathbf{A}(\mathbf{r}, t) = 0 \quad \text{with} \quad c = \frac{1}{\sqrt{\mu_0\epsilon_0}}$ (1.8)
Solutions are linearly polarized plane waves
$\mathbf{A}(\mathbf{r}, t) = \frac{1}{2}\tilde{A}\boldsymbol{\varepsilon}\,e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)} + \text{c.c.} = \tilde{A}\boldsymbol{\varepsilon}\cos(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)$ (1.9)
with amplitude $\tilde{A}$, polarization vector $\boldsymbol{\varepsilon}$, imaginary unit $i$, wave vector $\mathbf{k}$, angular frequency $\omega$, and phase offset $\delta$. Several primary properties of light can now be derived, such as the relation $\omega = ck$ when inserting the solution into Eq. (1.8), or the orthogonality of wave and polarization vectors $\mathbf{k}\cdot\boldsymbol{\varepsilon} = 0$ that directly results from the gauge condition $\nabla \cdot \mathbf{A} = 0$. The electric and magnetic fields follow as plane waves, mutually orthogonal also with the propagation direction due to the vector product $\mathbf{k}\times\boldsymbol{\varepsilon}$:
$\mathbf{E}(\mathbf{r}, t) = \tilde{E}\boldsymbol{\varepsilon}\sin(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)$ (1.10)
$\mathbf{B}(\mathbf{r}, t) = \frac{\tilde{E}}{\omega}(\mathbf{k}\times\boldsymbol{\varepsilon})\sin(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)$ (1.11)
Figure 1.2 Illustration of a linearly polarized light wave as a solution of Eq. (1.8), with mutually orthogonal electric field $\mathbf{E}$, magnetic field $\mathbf{B}$, and propagation as well as energy transport in the direction of the Poynting vector $\mathbf{S}$; polarization vector $\boldsymbol{\varepsilon}$.
with $\tilde{E} = -\omega\tilde{A}$. A key parameter for many applications is the light intensity, respectively the photon density. They follow from the (instantaneous) energy density of the electromagnetic field
$u = \frac{1}{2}\left[\epsilon_0|\mathbf{E}|^2 + \frac{1}{\mu_0}|\mathbf{B}|^2\right] = \epsilon_0\tilde{E}^2\sin^2(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)$ (1.12)
The rapid oscillations can be averaged as $\langle\sin^2\rangle = 1/2$, yielding the mean energy, which can alternatively be expressed in terms of the photon density $n_{\text{ph}} = dN_{\text{ph}}/dV$:
$\langle u\rangle = \frac{1}{2}\epsilon_0\tilde{E}^2 = n_{\text{ph}}\hbar\omega$ (1.13)
Energy transport by light propagation is characterized using the Poynting vector
$\mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times\mathbf{B} = \epsilon_0 c\tilde{E}^2\sin^2(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)\,\frac{\mathbf{k}}{k}$ (1.14)
The absolute value of its time average $|\langle\mathbf{S}\rangle|$ is the commonly used light intensity (unit $\mathrm{W/m^2}$)
$I = \frac{1}{2}\epsilon_0 c\tilde{E}^2 = c\,n_{\text{ph}}\hbar\omega$ (1.15)
Note that the intensity $I$ is proportional to the square of the field amplitude $\tilde{E}^2$. The photon flux $\varphi$ (unit $\mathrm{photons/(m^2\,s)}$) can be expressed in terms of the intensity and photon energy, or via the photon density $n_{\text{ph}}$ and the speed of light $c$:
$\varphi = \frac{I}{\hbar\omega} = n_{\text{ph}}c$ (1.16)
Next, the general cross section $\sigma$ is a coefficient of proportionality between the rate $W$ of an induced transition and the photon flux
$W = \varphi\,\sigma(\omega)$ (1.17)
with unit megabarn ($1\,\mathrm{Mb} = 10^{-18}\,\mathrm{cm^2}$).
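As a worked example of Eqs. (1.15)-(1.17), the sketch below (plain Python with hard-coded constants; the intensity and cross-section values are arbitrary illustrative numbers, not data from this chapter) converts a laser intensity into a photon flux and an excitation rate:

```python
# Illustrative numbers only: a 266 nm laser at 1e7 W/cm^2 and a 30 Mb cross section.
h = 6.62607015e-34   # Planck constant (J*s)
c = 2.99792458e8     # speed of light (m/s)

wavelength = 266e-9                  # m
intensity = 1e7 * 1e4                # W/cm^2 -> W/m^2
sigma = 30 * 1e-18 * 1e-4            # 30 Mb -> m^2  (1 Mb = 1e-18 cm^2)

E_photon = h * c / wavelength        # photon energy (J)
flux = intensity / E_photon          # photon flux, phi = I/(hbar*omega), photons/(m^2 s)
rate = flux * sigma                  # W = phi * sigma, Eq. (1.17), in 1/s

print(f"photon energy {E_photon:.3e} J, flux {flux:.3e} m^-2 s^-1, rate {rate:.3e} s^-1")
```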
1.3 Photoabsorption
In classical physics, light absorption is interpreted as damping of a periodic electric field by dipoles oscillating with opposite phase at the same frequency. This basic picture provides a descriptive explanation of optical properties and some fundamental interactions. However, a description of the photoionization of atoms and molecules is only possible in a quantum mechanical context. Upon light absorption, a system undergoes a transition from an initial state to a final state of higher energy with energy difference $\hbar\omega$. Because the charge distributions of the states differ, their coherent superposition results in an oscillating dipole that can couple to the light field under appropriate conditions.
1.3.1 Transitions in First Order
Some essential principles needed for the quantum mechanical description are the representation of (electronic) states by wavefunctions $\Psi(\mathbf{r}, t)$ and physical observables by their corresponding operators.
In quantum mechanics, the state $|\psi\rangle$ of a system can be described by a complex wavefunction $\psi(\mathbf{r}, t)$ in coordinate space representation. A physical observable is represented by a linear operator $\hat{O}$ acting on the state, producing a new vector $\hat{O}|\psi\rangle = |\psi^*\rangle$. If $|\psi\rangle$ is an eigenstate of an observable, the equation $\hat{O}|\psi\rangle = a\cdot|\psi\rangle$ yields the associated eigenvalues $a$, corresponding to the value of the observable in that eigenstate. Eigenvalues $a$ can be continuous (e.g. for the position operator $\hat{\mathbf{r}}$) or discrete as for the angular momentum operator, and can thus be expressed by quantum numbers.
The time evolution of a physical system is described by solutions of the time-dependent Schrödinger equation (TDSE) in its general form
$i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r}, t) = \hat{H}\Psi(\mathbf{r}, t)$ (1.18)
where $i$ is the imaginary unit, $\hbar$ is the reduced Planck constant, and $\hat{H}$ is the Hamiltonian operator representing the system's total energy.
An analytical solution is only possible for very simple systems, such as the hydrogen atom. Already the presence of an external field or a second electron renders closed and analytical solutions impossible. Typically, several approximations allow for the treatment of a molecular system with minimum deficiency, depending on the framework of the scientific problem. Common approaches for atom-light interactions apply the single active electron approximation (SAE) that treats a single interacting electron in an effective potential (e.g. Hartree-Fock) resembling both the atomic core and the (mean) electron-electron interactions.
In general, photoionization can be understood as one of several possible secondary processes upon photoabsorption. For the weak field regime¹, which applies even beyond typical REMPI intensities of about $10^7\,\mathrm{W/cm^2}$, the description of photoabsorption is convenient via perturbation theory. Its basic concept is the partition of the Hamiltonian $\hat{H}$ into the calculable Hamiltonian $\hat{H}_0$ of a simplified and known system (which may be artificial) and an additional Hamiltonian $\hat{H}'$ representing the weak disturbance to the system that is quantified using approximate methods.
1.3.2 Perturbation Theory
For our basic considerations of photoabsorption via electronic states, we describe an atom by a single electron of charge $q = -e$ and mass $m_e = m$ in a Coulomb potential $V_{\text{C}} = -Ze^2/4\pi\epsilon_0 r$, which represents a stationary nucleus. The vector potential remains classical. Considering that in Coulomb gauge the scalar potential equals $V_{\text{C}}$ (see Section 1.2), the Hamiltonian of the electron splits into a stationary part of the undisturbed atom $\hat{H}_0$ and a time-dependent part $\hat{H}_{\text{int}}(t)$ for the interaction with the light field. $\hat{H}_{\text{int}}(t)$ can be derived from the classical Hamiltonian for a charged particle in a radiation field (not shown here):
$\hat{H} = \underbrace{-\frac{\hbar^2}{2m}\nabla^2 + V_{\text{C}}}_{\hat{H}_0}\;\underbrace{-\,i\hbar\frac{e}{m}\mathbf{A}\cdot\nabla + \frac{q^2}{2m}\mathbf{A}^2}_{\hat{H}_{\text{int}}(t)}$ (1.19)
For the first term, $\hat{H}_0$, the analogy to the corresponding classical total energy $E_{\text{kin}} + V = \frac{p^2}{2m} + V$ is clearly visible if the notation of the momentum operator $\hat{p} = -i\hbar\frac{\partial}{\partial x}$ is considered.
For moderate field strengths ($I \ll 10^{15}\,\mathrm{W/cm^2}$), the last term is small compared to the cross term, which simplifies the interaction Hamiltonian $\hat{H}_{\text{int}}$ to
$\hat{H}_{\text{int}} \approx -i\hbar\frac{e}{m}\mathbf{A}\cdot\nabla$ (1.20)
According to perturbation theory, the wavefunction $\Psi(\mathbf{r}, t)$ can be expressed as a linear combination of unperturbed eigenstates $\Psi_j(\mathbf{r})$ of the stationary Schrödinger equation (time-independent Hamiltonian) $H_0\Psi_j(\mathbf{r}) = E_j\Psi_j(\mathbf{r})$ with the time-dependent coefficients $c_j(t)$:
$\Psi(\mathbf{r}, t) = \sum_j c_j(t)\,\Psi_j(\mathbf{r})\,e^{-iE_jt/\hbar}$ (1.21)
¹ The term "weak disturbance" indicates an important limitation for laser-based photoionization: The external laser field, treated as perturbation, has to be small against the inner atomic forces. Thus, laser intensities exceeding $10^{12}\,\mathrm{W/cm^2}$ and molecular interactions induced thereby are often referred to as "nonperturbative" (Baumert and Gerber 1997), while typical intensities for single-photon ionization (SPI) and REMPI applications are below $10^8\,\mathrm{W/cm^2}$.
Inserting in the TDSE (Eq. (1.18)) yields
$\sum_j \left(i\hbar\frac{d}{dt}c_j(t) + E_j c_j(t)\right)e^{-iE_jt/\hbar}|\Psi_j\rangle = \sum_k \left(E_k + H_{\text{int}}(t)\right)c_k(t)\,e^{-iE_kt/\hbar}|\Psi_k\rangle$ (1.22)
The "ket" vector $|\Psi\rangle$ represents the state that is associated with the wavefunction $\Psi(\mathbf{r}, t)$. With the corresponding "bra" vector $\langle\Psi|$, which corresponds to the complex conjugated wavefunction $\Psi^*(\mathbf{r}, t)$ in coordinate space, with the inner product $\langle\Psi|\Psi\rangle = \int\Psi^*\Psi\,d\mathbf{r}$, and with the operator $H$ acting on $|\Psi\rangle$, we can express the expectation value of the observable (here energy) represented by operator $H$ in the state $|\Psi\rangle$ by $\langle\Psi|H|\Psi\rangle$.
Considering the orthogonality of the eigenstates, the result is a set of coupled equations for the time evolution of the coefficients $c_j(t)$ with the transition frequency $\omega_{jk} = (E_j - E_k)/\hbar$:
$\frac{d}{dt}c_j(t) = \frac{1}{i\hbar}\sum_k \underbrace{\langle\Psi_j|H_{\text{int}}(t)|\Psi_k\rangle}_{H_{\text{int}}^{jk}(t)}\,c_k(t)\,e^{i\omega_{jk}t}$ (1.23)
Assuming a two-level system with initial state $|\Psi_a\rangle$ and the interaction Hamiltonian (1.20), the coefficient $c_b(t)$ for the state $|\Psi_b\rangle$ can be expressed in the following form:
$c_b(t) = \frac{1}{i\hbar}\int_0^t H_{\text{int}}^{ba}(t')\,e^{i\omega_{ba}t'}\,dt' = -\frac{e}{m}\int_0^t \langle\Psi_b|\mathbf{A}\cdot\nabla|\Psi_a\rangle\,e^{i\omega_{ba}t'}\,dt'$ (1.24)
Applying the vector potential $\mathbf{A}$ for the electromagnetic field representing the classic description of light (Section 1.2),
$\mathbf{A}(\mathbf{r}, t) = \tilde{A}\boldsymbol{\varepsilon}\cos(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta) = \frac{1}{2}\tilde{A}\boldsymbol{\varepsilon}\left[e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)} + e^{-i(\mathbf{k}\cdot\mathbf{r} - \omega t + \delta)}\right]$ (1.25)
yields the time-dependent amplitude of state $|\Psi_b\rangle$ in first-order perturbation theory:
$c_b(t) = -\frac{e}{2m}\tilde{A}\Big[e^{i\delta}\underbrace{\langle\Psi_b|e^{i\mathbf{k}\cdot\mathbf{r}}\,\boldsymbol{\varepsilon}\cdot\nabla|\Psi_a\rangle}_{M_{ba}(\omega)}\int_0^t e^{i(\omega_{ba}-\omega)t'}\,dt' + e^{-i\delta}\langle\Psi_b|e^{-i\mathbf{k}\cdot\mathbf{r}}\,\boldsymbol{\varepsilon}\cdot\nabla|\Psi_a\rangle\int_0^t e^{i(\omega_{ba}+\omega)t'}\,dt'\Big]$ (1.26)
The first integral describes the absorption of a photon. Because the complex e-function is periodic with mean value zero for $\omega_{ba} \neq \omega$, it contributes only for $\omega_{ba} = \omega \Rightarrow E_b = E_a + \hbar\omega$, while the second integral corresponds to photon emission if $\omega_{ba} = -\omega \Rightarrow E_b = E_a - \hbar\omega$.
1.3.3 Absorption
Figure 1.3 Behavior of the function $F(\omega, t)$ (Eq. (1.28)) that determines the time evolution of the transition when the field and the system are in resonance. If the light frequency $\omega$ equals the transition frequency $\omega_{ba}$ of the states $|\Psi_a\rangle$ and $|\Psi_b\rangle$, the function peaks increasingly and transforms into the $\delta$-function in the long-time limit.
The matrix element associated with the perturbation, $M_{ba}(\omega) = \langle\Psi_b|e^{i\mathbf{k}\cdot\mathbf{r}}\,\boldsymbol{\varepsilon}\cdot\nabla|\Psi_a\rangle$ in Eq. (1.26), connects the initial state with the final state and thus determines the system's interaction strength with the light. Before its further evaluation, we
derive corresponding transition rates and cross sections. During absorption, the occupation probability $c_b(t)$ of the state $|\Psi_b\rangle$ increases with time, which can be expressed through integration of the first term in Eq. (1.26):
$|c_b(t)|^2 = \left|-\frac{e}{2m}\tilde{A}\,e^{i\delta}M_{ba}(\omega)\,\frac{e^{i(\omega_{ba}-\omega)t} - 1}{i(\omega_{ba} - \omega)}\right|^2 = \frac{1}{2}\frac{e^2}{m^2}\tilde{A}^2\,|M_{ba}(\omega)|^2\,F(t, \tilde{\omega})$ (1.27)
with
$F(t, \tilde{\omega}) = \frac{1 - \cos(\tilde{\omega}t)}{\tilde{\omega}^2}$ (1.28)
and the frequency offset $\tilde{\omega} = \omega - \omega_{ba}$. As illustrated in Figure 1.3, the transition probability increases sharply with time according to function (1.28) when the external field and the system are in resonance, hence $\omega = \omega_{ba}$.
Equations (1.27) and (1.28) describe the evolution of the occupation probability of state $|\Psi_b\rangle$ with time and thus characterize the rate with which transitions appear. During excitation, the states $|\Psi_a\rangle$ and $|\Psi_b\rangle$ form a coherent superposition and the associated dipole oscillates with the transition frequency.
For timescales longer than a cycle ($t \gg 2\pi/|\omega_{ba}|$), Eq. (1.28) changes to a delta function, $F(t, \tilde{\omega}) \Rightarrow \pi t\,\delta(\omega - \omega_{ba})$ (note: $\delta(\tilde{\omega}) \to \infty$ for $\tilde{\omega} \to 0$). Furthermore, we express the amplitude $\tilde{A}$ of the vector potential by the light intensity $I$ (compare Eq. (1.15)),
$\tilde{A}^2 = \frac{2I}{\epsilon_0 c\,\omega^2}$ (1.29)
and find the frequency-dependent absorption rate
$W_{ba}(\omega) = \frac{\pi e^2 I}{\epsilon_0 c\,m^2\omega^2}\,|M_{ba}(\omega)|^2\,\delta(\omega - \omega_{ba})$ (1.30)
So far, we assumed perfectly monochromatic light corresponding to an infinite plane wave. However, the intensity of realistic light has components covering at least a narrow band over a frequency range $I(\omega) = dI/d\omega$; thus, the total absorption rate has to be integrated over the entire spectrum
$W_{ba} = \int W_{ba}(\omega)\,I(\omega)\,d\omega$ (1.31)
In the long-time limit, only the value at $\omega_{ba}$ contributes to $W_{ba}$ by nature of the delta function
$W_{ba} = \frac{\pi e^2\,I(\omega_{ba})}{\epsilon_0 c\,m^2\omega_{ba}^2}\,|M_{ba}(\omega_{ba})|^2$ (1.32)
To express the associated cross section, we use Eq. (1.17),
$\hbar\omega_{ba}W_{ba} = I(\omega_{ba})\,\sigma_{ba}$ (1.33)
which yields the absorption cross section for the spectral intensity associated with the considered transition. Note that the unit of the cross section is therefore area times frequency.
$\sigma_{ba} = \frac{\pi\hbar e^2}{\epsilon_0 c\,m^2\omega_{ba}}\,|M_{ba}(\omega_{ba})|^2$ (1.34)
Of note, the integration (1.31) assumes a sum of incoherent spectral components. The treatment of absorption from coherent and ultrashort laser pulses has to consider the explicit pulse waveform. Molecules typically have numerous states in a small energy interval that contribute to the absorption. Hence, the transition probability into this band is the sum of all transition probabilities matching the frequency of the incident light. The number of states in an energy range between $E_b$ and $E_b + dE$ can be expressed as $\rho(E_b)\,dE$, where $\rho(E_b)$ is called the density of states. Assuming that the matrix elements for transitions into such a band of states are comparable, Fermi's golden rule can be derived via spectral integration of the transition rates (not shown):
$W_{ba} = \frac{2\pi}{\hbar}\,\rho(E_b)\,|M_{ba}|^2$ (1.35)
Thus, to calculate the transition rate into a band, multiply the square of the matrix element by the density of states of the involved bands.
1.3.4 Dipole Approximation So far, the matrix elements are dependent on the wavelength and direction of the light wave (photon) via the eikr term. For interactions with visible, (V)UV, and IR (but not X-ray) radiation, the wavelength is much larger than the atomic length scale; hence, the system “feels” an oscillating dipole field. As a consequence, the wave vector dependence of the vector potential can be neglected, according to 10 1 Fundamentals and Mechanisms of Vacuum Photoionization
eikr ≈ 1for휆 → ∞; k → 0. In this so-called dipole approximation,thevector potential A(r, t) → A(t) describing the light field becomes spatially homogenous and, as a consequence, the magnetic field B = 𝛁 × A vanishes. Descriptively, the
electron velocity ve is low enough to neglect both the magnetic Lorentz force e(ve × B) and the relativistic effects that arise if the electron is driven at very high intensities (> 1016 W∕cm2), predominantly for long . The matrix D elements Mba can now expressed by the dipole matrix element in the so-called length form via the transition dipole moment Dba =−erba containing spatial coordinates (x̂, ŷ, ẑ) of the position operator r m휔 m휔 MD = ba 휀 ⋅ (−e⟨Ψ |r|Ψ ⟩)= ba 휀 ⋅ D (1.36) ba ℏe b a ℏe ba Thus, the matrix element of the interaction Hamiltonian, representing the expectation value of its energy spectrum, is now related to the much more
descriptive dipole matrix element D_ba, which represents the charge distribution within the wavefunction.
The dipole matrix element D_ba (and also M_ba) determines the interaction strength between light and the atom or molecule. Its scalar part describes the change of charge distribution during the transition from |Ψ_a⟩ to |Ψ_b⟩ that determines the transition probability. The vector part demands projection of the light field onto the dipole moment, i.e. it defines the required light polarization.
The corresponding absorption rate can be derived as

W^D_{ba} = \frac{\pi}{3 \epsilon_0 c \hbar^2} I(\omega_{ba}) |D_{ba}|^2 = \frac{\pi e^2}{3 \epsilon_0 c \hbar^2} I(\omega_{ba}) |r_{ba}|^2   (1.37)

If the dipole matrix element is zero, the transition is so-called dipole-forbidden. However, such transitions are often observed because they may be allowed as (weaker) magnetic dipole or electric quadrupole transitions. Commonly used in spectroscopy to describe the absorption strength is the dimensionless oscillator
strength f_ij of a transition between states i and j:

f_{ij} = \frac{2 m_e \omega_{ij}}{3 \hbar} |r_{ij}|^2   (1.38)

Its values are between 0 and 1. Typical values are shown in Table 1.2.
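As an illustration of Eq. (1.38) (a sketch with assumed numbers, not values taken from Table 1.2), the oscillator strength of a hypothetical 5 eV transition with a transition dipole length of about 1 Å can be estimated as follows.

```python
from scipy.constants import hbar, m_e, electron_volt

# Assumed, illustrative transition parameters
E_ij = 5.0 * electron_volt      # transition energy in J (roughly 248 nm)
omega_ij = E_ij / hbar          # transition angular frequency in rad/s
r_ij = 1.0e-10                  # |r_ij| in m (about one angstrom)

# Eq. (1.38): f_ij = 2 m_e omega_ij |r_ij|^2 / (3 hbar), dimensionless
f_ij = 2.0 * m_e * omega_ij * r_ij**2 / (3.0 * hbar)
print(f"f_ij = {f_ij:.2f}")     # about 0.4, i.e. a fairly strong dipole-allowed transition
```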
An interesting application of transition rates is related to the famous Einstein coefficients. To this end, the Boltzmann distribution is applied to the level population of an ensemble of atoms in equilibrium and the Planck distribution to the photon field. An interesting finding in the context of VUV sources is that spontaneous emission increases relative to stimulated emission as the cube of the light frequency. Hence, population inversion, which is a basis of laser sources, is difficult to generate and maintain in highly excited systems. Instead of cooperating in a stimulated emission process, the excited populations randomly lose energy via spontaneous emission.
1.3.5 Selection Rules

To calculate absorption rates, the corresponding matrix elements have to be evaluated by spatial integration over the corresponding wave functions, ⟨Ψ_b|r|Ψ_a⟩ = ∫ Ψ_b* r Ψ_a dr in Eq. (1.36). If the integral vanishes, a transition does not occur (with the exception of higher order transitions), which is called dipole-forbidden. In particular, this is the case if the function Ψ_b* r Ψ_a is antisymmetric, and thus its integral over space yields zero. This can often be determined by analysis of the wavefunction symmetry without explicit calculation of the integral. For example, |s⟩ → |s⟩ dipole transitions are not allowed for the hydrogen atom. The symmetry behavior is reflected by the parity selection rule (ref. third column in Table 1.1). Using quantum numbers to term the states, further selection rules for dipole transitions can be derived by evaluation of zero and nonzero matrix elements, such as Δl = ±1 and Δm = 0, ±1, where l is the angular momentum quantum number and m is the magnetic quantum number in a one-electron system. Descriptively, these rules reflect conservation of angular momentum because the spin of the absorbed photon contributes to the system's angular momentum L. The z-component of L is associated with the electrons' magnetic moment, which couples to the photon spin, thus yielding Δm = 0 in the case of linear polarization and Δm = ±1 for circularly polarized light. For multielectron systems, the total angular and orbital momentum as well as the total spin are evaluated, and coupling schemes of angular momentum sources are considered. In realistic systems, especially molecules, several dipole-forbidden transitions can nevertheless be observed. For example, the dipole-forbidden transitions may be allowed as multipole transitions. Typically, the rate drops by three orders of magnitude from one multipole to the next, see Table 1.2.
1.3.6 Electronic Line Width and Lifetime

So far, we assumed that the states have sharp eigenenergies and can be described via time-dependent wavefunctions of the form ψ e^{−iEt/ℏ} (ref. Figure 1.4a,d). Suppose a state that is exponentially decaying in amplitude as the system changes to another state (Figure 1.4b). The decaying function corresponds
Table 1.1 Dipole selection rules for electronic transitions in a hydrogen-like atom. J = L + S is the total angular momentum, L is the total orbital momentum quantum number, S is the total spin quantum number, M_J is the total magnetic quantum number, and π is the parity.

Rigorous a): ΔJ = 0, ±1 (J = 0 ⇎ 0); ΔM_J = 0, ±1; π_b = −π_a; Δl = ±1
LS coupling b): if ΔS = 0: ΔL = 0, ±1 (L = 0 ⇎ 0)
Intermediate coupling c): if ΔS = ±1: ΔL = 0, ±1, ±2

a) Rigorous for one-electron systems. b) Small atoms with low LS-coupling. c) Heavier atoms with transitions between several multiplet states.
Figure 1.4 Schematic wavefunctions (top) and their spectra (bottom). (a, d) Stationary state, Re[e^{−iEt/ℏ}] versus time and its sharp line in the frequency (energy) domain. (b, e) Decaying state, Re[e^{−iEt/ℏ−t/2τ}], and the resulting Lorentzian line profile. (c, f) Collision-induced phase distortion and its resulting spectrum (fast Fourier transformed).
to a superposition of oscillations whose frequencies can be Fourier analyzed according to

e^{-iEt/\hbar - t/\tau} = \int g(E') e^{-iE't/\hbar} dE' \quad \text{with} \quad g(E') = \frac{1}{\pi} \frac{\hbar/\tau}{(E - E')^2 + (\hbar/\tau)^2}   (1.39)

where τ is the time constant of the decay. Therefore, the decaying dipole oscillation is associated with a finite energy range. The width at half height of the Lorentzian function g(E′) is ℏ/τ and called the natural line width (Figure 1.4e). Considering that the state to which the transition appears may also have a finite lifetime τ_b, the line width is given by

\delta E = \hbar \left( \frac{1}{\tau_b} + \frac{1}{\tau_a} \right)   (1.40)

Hence, the shorter the state lifetime, the less precise its energy and vice versa. This concept is particularly important for REMPI (see Chapter 2), where ionization rates depend on the energy match between the photon and possible intermediate states as well as the photon density. It further gives rise to the concept of virtual states of uncertain energy, which may be employed at high photon densities. Measured line widths are typically much larger, which can be attributed to other origins of broadening. First, collisions with other atoms may lead to (radiative or nonradiative) transitions and randomize the phase of the emitted radiation (ref. Fig. 1.4c,f). Both effects reduce the effective lifetime of the state and lead to the (pressure-dependent) collision broadening of the Lorentzian line profile. Second, the relative motion of the atoms results in a frequency shift. Thus, the so-called Doppler broadening increases with temperature and decreases with atomic mass and produces a Gaussian profile, which convolves with the Lorentzian profile.
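The link between an exponentially decaying dipole oscillation and a Lorentzian line of width ℏ/τ can be checked numerically. The sketch below (an illustration, not from the text) works in assumed units with ℏ = 1 so that energy and angular frequency coincide; it Fourier transforms a damped oscillation with the amplitude decay drawn in Figure 1.4b and measures the full width at half maximum of the resulting power spectrum, which should come out close to 1/τ. Note that the exact numerical factor depends on whether τ denotes the amplitude or the intensity decay time.

```python
import numpy as np

# Assumed units with hbar = 1, so energy and angular frequency coincide.
tau = 5.0                  # lifetime (decay time constant of the intensity)
omega0 = 20.0              # transition frequency
t = np.linspace(0.0, 400.0, 2**16)
dt = t[1] - t[0]

# Decaying oscillation as in Figure 1.4b: amplitude ~ exp(-t / (2 tau))
signal = np.exp(-1j * omega0 * t) * np.exp(-t / (2.0 * tau))

# Power spectrum via FFT and its full width at half maximum
spectrum = np.abs(np.fft.fft(signal)) ** 2
omega = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)
order = np.argsort(omega)
omega, spectrum = omega[order], spectrum[order]
above = omega[spectrum >= spectrum.max() / 2.0]
fwhm = above.max() - above.min()

print(f"measured FWHM = {fwhm:.3f}, expected 1/tau = {1.0 / tau:.3f}")
```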
1.3.7 Electronic Transitions of Molecules

So far, we treated light absorption using a simple model system undergoing electronic transitions. However, real electronic spectra of molecules are highly complex. This cannot be comprehended by single-electron transitions in a static potential model, as the nuclear motion is completely neglected. Indeed, any electronic transition changes a real system's charge distribution, inducing vibration of the nuclei, which, in turn, changes the rotational state. Moreover, full understanding of the molecular response implies consideration of all possible sources of angular momentum and their coupling scheme. Note that for photoionization, REMPI cross sections are a consequence of the actual electronic structure. In direct SPI, the outgoing states are continuum states, which are less restricted. Corresponding to atoms, selection rules arise from the conservation of total angular momentum and the total parity changes in dipole transitions, see Table 1.2. Initial estimates of cross sections can be derived from typical oscillator strength values and respective transition probabilities, as summarized in Table 1.2 and indicated by the exemplary absorption spectrum of benzene (Figure 1.5). Several schemes allow for (at least qualitative) indications of the molecules' electronic spectra and transition probabilities. Hund's rules describe how the four sources of angular momentum (electron orbital angular momenta L, rotation of the nuclear framework O, and spin of the electrons S and nucleus I) are coupled. Any electronic transition changes the charge distribution. The nuclei readjust to these forces, causing vibration. Consequently, electronic transitions in molecules are accompanied by vibrational transitions, giving rise to the term vibronic transitions, which have many lines in the absorption spectra. Of note, nuclear rearrangement is much slower than electronic transitions, see also Figure 1.1. This is the basis for the Franck–Condon principle that makes statements on the most probable vibronic (electronic + induced vibrational) transitions, see Figure 1.6. Electronic excitation is virtually instantaneous before the nuclei can readjust to the distance r_B and must therefore be drawn as a "vertical" transition (blue arrow). From the numerous vibrational states of the upper electronic level, the one with the greatest overlap with the original state's vibrational wavefunction is occupied.
Table 1.2 Typical oscillator strengths for transitions according to the selection rules.

Electric dipole allowed: f ≲ 1
Parity forbidden: f ≈ 10⁻¹
Magnetic dipole allowed: f ≈ 10⁻⁵
Electric quadrupole allowed: f ≈ 10⁻⁵
Spin forbidden: f ≈ 10⁻⁵
[Figure 1.5: log ε plotted against wavelength (1,500–3,500 Å) for benzene, with bands labelled electric-dipole allowed, symmetry-/electric-dipole forbidden singlet π* ← π, spin-forbidden triplet π* ← π, Rydberg series, and photodissociation.]
Figure 1.5 Absorption spectrum of benzene, illustrating the dipole transition probabilities according to selection rules. Source: Modified from Barrow (1962).
Figure 1.6 Electronic transitions are faster than nuclear motion. The nuclei rearrange after electronic excitation (vertical blue arrow) to the new distance r_B. Transition probability is highest between the vibration states of the greatest waveform overlap, determining the possible final states (Franck–Condon principle). For examples of Franck–Condon controlled vibronic spectra, see the analysis of the biphenylene REMPI spectrum and other molecules in Chapter 2. [The figure plots energy against internuclear distance for electronic states A and B, each with its vibrational levels (0–3), and marks the equilibrium distances r_A and r_B.]
Vibronic transitions in turn induce rotational transitions contributing many more absorption lines, according to the selection rules. Comparable to ice skaters, who extend their arms during a pirouette to slow down their spin, the vibration affects molecular rotation. Complexity further increases for polyatomic molecules. For small molecules, application of the selection rules must consider their whole symmetry, as the electronic excitation affects the complete structure. Detailed electronic structures are calculated with methods of computational chemistry that are based on several approximations, e.g. Hartree-Fock (ab initio), post-Hartree-Fock (considers electron correlations), or density functional theory. In many applications, a particular group of atoms in the molecule is considered because they show a characteristic absorption feature. These subgroups, called chromophores, may occur in different molecules contributing absorption bands of the same wavelength. Basic considerations on a molecule's absorption behavior can often be reduced to the presence of such chromophores and perturbations from other groups in the molecule. A prominent example is the B-band of benzene and derivatives originating from π → π* transitions. Although this benzenoid band is forbidden by symmetry for pure electronic states, it is allowed with respect to the overall symmetry of the vibronic states. In the context of analytical applications, it contributes the intermediate REMPI states giving aromatic ring structures high REMPI cross sections for, e.g., the fourth harmonic of the Nd:YAG laser (266 nm). Comprehensive collections of photoabsorption spectral information can be found in the literature, e.g. Berkowitz (2002). Having regard to the scope on photoionization for analytical mass spectrometry, many structural and spectroscopic details can be omitted here, and in the following, we can focus on specific aspects that have practical implications for mass spectrometry. In Chapter 2 the application of REMPI in molecular spectroscopy using tunable lasers is discussed with the help of photoabsorption and ionization spectra of suitable molecular systems.
1.3.8 Single-photon Ionization (SPI)

So far, electronic transitions between bound states were treated. Their probabilities at given photon energies and radiation intensities are formally determined by (dipole-)transition matrix elements or, more practically, by absorption cross sections (ref. Eqs. (1.33, 1.34)) (Berkowitz 2002). Corresponding absorption spectra are often characterized using oscillator strength values (Eq. (1.38)). If an electron is released by the absorption process, the final state has a free electron, allowing continuous values for its energy E_kin and momentum p_e (continuum state). Consequently, SPI is less constrained by structural and electronic properties, and its cross sections show a rather narrow distribution, enabling the use of SPI for universal ionization of molecules (threshold selectivity, E_hν ≥ IE), see also Chapter 3 for SPI-MS applications. SPI and possible subsequent fragmentation of a molecule AB in the ground vibronic state can be formally written as

AB + hν → [AB⁺]′ + e⁻   (1.41)
[AB⁺]′ → AB⁺ and/or A + B⁺   (1.42)

with the prime indicating the excited state. Again, the timescale of the electron ejection process (Eq. (1.41)) is much shorter compared to the relaxation described by Eq. (1.42). Consequently, the electron energy E_kin can be measured to probe the energy E_B of the excited state of the molecular ion [AB⁺]′ via photoelectron spectroscopy (PES) according to E_kin = hν − E_B. Detailed ionization and fragmentation studies are facilitated by combining MS with PES, ideally in
a single-event coincidence setup, called photoelectron photoion coincidence
(PEPICO) spectroscopy (Baer 2000). The special case of E_kin ≃ 0 eV ("threshold PEPICO") allows the precise determination of the energy E_B of the molecular ion and has been used to work out the thermochemistry of many gas-phase species (Sztáray et al. 2010). Allowing multiple charging (n > 1) and with the focus on mass spectrometry, Eqs. (1.41) and (1.42) may also be summarized as

AB + hν → [AB^{n+}]′ + n e⁻   (1.43)
  ↳ AB^{n+}   (1.44)
  ↳ A^{p+} + B^{q+}   (1.45)
  ↳ A⁺ + B⁻  (n = 0)   (1.46)

For chemical analyses using single-photon ionization mass spectrometry (SPI-MS), the case of Eq. (1.44) with n = 1 is typically desired, while Eq. (1.45) refers to the production of fragments of charge p+ and q+ and Eq. (1.46) refers to ion-pair production. Ionization energies (IEs) of organic molecules reside in the
range of 8–12 eV. Because the values of atmospheric gases such as N2, O2, H2O, etc., are higher, they typically do not interfere in SPI-MS analyses. Fragmentation increases with photon energy; thus, the desired photon energy is about 10–15 eV, a region where the ionization efficiency varies strongly and where compact, robust, and intense light sources are rare. Depending on the analyte molecules, the gap between the IE and the appearance energies of one or more fragmentation pathways differs widely, and molecules may not only be ionized but also undergo direct or metastable fragmentation, see Figure 1.7 for an example.
[Figure 1.7: (a) 70 eV electron impact ionization (EI) mass spectrum of mescaline (m/z 80–240), showing strong fragment peaks (e.g. m/z 167 and 182) besides the molecular ion at m/z 211; (b) single-photon ionization (SPI) mass spectrum at 8.8 eV, dominated by the molecular ion at m/z 211 (IE = 8.0 eV); (c) VUV spectra of the fragment (m/z 182) and molecule (m/z 211) ion signals plotted against wavelength (110–170 nm, i.e. roughly 7.5–11.5 eV).]
Figure 1.7 Ionization and fragmentation behavior of mescaline. (a) Electron impact ionization with 70 eV electron energy leads to substantial fragmentation, in contrast to (b) single-photon ionization (SPI) with 8.8 eV synchrotron radiation. (c) Fragment-free SPI is possible for a photon energy around 8–9 eV, limiting the choice of VUV light sources for SPI. Source: Kleeblatt et al. (2013). Modified with permission of SAGE.
This behavior is typical for several relevant compound classes, e.g. for many explosives, drugs of abuse and pharmaceutically active compounds, as well as for the majority of metabolites. For many other compounds, such as alkanes or alkylated aromatic compounds, however, a totally soft ionization by SPI is achieved. Particularly high stability is observed for aromatic compounds because the charge is delocalized throughout the whole ring (resonance stabilization) (Edirisinghe et al. 2006; Gunzer et al. 2019). The IE values can be obtained with relatively good accuracy by quantum chemical calculations: According to Koopmans' theorem, the vertical IE equals the negative energy of the highest occupied molecular orbital (HOMO). The underlying approximation neglects changes of the energy levels through rearrangements upon removal of the electron from the HOMO. Adiabatic values for the IE can be obtained from the difference of the total energies of the geometry-optimized molecular ground state and the respective ionic state (Gross 2011). In general, smaller homologues of a molecular substance class exhibit higher ionization energies than their larger homologues. For organic compounds, the IE can also be influenced by substituents, which can either withdraw electron density from the HOMO or push electron density into the HOMO via mesomeric or inductive effects. The highest molecular IE values are observed for small inorganic compounds such as HF (16.03 eV, highest molecular IE), F2 (15.70 eV), N2 (15.58 eV), H2 (15.43 eV), CO2 (13.78 eV), H2O (12.62 eV), or O2 (12.07 eV). Methane has the highest IE for a hydrocarbon (12.61 eV). Ionization energies as a function of molecular weight (m/z) for different organic compound classes are shown in Figure 1.8. Generally, the values of the IE decrease with increasing molecular size within a homologous series. For very large molecules, they hyperbolically approach a value close to the work function of the respective bulk material, see the intersections with the y-axis in Figure 1.8(b). Obviously, there is no apparent size limit in photoionization, as demonstrated in studies on large molecules (Akhmetov et al. 2010; Schätti et al. 2017). The IEs of condensed aromatic compounds are generally lower than the values of aliphatic compounds, as the ionization occurs from the HOMO of the aromatic π-electron system (delocalized). Substitution can induce different effects on the IE, as illustrated in Figure 1.8: substituents that withdraw electron density from the π-electron system, such as the halogens fluorine and chlorine, tend to increase the IE upon successive substitution of H-atoms at the aromatic moieties. On the other hand, substitution by electron-donating groups, such as methyl groups, causes a decrease of the IE with an increasing degree of substitution. The second fundamental parameter for SPI is the single-photon ionization cross section σ_SPI. From the photon flux φ through the sample volume V (cm³), the concentration C of molecules (n cm⁻³) and the absolute σ_SPI value, the ionization rate R_SPI (n/s) can be calculated according to

R_{\mathrm{SPI}} = \sigma_{\mathrm{SPI}} \, \varphi \, V \, C   (1.47)

Experimental values of σ_SPI for several homologous or substituted compounds as a function of molecular weight are depicted in Figure 1.9. The measurements have been performed using a gas chromatograph coupled to a mass spectrometer equipped with an electron beam pumped rare gas excimer light source for SPI (Eschner and Zimmermann 2011). For all compounds, a slight increase of the σ_SPI values with rising molecular weight is observed.
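As a quick numerical sketch of Eq. (1.47) with assumed, order-of-magnitude values (not data from Figure 1.9), the expected ion production rate can be estimated as follows.

```python
# Assumed, illustrative values for Eq. (1.47), using the units given in the text
# (cm^2, photons cm^-2 s^-1, cm^3, molecules cm^-3).
sigma_SPI = 30e-18     # cross section: 30 Mb = 30e-18 cm^2
phi = 1e14             # photon flux through the sample volume, photons cm^-2 s^-1
V = 1e-3               # probed sample volume, cm^3
C = 1e10               # analyte number density, molecules cm^-3

# Eq. (1.47): R_SPI = sigma_SPI * phi * V * C, in ions per second
R_SPI = sigma_SPI * phi * V * C
print(f"R_SPI = {R_SPI:.1e} ions/s")   # 3e4 ions/s for these assumed numbers
```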
[Figure 1.8: ionization energy (eV, roughly 6–13 eV) of linear alkanes, 1-alkenes, 2-alkanones, aldehydes, linearly/alternately/annularly condensed PAHs, methylated benzenes, fluorinated benzenes, chlorinated benzenes, and chlorinated dibenzo-p-dioxins plotted against (a) m/z (0–500) and (b) the inverse number of atoms.]

Figure 1.8 Ionization energies plotted against (a) the molecular weight for different homologous organic compounds and (substituted) aromatic hydrocarbons (homologous series for alkanes, 1-alkenes, aldehydes and alkanones; increasing substitution degree from 1–6 for benzene derivatives (for alkylated benzenes 1–5, from toluene to pentamethylbenzene) and 1–8 for chlorinated dibenzo-p-dioxins; differently condensed rings for polycyclic aromatic hydrocarbons (PAH)). (b) The molecular IE values plotted against the inverse number of atoms illustrate their approach toward the material's work function (conducting materials) or ionization energy (insulators) for large atom numbers. Source: Data from the NIST Chemistry WebBook (National Institute of Standards and Technology).

The similarities of the SPI cross sections for members within a particular compound class can be used for approximate (semi-)quantification. Note that σ_SPI also depends on the spectral shape of the respective light source. For compounds with an IE within the emission band, only the fraction of photons exceeding the IE can contribute to ionization. This results in a relative suppression of the observed σ_SPI compared to the compounds with lower IE. The effect can be noticed for the n-alkanes as depicted in Figure 1.9(b). Here, the slope of σ_SPI of the n-alkanes (red) is lower for the smaller homologues (1/n ⪆ 0.02), exhibiting IE values within the lamp's emission band.
[Figure 1.9: photoionization cross sections (0–70 Mb) of linear alkanes, 2-alkanones, methylated benzenes, chlorinated benzenes, and PAHs plotted against (a) m/z (0–300) and (b) the inverse number of atoms (0–0.15).]
Figure 1.9 Single-photon ionization cross sections σ_SPI at 9.8 eV (0.4 eV FWHM) plotted against (a) the molecular weight and (b) the inverse atom number of different compound classes (homologous series for alkanes and alkanones; increasing substitution degree from 1–6 for benzene derivatives; linearly condensed rings for polycyclic aromatic hydrocarbons (PAH): naphthalene, anthracene, tetracene). The flattening of the curve toward smaller linear alkanes is due to their relatively high IEs, in particular for the lower molecular weight species (see Figure 1.8). Therefore, only a fraction of the VUV photons of the used lamp can be used for the ionization. Source: Data from Eschner and Zimmermann (2011).

If the IE is plotted against the inverse number of atoms (Figure 1.8b), the IE limit for very large molecules can be estimated. For infinitely large molecules this IE converges to the respective solid-state material property. In the case of the PAHs this corresponds to graphite, which is a conducting solid with a work function of 4.7 eV (Rut'kov 2020), or an individual graphene layer with a work function of 4.3 eV (Rut'kov 2020). Polypropylene (PP), in contrast, is a model for a fully saturated, insulating solid-state polymer, which exhibits a solid-state ionisation energy of 8.65 eV (Rajopadhye 1986). The extended lines for the linearly condensed PAHs and the fully saturated linear alkanes in Figure 1.8b hit the y-axis (i.e. an infinitely large molecule) at values of 4.4 eV (PAH) and 8.8 eV (alkanes), showing very good agreement between the estimate and the experimental data.
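This kind of extrapolation can be reproduced in a few lines. The sketch below is illustrative only: the ionization energies are approximate literature values quoted from memory, and because only a handful of small homologues are fitted, the intercept at 1/N = 0 comes out somewhat above the ≈4.4 eV obtained in the text from a broader data set.

```python
import numpy as np

# Approximate literature ionization energies (eV) of small PAHs together with
# the total number of atoms (C + H) per molecule; illustrative values only.
molecules = {
    "benzene (C6H6)":      (12, 9.24),
    "naphthalene (C10H8)": (18, 8.14),
    "anthracene (C14H10)": (24, 7.44),
    "tetracene (C18H12)":  (30, 6.97),
    "pentacene (C22H14)":  (36, 6.61),
}

inv_n = np.array([1.0 / n for n, _ in molecules.values()])
ie = np.array([e for _, e in molecules.values()])

# Linear fit IE = slope * (1/N) + intercept; the intercept estimates the
# limit for an infinitely large molecule (1/N -> 0).
slope, intercept = np.polyfit(inv_n, ie, 1)
print(f"slope = {slope:.1f} eV, intercept (1/N -> 0) = {intercept:.2f} eV")
```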
With respect to the photoionization of very large molecules, it is notable that there was an intense discussion about the question of whether larger molecules can indeed be efficiently photoionized (Schlag 1992). It was suggested that the SPI efficiency of larger molecules decreases with molecular size due to the increasing density of states, which supports the rapid dissipation of the energy supplied by photon absorption into internal charge transfer states. The dissipated energy then cannot be recombined quickly enough for a timely (auto-)ionization. However, already in 1995 single-photon ionization of fullerenes and carbon clusters up to 2000 m/z was reported (Becker and Wu 1995). Recently, a femtosecond laser desorption SPI post-ionization experiment demonstrated that efficient SPI of complex polypeptides with more than 20,000 m/z is possible (Schätti et al. 2018). Generally, cross sections for SPI are lower than the values of the standard ionization via electron impact (EI). In contrast to the rather weak dipole interaction with the light field in SPI, the electron's de Broglie wavelength in EI (70 eV, ≈ 1.5 Å) matches the typical bond lengths in organic molecules. Thus, the energy transfer to the analyte and the ionization efficiency are maximized. Also, the cross-sectional variation of about a factor of 10 between different compound classes is rather low for the universal ionization method EI (Adam and Zimmermann 2007). However, despite the lower cross sections and, correspondingly, sensitivity, SPI features ionization with very low fragmentation and produces no interfering signals from the carrier gas (higher IE), rendering it an ideal method for complex organic mixtures. Such practical considerations and the unique features of (resonance-enhanced) multiphoton ionization will be discussed in Chapters 2, 4 and 11 of this book.
Adam, T. and Zimmermann, R. (2007). Determination of single photon ionization cross sections for quantitative analysis of complex organic mixtures. Anal. Bioanal. Chem. 389 (6): 1941–1951.
Akhmetov, A., Moore, J.F., Gasper, G.L. et al. (2010). Laser desorption postionization for imaging MS of biological material. J. Mass Spectrom. 45 (2): 137–145.
Baer, T. (2000). Ion dissociation dynamics and thermochemistry by photoelectron photoion coincidence (PEPICO) spectroscopy. Int. J. Mass Spectrom. 200 (100): 443–457.
Barrow, G.M. (1962). Introduction to Molecular Spectroscopy. McGraw Hill.
Baumert, T. and Gerber, G. (1997). Molecules in intense femtosecond laser fields. Phys. Scr. T72: 53–68.
Becker, C.H. and Wu, K.J. (1995). On the photoionization of large molecules. J. Am. Soc. Mass. Spectrom. 6: 883–888.
Berkowitz, J. (2002). Atomic and Molecular Photoabsorption. Absolute Total Cross Sections. London: Academic Press.
Edirisinghe, P.D., Moore, J.F., Calaway, W.F. et al. (2006). Vacuum ultraviolet postionization of aromatic groups covalently bound to peptides. Anal. Chem. 78 (16): 5876–5883.
Eschner, M. and Zimmermann, R. (2011). Determination of photoionization cross-sections of different organic molecules using gas chromatography coupled to single-photon ionization (SPI) time-of-flight mass spectrometry (TOF-MS) with an electron beam pumped rare gas excimer light source (EBEL): influence of molecular structure and analytical implications. Appl. Spectrosc. 65: 806–816.
Gross, J.H. (2011). Mass Spectrometry. Springer.
Gunzer, F., Krüger, S., and Grotemeyer, J.H.C. (2019). Photoionization and photofragmentation in mass spectrometry with visible and UV lasers. Mass Spectrom. Rev. 38 (2): 202–217.
Kleeblatt, J., Ehlert, S., Hölzer, J. et al. (2013). Investigation of the photoionization properties of pharmaceutically relevant substances by resonance-enhanced multiphoton ionization spectroscopy and single-photon ionization spectroscopy using synchrotron radiation. Appl. Spectrosc. 67 (8): 860–872.
National Institute of Standards and Technology. NIST Chemistry WebBook. Standard reference database. Gaithersburg, MD. http://webbook.nist.gov/chemistry.
Rajopadhye, N.R. and Bhorarkar, S.V. (1986). Ionization potential and work function measurements of PP, PET and FEP using low-energy electron beam. J. Mat. Sci. Lett. 5: 603–605.
Rut'kov, E.V., Afanas'eva, E.Y., and Gall, N.R. (2020). Graphene and graphite work function depending on layer number on Re. Diamond Rel. Mat. 101: 107576.
Schätti, J., Rieser, P., Sezer, U., et al. (2018). Pushing the mass limit for intact launch and photoionization of large neutral biopolymers. Commun. Chem. 1: 93. https://doi.org/10.1038/s42004-018-0095-y.
Schätti, J., Sezer, U., Pedalino, S. et al. (2017). Tailoring the volatility and stability of oligopeptides. J. Mass Spectrom. 52 (8): 550–556.
Schlag, E.W., Grotemeyer, J., and Levine, R.D. (1992). Do large molecules ionize? Chem. Phys. Lett. 190: 521–527.
Sztáray, B., Bodi, A., and Baer, T. (2010). Modeling unimolecular reactions in photoelectron photoion coincidence experiments. J. Mass Spectrom. 45 (11): 1233–1245. |
2314a7e720993d0a |
Hückel method
The Hückel method or Hückel molecular orbital method (HMO), proposed by Erich Hückel in 1930, is a very simple LCAO MO method for the determination of energies of molecular orbitals of pi electrons in conjugated hydrocarbon systems, such as ethene, benzene and butadiene. [1] [2] It is the theoretical basis for Hückel's rule; the extended Hückel method developed by Roald Hoffmann is the basis of the Woodward-Hoffmann rules [3]. It was later extended to conjugated molecules such as pyridine, pyrrole and furan that contain atoms other than carbon, known in this context as heteroatoms. [4]
It is a very powerful educational tool and details appear in many chemistry textbooks.
Hückel characteristics
The method has several characteristics:
• It limits itself to conjugated hydrocarbons
• Only pi electron MO's are included because these determine the general properties of these molecules and the sigma electrons are ignored. This is referred to as sigma-pi separability.
• The method takes as inputs the LCAO MO Method, the Schrödinger equation and simplifications based on orbital symmetry considerations. Interestingly the method does not take in any physical constants.
• The method predicts how many energy levels exist for a given molecule, which levels are degenerate and it expresses the MO energies as the sum of two other energy terms called alpha, the energy of an electron in a 2p-orbital and beta, an interaction energy between two p orbitals which are still unknown but importantly have become independent of the molecule. In addition it enables calculation of charge density for each atom in the pi framework, the bond order between any two atoms and the overall molecular dipole moment.
Hückel results
The results for a few simple molecules are tabulated below:
Ethylene: E1 = α - β (LUMO), E2 = α + β (HOMO); HOMO–LUMO energy gap = -2β
Butadiene: E1 = α + 1.62β, E2 = α + 0.62β (HOMO), E3 = α - 0.62β (LUMO), E4 = α - 1.62β; HOMO–LUMO energy gap = -1.24β
Benzene: E1 = α + 2β, E2 = α + β, E3 = α + β (HOMO), E4 = α - β (LUMO), E5 = α - β, E6 = α - 2β; HOMO–LUMO energy gap = -2β
Cyclobutadiene: E1 = α + 2β, E2 = α (SOMO), E3 = α (SOMO), E4 = α - 2β; HOMO–LUMO energy gap = 0

Table 1. Hückel method results (α and β are both negative values) [5]
The theory predicts two energy levels for ethylene with its two pi electrons filling the low-energy HOMO and the high-energy LUMO remaining empty. In butadiene the 4 pi electrons occupy 2 low-energy MO's out of a total of 4, and for benzene 6 energy levels are predicted, two of them doubly degenerate.
For linear and cyclic systems (with n atoms), general solutions exist [6].
Linear: E_k = \alpha + 2\beta \cos \frac{k\pi}{(n+1)}
Cyclic: E_k = \alpha + 2\beta \cos \frac{2k\pi}{n}
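A quick way to check these closed-form expressions (a sketch, not part of the original article) is to evaluate them for butadiene (linear, n = 4) and benzene (cyclic, n = 6) and compare with Table 1; here the energies are reported as the coefficient of β relative to α, so since β is negative the largest coefficient is the lowest orbital.

```python
import numpy as np

def linear_levels(n):
    """E_k = alpha + 2*beta*cos(k*pi/(n+1)), k = 1..n, returned as coefficients of beta."""
    k = np.arange(1, n + 1)
    return 2.0 * np.cos(k * np.pi / (n + 1))

def cyclic_levels(n):
    """E_k = alpha + 2*beta*cos(2*k*pi/n), k = 0..n-1, returned as coefficients of beta."""
    k = np.arange(n)
    return 2.0 * np.cos(2.0 * k * np.pi / n)

# Butadiene (linear, n = 4): alpha +/- 1.62*beta and alpha +/- 0.62*beta
print(np.round(np.sort(linear_levels(4)), 2))   # [-1.62 -0.62  0.62  1.62]
# Benzene (cyclic, n = 6): alpha +/- 2*beta plus two degenerate pairs alpha +/- beta
print(np.round(np.sort(cyclic_levels(6)), 2))   # [-2. -1. -1.  1.  1.  2.]
```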
Many predictions have been experimentally verified:

• For linear polyenes, the predicted HOMO–LUMO gap, \Delta E = -4\beta \sin \frac{\pi}{2(n+1)}, correlates with the observed electronic (UV/VIS) transition energies,
from which a value for β can be obtained between −60 and −70 kcal/mol (−250 to −290 kJ/mol).[7]
• The predicted MO energies as stipulated by Koopmans' theorem correlate with photoelectron spectroscopy.[8]
• The Hückel delocalization energy correlates with the experimental heat of combustion. This energy is defined as the difference between the total predicted pi energy (in benzene 8β) and a hypothetical pi energy in which all ethylene units are assumed isolated, each contributing 2β (making benzene 3 x 2β = 6β); see the short numerical sketch after this list.
• Molecules with MO's paired up such that only the sign differs (for example α ± β) are called alternant hydrocarbons and have in common small molecular dipole moments. This is in contrast to non-alternant hydrocarbons such as azulene and fulvene that have large dipole moments. Hückel theory is more accurate for alternant hydrocarbons.
• For cyclobutadiene the theory predicts that the two high-energy electrons occupy a degenerate pair of MO's that are neither stabilized nor destabilized. Hence the square molecule would be a very reactive triplet diradical (the ground state is actually rectangular without degenerate orbitals). In fact, all cyclic conjugated hydrocarbons with a total of 4n pi electrons share this MO pattern, and this forms the basis of Hückel's rule.
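As a small worked example of the delocalization energy mentioned above (a sketch, not part of the original article), the benzene π energy can be summed from the doubly occupied Hückel levels of Table 1 and compared with three isolated ethylene units.

```python
# Benzene pi levels from Table 1, written as (coefficient of alpha, coefficient of beta).
# The six pi electrons doubly occupy the three lowest orbitals:
# alpha + 2*beta and the degenerate pair at alpha + beta.
occupied = [(1, 2), (1, 1), (1, 1)]

total_alpha = sum(2 * a for a, b in occupied)   # 6 alpha
total_beta = sum(2 * b for a, b in occupied)    # 8 beta

# Three isolated ethylene units contribute 3 * (2 alpha + 2 beta)
ethylene_alpha, ethylene_beta = 6, 6

deloc_beta = total_beta - ethylene_beta
print(f"benzene pi energy = {total_alpha} alpha + {total_beta} beta")
print(f"delocalization energy = {deloc_beta} beta")   # 2 beta (a stabilization, since beta < 0)
```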
Mathematics behind the Hückel Method
The Hückel method can be derived from the Ritz method with a few further assumptions concerning the overlap matrix S and the Hamiltonian matrix H.
It is assumed that the overlap matrix S is the identity matrix. This means that overlap between the orbitals is neglected and the orbitals are considered orthogonal. Then the generalised eigenvalue problem of the Ritz method turns into an ordinary eigenvalue problem.
The Hamiltonian matrix H = (Hij) is parametrised in the following way:
Hii = α for C atoms and α + hA β for other atoms A.
Hij = β if the two atoms are next to each other and both C, and kAB β for other neighbouring atoms A and B.
Hij = 0 in any other case
The orbitals are the eigenvectors and the energies are the eigenvalues of the Hamiltonian matrix. If the substance is a pure hydrocarbon the problem can be solved without any knowledge about the parameters. For heteroatom systems, such as pyridine, values of hA and kAB have to be specified.
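A minimal implementation of this parametrisation for a pure hydrocarbon (a sketch, not part of the original article): with S = I, H_ii = α, and H_ij = β for bonded carbons, the secular problem reduces to diagonalising the connectivity (adjacency) matrix, whose eigenvalues x give the orbital energies E = α + xβ.

```python
import numpy as np

def huckel_levels(adjacency):
    """Return Hückel orbital energies as coefficients x in E = alpha + x*beta."""
    x = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    return np.sort(x)[::-1]   # largest x first = lowest energy, since beta < 0

# Butadiene: four carbons in a chain 1-2-3-4
butadiene = [[0, 1, 0, 0],
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0]]

# Benzene: six carbons in a ring
benzene = [[0, 1, 0, 0, 0, 1],
           [1, 0, 1, 0, 0, 0],
           [0, 1, 0, 1, 0, 0],
           [0, 0, 1, 0, 1, 0],
           [0, 0, 0, 1, 0, 1],
           [1, 0, 0, 0, 1, 0]]

print("butadiene:", np.round(huckel_levels(butadiene), 2))  # [ 1.62  0.62 -0.62 -1.62]
print("benzene:  ", np.round(huckel_levels(benzene), 2))    # [ 2.  1.  1. -1. -1. -2.]
```

The printed coefficients reproduce the entries of Table 1 above.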
Hückel solution for ethylene
In the Hückel treatment for ethylene [9], the molecular orbital \Psi\, is a linear combination of the 2p atomic orbitals \phi\, at carbon with their ratios c\,:
\ \Psi = c_1 \phi_1 + c_2 \phi_2
This equation is substituted in the Schrödinger equation:
\ H\Psi = E\Psi
with H\, the Hamiltonian and E\, the energy corresponding to the molecular orbital
to give:
Hc_1 \phi_1 + Hc_2 \phi_2 = Ec_1 \phi_1 + Ec_2 \phi_2\,
This equation is multiplied by \phi_1\, (and analogously by \phi_2\,) and integrated to give a new set of equations:
c_1(H_{11} - ES_{11}) + c_2(H_{12} - ES_{12}) = 0 \,
c_1(H_{21} - ES_{12}) + c_2(H_{22} - ES_{22}) = 0 \,
H_{ij} = \int dv\psi_iH\psi_j\,
S_{ij} = \int dv\psi_i\psi_j\,
All diagonal Hamiltonian integrals H_{ii}\, are called coulomb integrals and those of type H_{ij}\,, where atoms i and j are connected, are called resonance integrals with these relationships:
H_{11} = H_{22} = \alpha \,
H_{12} = H_{21} = \beta \,
Other assumptions are that each atomic orbital is normalized and that the overlap integral between the two atomic orbitals is 0:
S_{11} = S_{22} = 1 \,
S_{12} = 0 \,
leading to these two homogeneous equations:
c_1(\alpha -E) + c_2(\beta) = 0 \,
c_1\beta + c_2(\alpha - E) = 0 \,
with a total of five variables. After converting this set to matrix notation:
\begin{vmatrix} \alpha - E & \beta \\ \beta & \alpha - E \\ \end{vmatrix} * \begin{vmatrix} c_1 \\ c_2 \\ \end{vmatrix}= 0
the trivial solution gives both wavefunction coefficients c equal to zero, which is not useful, so the other (non-trivial) solution is:

\begin{vmatrix} \alpha - E & \beta \\ \beta & \alpha - E \\ \end{vmatrix} = 0

which can be solved by expanding its determinant:

(\alpha-E)^2 - \beta^2 = 0\,
(\alpha-E)^2 = \beta^2\,
\alpha-E = \pm\beta\,
E = \alpha \pm \beta \,
\Psi = c_1(\phi_1 \pm \phi_2) \,
After normalization the coefficients are obtained:
c_1 = c_2 = \frac{1}{\sqrt{2}},
The constant β in the energy term is negative and therefore α + β is the lower energy, corresponding to the HOMO, and α - β is the LUMO energy.
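The same result can be reproduced symbolically (a sketch, not part of the original article) by asking a computer algebra system for the roots of the secular determinant.

```python
import sympy as sp

alpha, beta, E = sp.symbols('alpha beta E', real=True)

# Secular matrix for ethylene with the Hückel simplifications already applied
secular = sp.Matrix([[alpha - E, beta],
                     [beta, alpha - E]])

# Setting the determinant to zero and solving for E gives E = alpha +/- beta
energies = sp.solve(secular.det(), E)
print(energies)
```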
Further reading
• The HMO-Model and its applications: Basis and Manipulation, E. Heilbronner and H. Bock, English translation, 1976, Verlag Chemie.
• The HMO-Model and its applications: Problems with Solutions, E. Heilbronner and H. Bock, English translation, 1976, Verlag Chemie.
• The HMO-Model and its applications: Tables of Hückel Molecular Orbitals , E. Heilbronner and H. Bock, English translation, 1976, Verlag Chemie.
1. ^ E. Hückel, Zeitschrift für Physik, 70, 204, (1931); 72, 310, (1931); 76, 628 (1932); 83, 632, (1933)
3. ^ Stereochemistry of Electrocyclic Reactions R. B. Woodward, Roald Hoffmann J. Am. Chem. Soc.; 1965; 87(2); 395-397. doi:10.1021/ja01080a054
4. ^ Andrew Streitwieser, Molecular Orbital Theory for Organic Chemists, Wiley, New York, (1961)
5. ^ The chemical bond 2nd Ed. J.N. Murrel, S.F.A. Kettle, J.M. Tedder ISBN 0471907600)
6. ^ Quantum Mechanics for Organic Chemists. Zimmmerman, H., Academic Press, New York, 1975.
7. ^ Use of Huckel Molecular Orbital Theory in Interpreting the Visible Spectra of Polymethine Dyes: An Undergraduate Physical Chemistry Experiment. Bahnick, Donald A. J. Chem. Educ. 1994, 71, 171.
8. ^ Huckel theory and photoelectron spectroscopy. von Nagy-Felsobuki, Ellak I. J. Chem. Educ. 1989, 66, 821.
9. ^ Quantum chemistry workbook Jean-Louis Calais ISBN 0471594350
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Hückel_method". A list of authors is available in Wikipedia. |
4239dc27631428b9 | From UW-Math Wiki
ACMS Abstracts: Spring 2022
Jacob Notbohm (UW)
Title: Collective Cell Migration: Rigidity Transition and the Eyes of the Cell
Abstract: Collective cell migration is an essential process in development, regeneration, and disease. The motion results from a physical balance of cell-generated forces, but the relationships between cell force and motion are challenging to study, because cell forces are actively generated within each cell and balanced by complicated interactions at the cell-substrate and cell-cell interfaces. In complex, multi-body physical systems such as this one, mathematical models can provide essential insights into the underlying mechanisms of collective cell force generation, transmission, and, ultimately, motion. This presentation will describe an experimentalist’s perspective on a class of models for collective cell migration based on the vertex model, wherein the cells are polygons that tessellate a two-dimensional plane. The models are discussed in the context of experiments performed by my research group to measure cell forces and velocities, which enable quantitative comparison between model predictions and experimental results. The presentation will focus on two specific examples. The first is a fluid-to-solid rigidity transition predicted by the models to depend on cell shape. The second is the experimental finding that cells align their propulsive forces with those of their neighbors, analogous to how birds within a flock or fish within a school use visual cues for alignment. These two examples illustrate how our experiments have led to clearer understanding of the underlying factors within the cell that correspond to the different model parameters and have discovered new phenomena not yet accounted for in the recent models.
Alex Townsend (Cornell)
Title: What networks of oscillators spontaneously synchronize?
Abstract: Consider a network of identical phase oscillators with sinusoidal coupling. How likely are the oscillators to spontaneously synchronize, starting from random initial phases? One expects that dense networks of oscillators have a strong tendency to pulse in unison. But, how dense is dense enough? In this talk, we use techniques from numerical linear algebra, computational algebraic geometry, and dynamical systems to derive the densest known networks that do not synchronize and the sparsest ones that do. We will find that there is a critical network density above which spontaneous synchrony is guaranteed regardless of the network's topology, and prove that synchrony is omnipresent for random networks above a lucid threshold. This is joint work with Martin Kassabov, Steven Strogatz, and Mike Stillman.
Prof. Alex Townsend is an associate professor at Cornell University in the Mathematics Department. His research is in Applied Mathematics and mainly focuses on spectral methods, low-rank techniques, fast transforms, and theoretical aspects of deep learning. Prior to Cornell, he was an Applied Math instructor at MIT (2014-2016) and a DPhil student at the University of Oxford (2010-2014). He was awarded an NSF CAREER in 2021, a SIGEST paper award in 2019, the SIAG/LA Early Career Prize in applicable linear algebra in 2018, and the Leslie Fox Prize in numerical analysis in 2015.
Geoffrey Vasil (Sydney)
Title: The mechanics of a large pendulum chain
Abstract: I’ll discuss a particular high-dimensional system that displays subtle behaviour found in the continuum limit. The only catch is that it formally shouldn’t, which raises a few questions. When is a discrete system large enough to be called continuous? When are approximate (broken) symmetries good enough to be treated like the real thing? When and why does a fluid approximation work as well as we like to assume? What does all this say about observables and the approach to equilibria? The particular system I have in mind is a large ideal pendulum chain, and its cousin, the continuous flexible string. I propose that the pendulum chain is a perfect model system to study notoriously difficult phenomena such as vortical turbulence, waves, cascades and thermalisation, but with many fewer degrees of freedom than a three-dimensional fluid.
Xiangxiong Zhang (Purdue)
Title: Recent Progress on Q^k Spectral Element Method: Accuracy, Monotonicity and Applications
Abstract: In the literature, spectral element methods usually refer to finite element methods with high order polynomial basis. The Q^k spectral element method has been a popular high order method for solving second order PDEs, e.g., wave equations, for more than three decades, obtained by the continuous finite element method with tensor product polynomials of degree k and with at least (k+1)-point Gauss-Lobatto quadrature. In this talk, I will present some brand new results of this classical scheme, including its accuracy, monotonicity (stability), and examples of using monotonicity to construct high order bound-preserving schemes in various applications including the Allen-Cahn equation coupled with an incompressible velocity field, Keller-Segel equation for chemotaxis, and nonlinear eigenvalue problem for the Gross–Pitaevskii equation.

1) Accuracy: when the least accurate (k+1)-point Gauss-Lobatto quadrature is used, the spectral element method is also a finite difference (FD) scheme, and this FD scheme can sometimes be (k+2)-th order accurate for k>=2. This has been observed in practice but never proven before in terms of rigorous error estimates. We are able to prove it for linear elliptic, wave, parabolic and Schrödinger equations for Dirichlet boundary conditions. For Neumann boundary conditions, (k+2)-th order can be proven if there is no mixed second order derivative. Otherwise, only (k+3/2)-th order can be proven and some order loss is indeed observed in numerical tests. The accuracy result also applies to the spectral element method on any curvilinear mesh that can be smoothly mapped to a rectangular mesh, e.g., solving a wave equation on an annulus region with a curvilinear mesh generated by polar coordinates.

2) Monotonicity: consider solving the Poisson equation, then a scheme is called monotone if the inverse of the stiffness matrix is entrywise non-negative. It is well known that the second order centered difference or P1 finite element method can form an M-matrix thus they are monotone, and high order accurate schemes in general are not M-matrices thus not monotone. But there are exceptions. In particular, we have proven that the fourth order accurate FD scheme (Q^2 spectral element method) is a product of two M-matrices thus monotone for a variable coefficient diffusion operator: this is the first time that a high order accurate scheme is proven monotone for a variable coefficient operator. We have also proven the fifth order accurate FD scheme (Q^3 spectral element method) is a product of three M-matrices thus monotone for the Poisson equation: this is the first time that a fifth order accurate discrete Laplacian is proven monotone in two dimensions (all previously known high order monotone discrete Laplacians in 2D are fourth order accurate). |
62ef189195c41430 | The block universe is interesting, but not comforting
Click through for source and bonus red button caption.
This SMBC gets at something that’s often bothered me about the way many people talk about the block universe concept. The block universe is the idea that if the universe is fully deterministic, then its entire history from beginning to end exists in an eternal timeless static block. We are patterns embedded in the block, and so from our perspective, we exist in a dynamic and changing reality. But from outside, the block is like a movie DVD, a static object containing a story, with the beginning, middle, and end already set.
Often it’s described by saying that all of the past events in the universe, including all the people who’ve ever lived, are “still out there right now.” In other words, Julius Caesar, the Buddha, Gandhi, and our deceased relatives are still living their lives “right now.” Likewise, we’re often told that our lives are now part of the block, and that nothing will ever remove that part of the overall pattern.
The overall sense, I think, is meant to be comforting. Albert Einstein is often quoted as supporting this view. But it's worth noting the context of the quote.
Letter to Besso’s family (March 1955) following the death of Michele Besso,
But what exactly do we mean by “right now” in this context? Certainly not the same now we mean when we talk about our current plane of simultaneity within the block, a notion that relativity has rendered into something relative to our current frame. So the “right now” in which historical figures and past relatives are still living their lives would have to be a “now” outside of our normal conception of time.
And why exactly should we take comfort that our lives are ostensibly an undeletable part of the block? Who will ever see that part of the pattern, and in what context, being outside of time and space? God? Higher dimensional aliens? I can see a religious person finding comfort in the first answer, but the second one doesn’t necessarily provide me any comfort.
I think the block universe is an interesting metaphysical concept, one that could be true. But I’ve never seen it, in and of itself, as a particularly comforting one. But maybe I’m missing something?
159 thoughts on “The block universe is interesting, but not comforting
1. The idea is mathematically elegant, but it is not emotionally soothing.
As a mathematician, I appreciate that elegance. But I have come to doubt that there is anything mathematical about reality. The mathematics that we see are only part of our mathematical models. Reality is probably stranger than we are able to imagine.
1. It is striking that many new discoveries in physics happen first with mathematics. Often the people making those discoveries, or those around them, are quick to say, “Don’t worry. It’s not like this crazy thing is true. It’s just a mathematical convenience.” But often it does go on to be true.
Reality is definitely stranger than we can imagine, often in ways we don't want to accept.
1. I do not get this kind of talk —- “Reality is stranger than we can imagine.”
How alien could “reality” really be? Physics is not all ‘reality’ is. Physics needs to be able to connect to our best ideas in biology, ethics, art, …
I believe, to understand “reality”, we need to start in the middle of it, with living things and their increasing complexity and complex ways of life —like our own.
In some sense, the world physics shows us is an instrumentality. A useful perspective but not all of ‘Reality’.
1. Interestingly, that phrase is a paraphrase that dates back to J.B.S. Haldane, a British biologist.
Possible Worlds and Other Papers (1927), p. 286
Consider if we encountered a well educated person from the year 1500. For that person, the earth is the center of the universe with everything in a very small universe rotating around it, rather than an infinitesimal speck of leftover stardust in an unimaginably vast darkness. For them, the world is only a few thousand years old rather than billions. They have no idea of evolution, relativity, the unsettling implications of quantum mechanics, what causes diseases, or many other things. If we told them of the modern view of reality, it would be stranger than anything they had imagined.
Now, consider if we encountered a person from the year 2500.
1. And yet, there has to be a reasonable train of thought of how we got there from here. However strange physics gets, it is still a human activity—- at least the Doing Of science is; it is a practice/praxis, an agency.
And this gets us back to the original issue about The Block Universe and whether it is a "comfortable" perspective. It is not. I believe that human/personal vocabulary has no place there. That is a category mistake. It contains no agency. Persons don't exist there, not you or I, and not Julius Caesar or Albert E. Only quantum waves and other such depersonalized spatio-temporal objects. It is all external relationships.
So this leaves a huge disconnect in our thinking, if you do not allow for the reality of Doing, as well as the reality of Happening. There must be a compromise position. I think Dennett with his Intentional Stance is about this very point. From a physics-first perspective, only Emergent Realities allow the act of learning by humans.
The universe can be no more strange than the ability we have of learning that it might be so.
2. On further consideration, I think it would satisfy me if you guys said ‘Physics is stranger …”, not “reality is stranger…”
I think humans have always been “in touch with reality” and that includes 1500, now, and 2500. How would we exist if we weren’t? And that includes our ideas about “reality”, they can’t be that wrong or our human cultural experiment would never have gotten this far.
3. I don’t know Greg. We seem able to deal with our immediate reality on a day to day basis, on scales we evolved to deal with, as long as things don’t change too much or too fast. The fact that so many people in this country can’t seem to take the virus seriously, understand how economic policies affect them, or many other aspects of modern life, doesn’t incline me to think we’re that clued in to our reality.
4. Right. Things only seem strange when they are unfamiliar. Gravity, for example, could easily be called “spooky action at a distance” if it were not so common that we all take it for granted.
2. Looking for “comfort” in a scientific theory sounds a bit like looking for “beauty” as a criterion for a scientific theory. Neither strike me as relevant. That said, I find a certain degree of comfort in encountering a theory that makes sense to me; to coin a psychological term, “cognitive consonance” (as opposed to “cognitive dissonance”). That doesn’t mean I require theories to make sense to me. When they don’t, I shift gears and try my best to gain traction or I move on. I like the idea of us being patterns rather than definite things. Patterns are often in the minds of the conceivers and there might be many different ways of conceiving our patterns, some of which might be useful in certain contexts.
1. I agree on comfort not being good criteria. A theory definitely has to be compatible with empirical data. If it’s not, it doesn’t matter how comforting, beautiful, or consonant it is.
On the other hand, we’re sometimes faced with multiple theories compatible with the data, or multiple speculative theories where tests to narrow down the options aren’t yet possible. We often talk about parsimony in those cases, but the reality is beautiful or consonant theories often seem a lot more parsimonious to us than theories we deem ugly or dissonant. And a lot of people won’t accept unsettling theories, no matter how elegant, until the evidence is overwhelming.
3. ” so from our perspective, we exist in a dynamic and changing reality. But from outside, the block is like a movie DVD,”
From outside the block, time is depicted as a dimension within the block, so all temporal distinctions will be painted as spatial. That doesn’t mean that they really are, any more than if I paint a picture of Trump as green, he will stop being orange.
“a static object containing a story, with the beginning, middle, and end already set.”
Emphasis added to show the mistake. Time is off your map, if ye be outside the block. Here there be monsters.
1. I guess the block universe picture is supposed to enhance your intuition that the past and future are real. The trouble is that it enhances a bunch of wrong intuitions even more. A better approach IMHO is to draw a two-dimensional graph with two different time axes, as measured by two observers on different inertial trajectories. That will show you that you can’t make only one time “real” without artificially promoting one set of observers to having The One True Time measurement.
1. I think space time diagrams are definitely helpful in understanding specific cases in relativity. But I think the block universe is supposed to be an extrapolation from all those cases to what it means for the universe overall. The problem is, it’s an extrapolation we may never be able to actually test.
2. It’s an Ideal reference point. It is a useful perspective to adopt, for some purposes—like if you want to blow up Hiroshima (but even a lot better stuff). It is not useful if you are standing in front of your closet trying to decide what shirt to wear to work today.
1. For myself, I find comfort in understanding the world. If the block universe helps with that understanding (which it does, especially with regard to the role of mutual information in consciousness), then I find comfort in that.
Paul is correct in recognizing the difficulty of understanding the block universe without using concepts that involve time. But I don’t think that makes it pointless. It just makes it harder.
1. I don’t know that I would say understanding reality gives me comfort, but it does trigger some kind of reward system in my brain, a type of thrill. Once I think I understand something, particularly something I’ve long struggled with, I often find myself dwelling on it for a while and getting satisfaction from it. (During these periods, it takes discipline not to post on it constantly.) That wears off in time and I end up looking to get my understanding fix somewhere else.
Maybe I overstated things with “pointless.” But I do often think the language difficulties leave a lot of people somewhat confused on exactly what the block universe is supposed to be.
4. It seems to me that if someone lived, their life ‘crystallised’ something actual out of multiple possibilities, and that the effect of that is felt by everything subsequent to them. While the strongest such effects are physical artefacts, genetics and culture, from the viewpoint of physics they have an effect on everything within the ct (speed of light x time) cone that spreads out from them. In that sense their life is there for all time, its effects echoing into the future.
1. I think that’s definitely true. But it also seems true of a hurricane, or a virus. And on the scope of the universe, a quasar has far more causal influence.
Of course, a person has a larger and longer lasting effect for the people that knew them. In that sense, the effects of the person live on after they’re gone in those people. So to their fellow humans, it’s very significant, and perhaps the best any of us can hope for.
5. I don’t think the BUH is true, so I can’t find comfort in it. Seems like it could equally be discomforting since it’s a fully deterministic view. The raisins just are, from beginning to end.
Most religious people believe in some form of free will, so I'd suspect most would just reject the BUH on those grounds.
FWIW, MWI presents a form of block universe, especially versions where the wave-functions always exist (which I’m beginning to believe is the version that has to be considered). MWI, in any event, presents a fully deterministic universe.
1. I’ve never felt the discomfort of determinism. I might if that determinism could ever be cashed out in a meaningful way, but none of us is Laplace’s demon. Although I guess if you’re looking at it from the idea of ultimate judgment, I could see how it might be discomforting.
Definitely the MWI, or any other deterministic interpretation of QM, leads to a form of the block universe. It seems like the possible fundamental randomness of QM is the only thing that would prevent a block universe scenario. At least unless some form of interactionist dualism were to turn out true.
1. Determinism opposes free will, which is why some people are uncomfortable with it. Kind of a funny duality. Comfort or discomfort, depending on one’s view, about the BUH or determinism in general.
I do think what we perceive as quantum randomness is an argument against the BUH. There is also the need to explain the subjective now we experience and share. I also have questions about how all that structure came into being. It’s one thing for the universe to calculate itself as it goes, but quite another to produce that much computation all at once as the BUH implies.
And I think a key argument in its favor, the SR simultaneous-futures argument, is actually contrary to what SR says — that we can make no statements about “now” or simultaneity until the light from those events reaches us.
Put it this way: I consider the BUH much less likely than MWI. 😀
1. Determinism certainly opposes contra-causal free will. But I’m a compatibilist, so for me free will is still there.
Most of the descriptions I’ve seen of the block universe don’t get into how it got there. The concept itself seems incompatible with an origin. Such a description would involve a time sequence, but we’re talking about something that contains all of time. It seems like that’s one of the concept’s problems. It’s extremely counter-intuitive to even think about. Most people can’t without thinking of it as somehow existing in an “outer” time of some sort, where the question of how it got there can arise.
Strangely enough, while my credence in the MWI is higher than yours, I agree that the block universe is less likely. It involves a further extrapolation of what we might currently take to be our fundamental laws. Even if we have those laws right, we have no way to know whether that understanding is complete. In other words, we don’t know what unknown factors might frustrate that prediction.
2. Scientific determinism isn’t like theological determinism, and specifically doesn’t oppose free will. Most relevantly and surprisingly, scientific determinism doesn’t imply universal causality. That is, not every physical event or state has a cause, if we impose a common sense restriction on how we define “cause”. The restriction: causality must be asymmetric, i.e. one-way. If A causes B, then B does not cause A. A microscopically detailed state is linked by natural law to earlier and later microscopically detailed states (determinism), but this linkage is symmetric. So it is not causal.
Time is not like a river. Or rather, it is so only at a macroscopic level, where we use coarse-grained descriptions (W > 1, W being the count of microscopic states that satisfy the description) that are associated with non-zero entropy (S = k log W), which increases. It is only here that time “flows” like a river. No hydraulic forces are pushing on you from the microscopic details of the past. From the macroscopic past, there are definitely influences you must deal with, but you have many options of how to deal. And the macroscopic past is not sufficient to determine a unique future for you. Insofar as the Block Universe picture interferes with the “fundamentally, time flows” assumption, it can actually help you see why scientific determinism doesn’t conflict with free will.
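(To make the W-counting concrete, here is a toy sketch in Python, illustrative only: coin flips stand in for microstates, and “n heads out of N” stands in for a coarse-grained macrostate. W is the number of microstates satisfying the description, and S = k_B ln W is zero when exactly one microstate fits.)

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def microstate_count(N, n_heads):
    """Number of coin arrangements matching the coarse-grained
    description 'n_heads heads out of N coins'."""
    return math.comb(N, n_heads)

def boltzmann_entropy(W):
    """S = k_B ln W; zero when exactly one microstate fits the description."""
    return k_B * math.log(W)

N = 100
for n in (0, 25, 50):  # increasingly typical macrostates
    W = microstate_count(N, n)
    print(f"n_heads={n:3d}  W={W:.3e}  S={boltzmann_entropy(W):.3e} J/K")
```

The toy only illustrates that entropy attaches to the coarse-grained description, not to any individual microstate.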
By the way – to me, the Block Universe is a picture/metaphor. I don’t know what Hypothesis it is supposed to be – maybe the idea that all times Chéngwéi equally real? (Chéngwéi – Mandarin for “to be” – Mandarin is, I hear, a tenseless language. That would come in real handy here!)
1. I remember your free will posts — commented on several of them — and my view is still that time is axiomatic, a fundamental aspect of reality. (So fundamental, I suspect time existed before the Big Bang. I see space as less fundamental than time.)
So, for me, causality is just what happens when the laws of physics are combined with time. Thermodynamics and entropy, likewise, are emergent from physical law plus time. The physical behavior behind Boltzmann’s entropy equation is exactly this — a physical system is far more likely to evolve to larger macro states simply because they’re larger.
The asymmetry you want comes from time’s “ratchet” — time only goes one way; B can never retro-cause A. The basic laws of physics are symmetrical with respect to time, but reality simply cannot go backwards.
I agree time isn’t a river (nice metaphor, though). Time is a spacetime curve we move along (a worldline). Just tonight, hanging with friends, I got into how, sitting still, we’re moving along the time axis at max speed, the speed of light. (The spacetime time axis is formally ct, not just t. (It was originally ict, and because the terms are squared, we get -(ct)^2, which is where the +++- spacetime signature comes from.))
The problem for free will with determinism is that, if reductionism is true, the macro world is just as determined as the micro world. (And with no recourse to quantum randomness.) Chaos means we have no hope of predicting anything, but the future (and the now) seem fully determined. Free will seems to be an illusion unless someone can explain how it arises.
FWIW, I think higher-level brains just might be the one non-determined system in reality. (Maybe even all brains.) (You can find posts about that on my blog if interested.)
And I am beginning to question whether the real numbers apply to physical reality. There are a lot of significant problems with the continuum. Maybe reality is rational, but not real (numerically speaking).
The Block Universe Hypothesis is eternalism, the metaphorical image of a giant block of Lucite with embedded bits. It apparently sprang into existence, all the structure calculated. And, for some reason, we’re trapped in a “now” moment that began at birth and ends at death.
2. I agree that time is fundamental – but time’s ratchet is not. The ratchet comes from entropy, not the other way around. If time extends beyond the Big Bang, then any intelligent creatures in that part of spacetime live “backward” relative to us. They remember things from times we would consider less distant in the past than their act of remembering, and they plan and control things we would consider to be in the more-distant past. Because that’s how their entropy gradient would go.
It doesn’t matter if the future is determined, if the determining factors are not independent of you. Consider: why is a man in jail unfree in a way that a woman in a room with a curtain for a door is not? Because no matter how the jailed man pulls or pushes on the iron bars, they remain in a position which blocks his exit. Whereas the position of the curtain is not independent of what the woman does. And because the microscopic details of the past are connected to you only by bidirectional (in time) laws, those details are not independent of you. The thought that they are is a cognitive illusion, based on your all-macroscopic experience.
If a Block Universe contains time, the block does not “pop into” existence. You would need some kind of meta-time for that, which is bizarre and excessive.
3. “I agree that time is fundamental – but time’s ratchet is not.”
So time is fundamental but not omni-directional?
I disagree entirely about entropy — I believe it’s just a consequence of the laws of physics over time. Entropy is about configuration states evolving towards more probable ones.
I also disagree any part of spacetime can run “backwards” (for any reason). Entropy does not control time.
I do agree the BU doesn’t “pop into” existence. As you say, that would be bizarre. In fact, it doesn’t exist at all. 🙂
4. Wyrd, I’ve been busy with our other discussion about the BU but I’ve been meaning to ask about your remarks here. As a preliminary, I’d appreciate your providing a definition of the ‘time’ you’re discussing, although you likely agree with St. Augustine of Hippo, who said, “If no one asks me, I know, but if any person should require me to tell him, I cannot.”
As you say, “the basic laws of physics are symmetrical with respect to time.” What is not symmetrical is not reality, however, but the stream of consciousness, which has a fixed futureward narrative direction. Physical laws and reality can operate backwards but consciousness cannot. Reflexes and memories are also encoded with a learned futureward narrative direction—that’s how we can tell that a movie of a line of people backing off a stopped train is being played in reverse.
It appears that your usage of Minkowski terminology is incorrect. Minkowski introduced the term ‘worldline’ to denote the path that an object traces in 4-dimensional spacetime. Per Wikipedia, “Each point of a worldline is an event that can be labeled with the time and the spatial position of the object at that time,” i.e., the path in all four dimensions of spacetime. Time is not a worldline or a “spacetime curve we move along,” as you wrote. Physicist Robert Geroch wrote in General Relativity from A to B:
“There is no dynamics within space-time itself: nothing ever moves therein; nothing happens; nothing changes … one does not think of particles as ‘moving through’ space-time, or as ‘following along’ their world-lines. Rather, particles are just ‘in’ space-time, once and for all, and the world-line represents, all at once, the complete life history of the particle.”
Further, the BU does not mean that we’re “trapped in a ‘now’ moment that began at birth and ends at death.” Our lifetimes are permanent objects in spacetime with extended volume called worldtubes. A worldtube is not a ‘now’ or a “now moment.”
Your claim that begins “The spacetime time axis is formally ct, not just t” misconstrues the mathematics for a spacetime interval. From the discussion at:
… special relativity provides a new invariant, called the spacetime interval, which combines distances in space and in time. All observers who measure time and distance carefully will find the same spacetime interval between any two events. … The constant ‘c’, the speed of light, converts time units (like seconds) into space units (like meters). Seconds times meters/second = meters.
That means that your surmise that “we’re moving along the time axis at max speed, the speed of light” is incorrect. As Geroch wrote (above), nothing in 4-dimensional spacetime is moving at all. Also, a spacetime interval (and its mathematics) cannot exist in your conception of the geometry of the universe because a spacetime interval only makes sense if the different times t1 and t2 are co-real (i.e., both exist), which you claim is not possible.
5. “I’d appreciate your providing a definition of the ‘time’ you’re discussing,”
I hold time to be fundamental and axiomatic, so it can’t be defined, just described. My notion of it is essentially the common view of it being something that either flows or through which we flow.
“It appears that your usage of Minkowski terminology is incorrect.”
Nope. I’m using it exactly as Minkowski did. We move through 4D spacetime. If you go back and read the paragraph in question, I was contrasting our motion along the time axis when we have no motion along any physical axis with what happens when we do have motion along a physical axis. Regardless, in all cases, our proper time always ticks at the same rate, hence our worldlines, our path through spacetime, is our personal time axis.
“Your claim that begins ‘The spacetime time axis is formally ct, not just t’ misconstrues the mathematics for a spacetime interval.”
Nope. Firstly, you’re confusing the spacetime interval with the time axis. Secondly, defining the time axis with units of ct is necessary to create a 4D spacetime.
Using units of ct on the time axis gives that axis units of length. If the length unit is 3×10^8 m/s × 1 s, then the seconds cancel out and we’re left with meters, a unit of length. And it’s common in these situations to just define c as 1, so there’s no numeric scaling of the seconds.
“That means that your surmise that ‘we’re moving along the time axis at max speed, the speed of light’ is incorrect.”
Not my surmise. It’s textbook SR.
That’s utter nonsense. The spacetime interval is just an invariant measure in Minkowski space. It’s just the Minkowski equivalent of the Pythagorean distance in Euclidean space.
Euclidean distance (squared): x^2 + y^2 + z^2 + w^2
Minkowski interval (squared): x^2 + y^2 + z^2 - (ct)^2
As I mentioned above, the original source of the minus was defining the time units as ict (leveraging how multiplication by i creates a new orthogonal axis). Since the terms are squared, the i becomes i^2 = -1 and creates the subtraction. In GR there’s just a -+++ metric, which can equally be a +--- metric.
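(For what it’s worth, the invariance of the interval can be checked numerically. A minimal sketch, illustrative only: units where c = 1, the time-term-negative convention discussed above, and an arbitrary 0.6c boost applied to two arbitrary sample events.)

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x with beta = v/c (units where c = 1),
    acting on 4-vectors ordered (ct, x, y, z)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

def interval_sq(a, b):
    """Squared Minkowski interval with the time term negative:
    dx^2 + dy^2 + dz^2 - (c dt)^2."""
    d = np.asarray(b) - np.asarray(a)
    return d[1]**2 + d[2]**2 + d[3]**2 - d[0]**2

a = np.array([0.0, 0.0, 0.0, 0.0])  # events as (ct, x, y, z)
b = np.array([5.0, 3.0, 0.0, 0.0])

L = boost_x(0.6)
print(interval_sq(a, b))          # -16.0 in the original frame
print(interval_sq(L @ a, L @ b))  # -16.0 in the boosted frame too (up to float rounding)
```

The plain Euclidean sum of squares between the same two events changes under the boost; only the Minkowski combination comes out frame-invariant.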
6. Entropy is about configuration states evolving towards more probable ones, sure. The open cosmological question is whether this happens in both temporal directions away from the low-entropy Big Bang state, or only in one. If it makes you happy, by all means describe these processes as “the Big Bang having evolved from a higher-entropy state, then evolving into another (our) higher-entropy state.”
I don’t know what it would mean for “entropy [to] control time.” My claim is only that entropy controls our experience of time. And given that it does, we can explain our observations without positing a metaphysically objective (perspective-independent) True Arrow of Time.
7. Entropy gradients presuppose a time line. But portraying that line as a ray, a single-arrow beastie instead of a two-arrow one, is where entropy gradients come in. The single arrow faithfully represents something important about the macro level, but not about the level of ultimate constituents.
8. We can agree to disagree, but entropy isn’t just a consequence of time’s ray-likeness even if the latter is a fundamental fact. We both posit a low-entropy Big Bang. You add a True Arrow of Time on top of that, while I settle for the Entropic Arrow.
9. I don’t add time’s arrow on top of anything, time is fundamental, and that includes that it only goes one way. That’s what I meant by “time’s ratchet.” One-way time, for me, is axiomatic — even more so than space, since, for me, there is a “before” to the Big Bang as well as an “after.” Time is part of whatever meta reality provided the laws and context for the Big Bang in the first place.
I totally agree about a low-entropy Big Bang, but I see entropy as a measure of something, not any kind of force in itself. The gradient you speak of is just a matter of statistical improbability. If it were a real gradient, it could never be defied, even in principle. An Entropic Arrow is consequential and conditional.
Another thing for me is that entropy always requires context and definition. Take my CD collection: if I define a total sort order such that there is only one possible fully sorted condition, then I can say the entropy of the collection, per this definition, is zero (k_B × ln 1 = 0). But I also have to define what I mean by the macro states of the system. How do different unsorted states compare to each other in terms of their entropy (which ones are in the same macro state)? All that, to me, makes entropy just a way of measuring and describing systems. I don’t attach any more significance to it than that.
10. I agree that entropy is consequential and conditional, and that the entropy gradient can be defied. Indeed, the Crooks fluctuation theorem gives the probability of an entropy-decreasing process – often absurdly tiny, but never quite zero. And of course, the fact that entropy gradients are consequential upon the existence of a particularly low entropy state is the beauty of it! It’s not an extra assumption, it’s buy one Big Bang, get a workable arrow-of-time free. Workable meaning that it explains the observations that led us to see time as asymmetric in the first place.
11. Correct me if I’m wrong, but it seems like the only point of disagreement is that I see (fundamental) time as axiomatically one-way whereas you see it as bi-directional so the only “ratchet” is emergent entropy?
Question: “And of course, the fact that entropy gradients are consequential upon the existence of a particularly low entropy state is the beauty of it!”
I don’t follow. Why a particularly low entropy state? Isn’t the gradient always there, per k_B ln Ω? Are you referring to how it could be said to be less “steep” as Ω increases?
12. You’re exactly right about the only point of disagreement.
The “particularly” low entropy state is needed to explain our known history. Of course how low you need to go, depends what you’re explaining. To explain all astronomical observations combined, you need the extremely low-entropy Big Bang that cosmologists infer. To explain just the fact that for thousands of years, humans have been able to remember their past but not to remember their future, you “only” need a very low-entropy state some thousands of years ago.
13. “To explain all astronomical observations combined, you need the extremely low-entropy Big Bang that cosmologists infer.”
Ah, yes, we agree on that, too. (Roger Penrose, in his book Cycles of Time, has a good explanation of what makes the BB low-entropy (gravity). The idea had always been a bit fuzzy in my mind until then — the tendency to equate the initial moments of identical quark soup everywhere to a gas-equalized room, which is a maximal entropy condition. But gravity changes the dynamics.)
I can’t go along with entropy being why we don’t “remember” the future, although I do agree creating memories necessarily increases entropy. All increased structure does!
14. 🙂 No, sorry, I can’t. In my view entropy is strictly consequential and never directly causes anything. It’s not what we would define as a “force.” To me it’s like asking if a shadow causes the object that creates the shadow.
15. Not a sufficient reason in the sense of cause, perhaps (although I think it is, but never mind that), but a sufficient evidence. As indeed, in the right circumstances, a shadow of, say, a dog can be sufficient evidence of a dog.
3. Wyrd, as regards quantum randomness, consider this observation from physicist Brian Greene:
“… loose language can be deceptive. The mathematics of quantum mechanics, Schrödinger’s equation, is just as deterministic as the mathematics of classical Newtonian physics. The difference is that whereas Newton takes as input the state of the world now and produces a unique state for the world tomorrow, quantum mechanics takes as input the state of the world now and produces a unique table of probabilities for the state of the world tomorrow. The quantum equations lay out many possible futures, but they deterministically chisel the likelihood of each in mathematical stone. Much like Newton, Schrödinger leaves no room for free will.”
Perhaps the idea of QM randomness is a physics urban legend rooted in a misunderstanding of probability. Greene’s remark lends philosophical support to the MWI, though. I see no reason to believe that the existence of our spacetime precludes the existence of an infinity of others.
Secondly, the subjective ‘now’ is simply the feeling of the immediacy of an experience. It’s wholly explained as an artifact of consciousness.
Third, “questions about how all that structure came into being” are unanswerable and of the kind typically posed by science fiction writers. Perhaps the computation was completed prior to the result being impressed onto the hologrammatic medium of spacetime. Of course, the computation would have to take place in some sci-fi meta-time … but that ‘compute-print’ sequence would answer your “all at once” concern.
Lastly, light cones are causal boundaries and irrelevant in determining what events are co-real—what events coexist. In the block universe, all events are co-real.
1. “Perhaps the idea of QM randomness…”
From a God’s-eye view, if one believes in the MWI (which, BTW, I don’t), then QM randomness does go away, but only from that God’s-eye view. From the perspective of any given branch, reality still appears random. The Schrödinger equation does evolve deterministically, but measurements involve probabilities, even in the MWI.
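(A toy sketch of both halves of that claim, illustrative only, with a made-up two-level Hamiltonian: the unitary evolution is identical on every run, and all it deterministically produces is the table of outcome probabilities Greene describes.)

```python
import numpy as np

t = 0.4                                # an arbitrary evolution time (hbar = 1)
# U = exp(-i X t) for the made-up Hamiltonian H = Pauli-X
U = np.array([[np.cos(t), -1j * np.sin(t)],
              [-1j * np.sin(t), np.cos(t)]])

psi0 = np.array([1.0, 0.0])            # start in |0>
psi_t = U @ psi0                       # deterministic: identical on every run

probs = np.abs(psi_t)**2               # Born rule: the "table of probabilities"
print(probs)                           # approximately [0.848, 0.152]
```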
I don’t rule them out, but I’m struck by how the main place we find ideas of extra dimensions and multiple worlds is comic books and science fiction. And some mathematical theorists who, I suspect, have gotten “lost in the math.”
“Secondly, the subjective ‘now’…”
Okay, but if the BUH is correct, and all moments exist equally, why do we experience only the ‘now’ and why do we share it with others? If the BUH is correct, why haven’t our senses evolved to have some sense of it?
“Third, ‘questions about how all that structure came into being’ are unanswerable…”
Yeah, exactly my point. The BUH is science fiction. (See my post Blocking the Universe for details.)
“In the block universe, all events are co-real.”
This represents a fundamental misunderstanding of SR, a key tenet of which is that simultaneity is virtual — we can make no statements about events outside our light cone until information about that event reaches us. We can only speak of our own ‘now’ — it’s only in retrospect we’re able to define some event with space-like separation as having happened “simultaneously” with some event in our past.
2. Wyrd, regarding MWI:
This Wikipedia article lists well over a dozen QM Interpretations, all of which are metaphysics (philosophy) and not verifiable in principle:
Consequently we can’t believe in any of them. However I appreciate the creative boost to science fiction they’ve provided, which is their most valuable contribution.
The ‘now’:
We experience the completely subjective ‘now’ as the momentary immediacy within the stream of consciousness we’re experiencing. All of our conscious moments feel like now’s. If we’re not separated by too much distance and not noticeably accelerating relative to one another, my feeling of ‘now’ will correspond in clock time to your feeling of ‘now’ by virtue of us both being at the same temporal coordinate of spacetime. There is no objective ‘now’ in the universe that we can experience together—‘now’ is a feeling.
Origin of Spacetime
That’s the same age-old unanswerable question of “where did everything come from”? It’s completely irrelevant to the geometry of the universe. Proposed explanations include God, “everything always was and always will be” and a Boltzmann universe that just ‘popped’ into existence in its entirety. Being unable to explain something’s origin doesn’t make that something disappear.
Wyrd, light cones are a mathematical abstraction describing a so-called “zone of causality.” But you’re claiming that everything outside our light cone doesn’t exist—isn’t real! You surely realize that light cones expand with time. According to you that means that, as our light cone expands, events become real that weren’t real before.
Setting aside the fact that the universe contains an infinity of light cones, let’s start a brand new light cone with a flash of light as Wikipedia’s “light cone” article describes:
“In special and general relativity, a light cone is the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime.”
Let’s locate our new flash of light about eight light-minutes away, about the distance from the sun to the earth. According to your statements, none of us is real to an observer within that light cone until the light cone expands to reach us, at which point we magically become real. Hokey smokes Wyrd! How about citing for our benefit a couple of accessible references of practicing physicists who agree with your “light cones create reality” proposal. Thanks in advance.
3. Guys,
This is an interesting discussion, but the rhetoric seems to be escalating. Please do me a favor. Let’s keep this friendly and respectful, and try to disagree without provoking each other.
4. Stephen,
The status of interpretations of quantum mechanics can be affected by observations. For instance, if an actual physical wave function collapse is ever detected, it would falsify any interpretation without a collapse (such as the MWI). Conversely, as larger and larger objects are held in quantum superposition, it puts increasing pressure on the assumption that a collapse actually happens. If a conscious being were ever put in superposition, it would be tough to argue that versions of them collapse out of existence without direct evidence.
For the MWI in particular, Brian Greene and others have noted that it is possible, in principle, to detect interference between decohered branches (worlds), albeit very difficult. I read somewhere it was equivalent to trying to figure out how the gravity of Jupiter affects the orbit of the ISS.
5. The thing is, the MWI does have a form of, for lack of a better word, “collapse”.
It certainly has measurement, right? Alex can measure vertical electron spin and end up in separate branches [Alex-up] and [Alex-down]. Both branches now have a wave-function in a known state. Repeating the same measurement returns the same result, so the wave-function has, in some sense, collapsed as a result of the measurement.
Measurements, interactions, whatever you want to call them, do change the measured wave-function.
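(To pin down what “a wave-function in a known state” and “repeating the same measurement returns the same result” amount to operationally, here is a toy sketch, illustrative only: it follows a single branch and samples outcomes with the Born rule, so it is collapse-style bookkeeping rather than a model of the universal wave function.)

```python
import numpy as np

up = np.array([1.0, 0.0])    # Z-basis spin eigenstates
down = np.array([0.0, 1.0])

def measure_z(state, rng):
    """Projective Z measurement: Born-rule probability for 'up', then the
    per-branch post-measurement state is the corresponding eigenstate."""
    p_up = abs(np.vdot(up, state))**2
    outcome = 'up' if rng.random() < p_up else 'down'
    return outcome, (up if outcome == 'up' else down)

rng = np.random.default_rng(0)
psi = (up + down) / np.sqrt(2)       # equal superposition before measurement

outcome1, psi = measure_z(psi, rng)  # Alex's first measurement
outcome2, psi = measure_z(psi, rng)  # repeated on the same branch
print(outcome1, outcome2)            # the second result always matches the first
```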
6. When it comes to collapse, we can talk about an epistemic collapse or an ontic collapse. Epistemic collapse is about what an observer knows and is trivially true for all interpretations. Ontic collapse is all the possible outcomes but one ceasing to exist. The MWI doesn’t have an ontic collapse.
For the MWI, I suppose we could talk about an accessibility collapse, where all the outcomes but one become inaccessible, which would be true on any one post-measurement branch.
The MWI definitely has measurement. It’s just that the observer has no special role. Any measurement-like event, that is, any magnification of the results of an individual quantum event, such as a radioactive particle contacting DNA and causing a mutation, leads to diverging branches.
7. Call it “accessibility” collapse if you want, but the wave-function does change instantly as a consequence of the interaction. (It would have changed in a different way had Alex measured the horizontal axis.)
8. Remember, the MWI is QM without the ontic collapse. Nothing happens instantaneously in the pure quantum formalism. Certainly decoherence, which is what leads to the loss of access to the other branches, happens very rapidly depending on the environment, but that’s not the same as the instantaneous collapse.
9. “Nothing happens instantaneously in the pure quantum formalism.”
How do you account for what happens when Alex makes a measurement? There is still a change from the wave-function’s state before and after.
10. Under both Copenhagen and MWI, Alex experiences the same results in her subjective timeline, although under MWI that timeline is just one of many branches. But MWI’s accounting of the wave function is different. It certainly evolves as a result of the measurement, but doesn’t have the sharp discontinuous abrupt change stipulated by Copenhagen and related collapse interpretations.
11. But in some cases, such as this one, it has to. Even from the God’s-eye view of all branches, the measurement Alex performs causes an abrupt change to the wave-function as a consequence of the measurement. The interaction causes the wave-function to be in a different eigenstate, and that change happens instantly.
12. Depending on the version of MWI, prior to the measurement, there either are two identical branches or just one branch. In either case, the “particle” is in a superposition of all possible states.
Regardless of the version, after the measurement, there are two branches, and in both branches the “particle” is in a known eigenstate. Repeating the same measurement gives the same result.
So, because of the interaction, the wave-function is different before and after.
13. Not sure whether the difference here is terminology or ontology. I’ll describe my understanding and see which way it might shake loose.
Under MWI, the particle remains in a superposition of all its states. Depending on the exact measurement, it does cause at least some of the branches of that superposition to decohere from each other, so the wave function is definitely altered. But it’s a very different account from the Copenhagen one, a continuous (albeit rapid) evolution, instead of multiple possible states instantly transforming into one.
14. It is definitely a different account in that the MWI preserves all quantum “choices” while the CI collapses those into a single outcome. On the other hand, as you say, the wave-function is definitely altered.
In beam-splitter experiments, the difference between the two seems more stark. As we’ve discussed in the past, in two-slit experiments, it’s not clear (to me) exactly where (or even if) branching occurs under the MWI. Spin-measurement experiments are a bit unique in making a measurement, but still having an evolving quantum system post-measurement.
A spin measurement can only have two outcomes, hence two branches. In both of those Alex has altered the original wave-function as a consequence of the measurement. (Since one Alex has a spin-up particle and the other has a spin-down, they obviously don’t have identical wave-functions, but they each have a particle with a wave-function in a known eigenstate.)
So it seems there are different … levels? types? kinds? … of “collapse” depending on the situation.
There is one kind where an in-flight particle’s momentum description means it’s everywhere and nowhere, but the point-like interaction “collapses” that description “instantly” throughout space (seeming to violate locality). Beam-splitter experiments have that kind. In the MWI, that’s just separate branches.
There’s what might be happening in two-slit experiments — the branching is in where the particle lands, and again there is the “collapse everywhere” type. If the branching involves the two paths through the slits, then I need a quantum mechanic to explain the exact physical mechanism there, because it appears that kind of branching merges at the point interaction.
Then there’s what happens when we make spin measurements, which seems a different situation. There’s clearly branching, but rather than “collapse” in the “goes away” sense there is “collapse” in the “sudden shift” sense.
I do think it’s to the point we need an expert quantum mechanic.
15. When it comes to the MWI, the standard answer is, of course, whatever the raw mathematical quantum formalism says. Definitely having an expert quantum physicist on hand would help. I think I have a very shallow feel for it, but I quickly get out of my depth if you hit me with questions requiring too deep an understanding.
Given the way the word “collapse” is used in quantum physics, I think applying it to what happens under the MWI is probably adding confusion rather than clarifying. MWI is QM without the collapse. The collapse is usually defined as an abrupt, discontinuous, instantaneous event. In straight Copenhagen, it’s a non-mathematical event that ends the evolution modeled by the math. (In objective collapse interpretations, it’s additional mathematics added to that formalism.) In MWI, I think all we can talk about is the illusion of the collapse. (Phenomenal collapse?)
16. “The collapse is usually defined as an abrupt, discontinuous, instantaneous event.”
Right, and my point is that I think it does happen under the MWI, too. And I’m not sure in either case if there is math for it.
What bothers people under the CI is that the Schrödinger equation is a linear equation — fully deterministic. But any “measurement” affects it, and I’m just not sure the MWI actually does get entirely around that fact.
I don’t think it’s an illusion. Measurement does affect the wave-function ontologically.
17. “…‘now’ is a feeling.”
Sure, but where does it come from? In an evolving universe, it’s just part of that evolution. In a block universe, time is static and there is no flow, so why does ‘now’ select a special moment in the static block? Why that moment? Why do we, here on Earth, appear to be at the rough time coordinate 13.8 billion years when the block must contain trillions of years?
“Being unable to explain something’s origin doesn’t make that something disappear.”
Of course not, but we can question the implications of those origins. One story asks us to believe in a Big Bang from which the universe evolves and we see the result of 13.8 billion years of a machine that calculates itself over that time. Another story asks us to believe all that generated structure — and much more — sprang into existence all at once. It isn’t just the last 13.8 billion years of structure — it’s the trillions of years that follow, too. And surely the block must extend beyond our visible universe, so it’s a vast amount of structure in time and space. Something has to account for it.
As you said yourself, “Perhaps the computation was completed prior to the result being impressed onto the hologrammatic medium of spacetime. Of course, the computation would have to take place in some sci-fi meta-time” Exactly. It has to be accounted for. The BUH suggests a dual creation: The calculation first, and then the implementation. But what was the calculation done with?
“But you’re claiming that everything outside our light cone doesn’t exist”
No, that’s not what I said. What I said was: “A key tenet of [SR] is that simultaneity is virtual — we can make no statements about events outside our light cone until information about that event reaches us. We can only speak of our own ‘now’ — it’s only in retrospect we’re able to define some event with space-like separation as having happened ‘simultaneously’ with some event in our past.”
(See my post Blocking the Universe for details.)
18. Mike, I always try to avoid the personal and be friendly and respectful, although at times one’s enthusiasm can be misconstrued.
Perhaps you’re responding to my use of the phrase “Hokey smokes!” which is a phrase frequently used by Rocky the Flying Squirrel addressing his buddy Bullwinkle the Moose, as in, “Hokey Smokes, Bullwinkle!”. It’s the equivalent of “Gosh!” or “Gee Whiz!” and simply expresses mild surprise, which was my intention in using it. I see from Wikipedia that the Rocky and Bullwinkle show appeared on TV in 1959 when I was a teenager. So both the show and myself are relatively ancient. I didn’t realize at commenting time that many folks might not understand the cartoon reference. Mea culpa
Regarding “wave function collapse”: The wave function is a probability function—a mathematical entity. I realize that there’s been a lengthy (and ongoing) debate about whether wave functions are mathematical or objectively real. In 2012, for instance, Colbeck and Renner presented an argument favoring the objective reality of the wave function:
… but they admit that “Our result is based on the assumption that an experimenter can, in principle, ‘freely’ choose which measurements he would like to carry out … Hence, if one is ready to accept this assumption, our answer can be considered final.”
So it looks like their argument is rooted in the philosophical quandary about the reality of “free choice.” Since this whole business is extremely murky territory, I continue to favor the “function as mathematical entity” perspective.
I recall Greene’s statement as being that it may be possible, in principle, to measure some remnant interference from the decohered waves, but I can’t locate his exact quotation even after scanning The Hidden Reality. Can you point me to it? I ask because “may be possible” and “is possible” are distinctly different views of possibility.
19. Stephen,
What’s always made me think the wave is physical is that something causes the interference effects. The success of quantum computing, which crucially depends on the reality of the wave and those interference effects for its parallel processing, also seems to increase support for it.
I’m having trouble finding the exact passage myself, at least the one I remember, in The Hidden Reality, which is making me wonder if I saw it somewhere else, possibly by another author. However, this passage from Hidden seems to capture the same idea. (It’s at the end of chapter 8.)
Second, in some situations, the predictions of the Many Worlds approach would differ from those of the Copenhagen approach. In Copenhagen, the process of collapse would revise Figure 8.16a to have a single spike. So if you could cause the two waves depicted in the figure—representing macroscopically distinct situations—to interfere, generating a pattern similar to that in Figure 8.2c, it would establish that Copenhagen’s hypothesized wave collapse didn’t happen. Because of decoherence, as discussed earlier, it is an extraordinarily formidable task to do this, but, at least theoretically speaking, the Copenhagen and Many Worlds approaches yield different predictions.¹² It is an important point of principle. The Copenhagen and Many Worlds approaches are often referred to as different “interpretations” of quantum mechanics. This is an abuse of language. If two approaches can yield different predictions, you can’t call them mere interpretations. Well, you can. And people do. But the terminology is off the mark.
Greene, Brian. The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos . Knopf Doubleday Publishing Group. Kindle Edition.
Greene’s description of Many Worlds is one of the best I’ve seen. He has a powerful gift for explaining complex concepts in an approachable manner.
It’s bugging me that I can’t find the passage I remember. I may keep looking for it.
20. Wyrd, our feeling of ‘now’ does not select a special moment. As I wrote previously, all of our conscious moments embedded in the BU are experienced as now’s—all of them. No single moment of consciousness is privileged over any other and no location on the temporal dimension is privileged over any other. They’re all real. Our worldtubes (“we”) are 13 or so billion years distant from what we believe was the Big Bang because that’s our location on the temporal dimension of spacetime relative to the temporal location we’ve computed as the Big Bang’s.
The BU doesn’t imply that everything sprang into existence all-at-once. As I wrote, always having existed, “always was and always will be” is a possibility. We don’t have to account for the BU’s origin for it to exist. And the “dual creation” you suggest would actually be an infinite regress. If the BU were created in some meta-time then that meta-time reality would have had to be created in some meta-meta-time and so on, ad infinitum. As I wrote initially, the BU origin question is unanswerable and always will be unanswerable. The important point is that the origin of the BU is irrelevant to its existence.
Of course the universe extends beyond our visible universe—our Hubble Volume as it’s called. As you know, the Hubble Volume, with the earth at its center, increases in size by one light year every year as light from more distant regions reaches us. Your premise seems to be that those newly visible regions didn’t exist before the light arrived—that those new observables just mysteriously popped into existence and became real.
I completely agree with you that “we can make no statements about events outside our light cone until information about that event reaches us.” But that doesn’t mean that those events do not exist. See the “Chewie’s descendants” statement below … that’s what you’re talking about. That’s from Brian Greene’s explanation of “now slices” and “now-lists” in Chapter 5 of The Fabric of the Cosmos. (By ‘now’ Greene is referring to the common usage of the clock time correlated with the feeling of a particular moment’s ‘now’.) You can fetch Chapter 5 from my Google Drive here for a few days:
Read his explanation until you get to:
At such an enormous distance, it takes an enormous amount of time for messages to be received and exchanged, so only Chewie’s descendants, billions of years later, will actually receive the light from that fateful night … The point, though, is that when his descendants use this information to update the vast collection of past now-lists, they will find that the Lincoln assassination belongs on the same now-list that contains Chewie’s just getting up and walking away from earth. And yet, they will also find that a moment before Chewie got up, his now-list contained, among many other things, you, in earth’s twenty-first century, sitting still, reading these words.
That bold-font phrase is what you’re talking about but Greene perfectly makes the case that the reality of an event is distinct from one’s ability to know about it.
21. What makes ‘now’ special is that it divides what we remember from what we can only anticipate. Our consciousness is serial, with a past, a now, and a future. In the BU, exactly as you say, no moment is privileged. An evolving universe accounts for our perception of the ‘now’ but the BU does not. It has the same problem as Tegmark’s MUH — it doesn’t account for the subjective flow of time.
My point about the implied origin of the BU is that it’s a huge ask. It’s a key reason I don’t believe in the BUH — I think the need to generate all that structure in advance is far too big an ask for a theory.
And the BUH absolutely does imply all that structure is created in advance — that’s the key feature of the theory; it’s what distinguishes it from evolving and/or non-deterministic cosmologies.
My point about the visible universe has nothing to do with light speed, but with pointing out how large the BU has to be. It’s not just our visible part, but the whole thing (whatever that is). And it’s the whole thing from Big Bang to however many trillions of years it exists. That’s a mind-boggling amount of structure to have to create off-line. Exactly as you say, it leads to infinite regress.
The way out is understanding that the universe calculates its own structure as it evolves. That solves the structure issue and the ‘now’ issue.
I’m glad you agree about statements outside our light cone. The next step is understanding that being able to say, after the fact, that some point in your past was “synchronous” with some point that, at that point, was in the “future” of some other perspective, does not require that “future” exist at that moment in time.
A key aspect of SR is that simultaneity is virtual and relative — it’s a matter of your perspective. You can’t talk about something “five years in the future” being real when it takes ten years to demonstrate it.
22. Wyrd, since all of your assertions contradict relativity physics, I’m repeating my request for verifiable support for your claims from recognized practicing physicists. Here is a list of your claims (italicized) that require substantiation along with my brief relativity-physics-compatible response:
1. Wyrd: The subjective feeling of flowing time proves that objective flowing time exists.
Our perceptions are simulations of sensation events and they don’t correspond to physical reality. Note that physics is unable to find objective flowing time. No experiment has ever been proposed, let alone run, to verify its existence. ‘Time’ is the temporal dimension of 4-dimensional spacetime and nothing else. I’ve proposed that our feeling that there’s an objective “flowing time” is an externalization of the stream of consciousness, which is a fact of normal consciousness that needn’t be accounted for in physics.
2. Wyrd: ‘Now’ is a single moving privileged moment that marks the current temporal location of objective flowing time.
Physics is unable to find a ‘now’ as well—there is no ‘now’ in the universe or in the laws of physics. ‘Now’ is a feeling, an artifact of consciousness, and in BU terms, every conscious moment embedded in the BU is experienced as (feels like) a ‘now’ when it is experienced.
3. Wyrd: Things whose origin is unknown cannot and do not exist.
No comment from me—that’s not a credible assertion.
4. Wyrd: The BU implies that all structure is created in advance.
There’s no such thing as an “in advance” of the BU, which would be a time that precedes existence. Imagining that impossibility is what leads to an infinite regress of ‘meta-times,’ not the size of the universe as you just wrote.
5. The universe calculates its own structure as it evolves.
Absent objective flowing time, whose existence cannot be demonstrated, that’s an impossibility and its ‘calculation’ mechanism is inconceivable.
6. If we cannot observe something, it doesn’t exist.
Like item 3, that’s also not a credible assertion. Note again, from Brian Greene’s explanation:
The phrases “the same now-list” and “now-list contained” mean that the events referred to are co-real—they all exist together (all-at-once), regardless of their location on the temporal axis of spacetime.
At this point, Wyrd, we’ve both described our viewpoints and further discussion seems unwarranted. However, you still haven’t identified any practicing physics professionals and credible physics that support your opinions as itemized above. So please provide them if you respond, preferably numbered for the claim substantiated. Many thanks in advance.
23. My views are entirely mainstream relativity physics. I’m not asserting anything unusual. As I have made clear several times now, I am questioning the view that interprets the relativity of simultaneity the way the BUH does. I see it as contrary to a fundamental tenet of SR: that “now” is strictly a local concept, that simultaneity is virtual and a matter of geometric perspective.
The BUH is an interpretation of SR that is contrary to what SR actually says.
I agree there is no point in discussing it further.
24. Wyrd, ‘mainstream’ means that an opinion is the dominant, conventional view. If your view is entirely mainstream then where is the list of supporting citations from professional physicists I’ve requested?
From my reading, those who disagree about BU being a direct implication of the RoS are primarily philosophers rather than physicists. And they typically misunderstand consciousness—they tend to believe that our perceptions accurately represent the external world so that, for them, a feeling of flowing time proves the objective existence of flowing time. It doesn’t, any more than the perception of a red rose proves that the rose has a color.
4-dimensional spacetime is an implication of RoS, not an opinion or ‘interpretation.’ An implication is a logical structure meaning ‘a consequence of’ as in p->q.
So who, then, are we to believe about the reality of spacetime? Not being a physicist, I’m sticking with Einstein and the great majority of physicists. But, being a patient sort, I’ll continue to wait for that list of references of credible, professional physicists who support your opinion.
25. I thought we were done discussing this.
You’ll find what I’ve said about SR and simultaneity in any textbook that addresses SR in-depth. What you won’t find is assertions about points outside a given light cone being “co-real” — that part is an interpretation. In fact, you’ll find SR says nothing can be said about such points and that simultaneity is virtual and depends on your frame of reference.
Consider frame M with three “simultaneous” events located at -10, 0, and +10, on the X-axis, and label them “A”, “B”, and “C”, respectively. Anyone in frame M claims A, B, and C, happen simultaneously.
Frame J has a positive velocity relative to M, so anyone in frame J sees the same events as happening in sequence, first C, then B, finally A. On the other hand, frame K has a negative velocity relative to M, so anyone in frame K sees those events in the sequence A-B-C. The timing between them depends on the relative speed.
You’ll find examples like this in any textbook, and they illustrate how, in SR, simultaneity is virtual and depends on your frame of reference.
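(For concreteness, the same textbook example run numerically, illustrative only: units where c = 1 and an arbitrary relative speed of 0.5c, since the example above doesn’t fix one. It applies the Lorentz time transform t′ = γ(t − βx) to the three events.)

```python
import numpy as np

def boosted_time(t, x, beta):
    """Time coordinate of event (t, x) in a frame moving at beta = v/c
    along +x relative to M (units where c = 1): t' = gamma * (t - beta * x)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return gamma * (t - beta * x)

events = {'A': -10.0, 'B': 0.0, 'C': +10.0}  # x positions; all at t = 0 in frame M

for frame, beta in (('J', +0.5), ('K', -0.5)):
    times = {name: boosted_time(0.0, x, beta) for name, x in events.items()}
    order = '-'.join(sorted(times, key=times.get))
    print(f"frame {frame}: order {order}")
# frame J: order C-B-A
# frame K: order A-B-C
```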
I can’t account for why people believe the BUH. Or SUSY. Or string theory. Or the MWI or the MUH. All I can say is I don’t, and I’ve explained why.
26. Mike, I took a look at The Hidden Reality, Chapter 8’s Note 12, which says in part:
“To adjudicate between these two pictures, imagine the following. After you measure the
undone. …)
I’m unsure just how one goes about, in fact, not just in principle, having “someone reverse the physical evolution.” And this seems to be an “in principle” on top of another “in principle.”
Any clues?
27. Stephen,
None from me. Greene admitted the difficulty.
The main point is that the theories make different predictions about what would be observed, and experiments along these lines are possible in principle, even if we don’t currently know how to do them. (If we did, I’m sure someone would have already.) No one knew how to test Einstein, Podolsky, and Rosen’s assertions in the 1935 EPR paper until John Stewart Bell figured out a way in 1964.
28. So all you need is a time machine, and you can prove or disprove the MWI! 😉
There is also the implications of having to coordinate separate branches such that one world’s spin-up particle can be merged with the other’s spin-down. Not sure how such an experiment is possible without a time machine that affects multiple branches.
In that footnote, Greene says something that raised my eyebrows: “In the Many Worlds approach, by contrast, both the spin-up and spin-down outcomes occur, so, in particular, the spin-up possibility survives fully intact.” I’m not quite sure what he means by that.
In the MWI, the X-axis measurement has a definite outcome, up or down, resulting in two branches, both of which have a wave-function in a definite state. As Greene says, that means there is no knowledge of the other axes. So I’m not sure what he means about the “spin-up possibility” being intact.
This example is what I was talking about above. The Z-axis measurement results in a wave-function in a definite state. The X-axis measurement does, too. Measurement, even in MWI, is one place time symmetry breaks down.
29. I’m sure to Copernicus’ contemporaries, it would have seemed like magic was required to test his theory against Ptolemy’s. Auguste Comte couldn’t imagine how we could ever know the composition of the stars. And as I noted above, Einstein took grief about his EPR paper for engaging in metaphysical speculation.
I don’t think coordinating between the branches, in and of itself, would be the difficulty. Remember that it would be the same experimenters and equipment in both branches, just seeing different aspects of the same quantum system. But it would require an experimental design that could recognize which branch it was in and alter its activity accordingly.
On not understanding what Greene means, this actually is the central difference between Copenhagen and the MWI. Under MWI, there is no collapse. The wave function, with all its branches, continues to evolve according to the math. Anything else wouldn’t be the MWI.
So under MWI, both spin states continue to exist. The measurement interaction decoheres them from each other, but they’re both still there. The trick of these experiments would be to test this prediction. If the two branches can be manipulated in such a way that shows they’re both still there, then MWI is right. Under Copenhagen (or any other collapse interpretation), the other branch no longer exists and this should be impossible.
30. Understood, but I don’t see that that first axis measurement is preserved, even in the MWI. Both decohered branches have a wave-function that has destroyed the first measurement, rather than just one (under the CI) wave-function that has destroyed the first measurement.
MWI doesn’t have collapse, but it does have measurement, and measurement puts the wave-function into a known eigenstate. (Or rather: multiple branches with multiple wave-functions each in one of the possible measurement outcome eigenstates. In this case two, Up and Down.)
31. I think there are different ways we can account for it. Remember that under MWI, there is one universal wave function.
We also have a subset, the wave function of the original quantum system being measured: the system-specific wave function.
Under MWI, that wave function continues to exist after measurement, although with the branches related to the measurement decohered from each other, its overall evolution becomes much more difficult to track.
As a practical matter, after measurement, we can choose to model the portions of the system we have access to as its own wave function: a result-specific wave function. Copenhagen reifies this final one as the one true wave function of the system, but under MWI the others are all still there.
The experiments Greene discusses assume that the original system-specific wave function is still a meaningful concept, and attempt to test its predictions.
32. I’m afraid I don’t follow. What exactly does the “system-specific wave function” describe? The electron?
If so, then yes, absolutely it continues. The point is the measurement changes it.
33. Yes, the system-specific wave function refers to the electron, or whatever quantum object being measured.
Definitely the measurement changes it. My point is that, under the MWI, the changes can be accounted for at a scope Copenhagen asserts no longer exists, leading to MWI specific predictions.
34. How? What scope retains information about the first measurement?
The two branches with Left and Right measurements could have come from a spin-up or spin-down eigenstate, or from any superposition of states. Once the X-axis measurement is made, where is the information about the previous Y-axis measurement?
35. If we’re talking specifically about Greene’s proposed experiment, we’re at the limit of my knowledge. I know information is supposed to be conserved in QM systems, and under the MWI, everything is deterministic and reversible, so in principle it should be retained in the system-specific wave function. So if its evolution could be reversed, you should be able to get it back to the original state. It seems like doing this would require extremely controlled conditions, maybe a quantum eraser type experiment on steroids.
36. There is a view that spin is a single bit of information, Up or Down, and spin axes are conjugate pairs, so it’s only possible to have a single bit’s worth of information — the spin Up/Down on a chosen axis. So no information is lost. That single bit is distributed among possible measurements.
Per this view, this is why measuring another (orthogonal) axis returns a random result when spin is known on a given axis — that knowledge exhausts the knowledge possible of the system. It can only return a random result.
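(That “single bit” point can be read straight off the state vectors. A small sketch, illustrative only, using the standard spin-1/2 Z and X eigenstates: once the state is a Z eigenstate, the Born-rule probabilities for the two X outcomes are each 1/2.)

```python
import numpy as np

z_up = np.array([1.0, 0.0])          # Z-basis eigenstates
z_down = np.array([0.0, 1.0])

x_up = (z_up + z_down) / np.sqrt(2)  # X-basis eigenstates (Pauli-X eigenvectors)
x_down = (z_up - z_down) / np.sqrt(2)

# After a Z measurement the state is, say, z_up: the one available bit is
# "spent" on the Z axis, so an X measurement is maximally uncertain.
p_x_up = abs(np.vdot(x_up, z_up))**2
p_x_down = abs(np.vdot(x_down, z_up))**2
print(p_x_up, p_x_down)              # 0.5 and 0.5 (up to float rounding)
```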
The Schrödinger equation is deterministic and reversible so long as no measurement occurs. Measurements cause instant changes that I’m not sure are reversible even under the MWI. The MWI absolutely preserves all possible outcomes, but I seriously question that it’s reversible.
37. Mike, as I read it, Greene’s “in principle” remarks would only allow us to discriminate between CI and MWI. I’m not familiar with the other 10 or so QM Interpretations (who is?) so it’s possible that the “in principle” reversing of reality might lend credence to some of those.
Considering the nature of the obstacles to any “in principle” resolution, and the number of Interpretations that might be affected, I’m still comfortable with my view that all of the QM Interpretations are metaphysics (philosophy) rather than science. But with my commitment to rationality, if someone gets a handle on reversing reality before I die, I’ll revisit my opinion. And if you happen to encounter any signs of success in reversing reality, I’m sure you’ll post about it. In fact, you might reverse reality sufficiently on your own to post about it last week … 😉
38. Stephen, fair enough. Some interpretations are philosophy, simply another way of talking about what Copenhagen talks about. But others, like the MWI, de Broglie-Bohm, or GRW, make different predictions about reality. As Greene pointed out, these are different theories.
6. I like your Einstein quote, SAP; and the cartoon: it makes sense that our Creator would compare us to raisins, or to characters in a novel that He had written. Characters die in stories, but they “live” on in our culture. And yet, religious people are obsessed with living forever in heaven. So, it is a strange thing for Einstein to have said. It reminds me of the ancient Greek thought that we need not be worried about death because before we die we are alive and afterwards we will not even be there. Perhaps some people found that comforting because they wanted it to be comforting. Interestingly, that thought does not need a Block Universe.
1. Thanks Martin. That Greek thought is the Epicurean take on death. I do personally find some comfort in it. Many Epicureans in the ancient world had an epitaph on their tomb: “I was not; I was; I am not; I care not.”
In the end though, I doubt the fear of death can ever be completely shaken. Evolution has just ingrained it too much into our psyche to be easily dismissed.
1. Yes, we evolved to try to win, and death is the losing of everything. The fear of death seems to be a biological, rather than a philosophical matter. Although the associated idea of reality going on without us, as though we had never really mattered, does make more sense on a philosophy of time like presentism, less on the Block view. So, perhaps that was what Einstein meant, that reality did not go on forever without us, after our deaths, but invariably included us. (Although if so, then that would not be much comfort to those left behind.)
7. I’m not a big fan of the block universe (or the “universe rigid,” as H.G. Wells called it). It makes perfectly logical sense, of course, but I feel like it’s a little too easy and simple of a model. In real life, physics keeps throwing these weird twists and quirks and kinks at us, which makes me suspicious of any simple and easy model of our universe.
I guess you could say I’m applying a reverse Occam’s razor here: whenever we’re talking about the entire universe, I think the simplest model is probably wrong.
1. Universe rigid? That’s an interesting phrase.
I can see what you mean. It seems like any characterization of the universe as a whole will always be an extrapolation of what our most fundamental theories currently say. Maybe that’s all it can ever be. We can’t imagine ever taking a position outside of the universe, outside of space and time, and viewing the whole thing. I mean, we can pretend like we’re imagining it, but it will always be a fantasy scenario.
8. Mike, the block universe is not metaphysics—it’s physics. Metaphysics is Philosophy and Physics is Science. When Parmenides, with his “what is, is” linguistic analysis, posits that all of existence is one whole and unchanging thing—that’s metaphysics.
The block universe, on the other hand, is science. The Relativity of Simultaneity (RoS) is a cornerstone of the Special Theory of Relativity of 1905, which was geometrized by Minkowski in 1908. The block universe is a direct implication of the RoS, an implication of the form ‘if p then q’. In this case, ‘p’ is relativity physics, repeatedly confirmed for over a century, and ‘q’ is the block universe. Minkowski introduced the term ‘spacetime’ to describe the block universe and in Minkowski spacetime the extent of the temporal dimension is endless. Minkowski’s geometric formulation of spacetime is integral to the follow-on General Theory of Relativity, which would not have been achieved without it. (Because of the morass of unfortunate connotations clinging to the term ‘time,’ I prefer referring to the dimensions of spacetime as length, width, depth and tempth, where ‘tempth’ is the temporal dimension. By the way, the temporal dimension is not spatial—it’s temporal.)
I suspect you’re very comfortable with the word ‘spacetime’ but, in your Presentist conception, the tempth dimension must be a zero-sized moving/flowing infinite plane that terminates existence for all 3-dimensional ‘past’ events behind it and brings into existence a universe-sized collection of 3-dimensional events that weren’t real before, all of this through an unidentified and undescribed mechanism. If that’s not your conception of Time, Mike, please let us know what it is. Were you to look for a scientific validation of the flowing time of Presentism, which I strongly recommend you do, you’d find nothing because no one has ever demonstrated the existence of flowing time or even proposed a credible experiment to test for it. Physics doesn’t recognize it, nor does physics recognize that ‘now’ you insist upon—there is no ‘now’ in the universe or in the laws of physics. Both flowing time and ‘now’ are completely subjective.
To explain the origin of our feeling of flowing time I propose that it’s owing to naïve realism. Without rigorous investigation, our naïve realism tends to externalize our conscious perceptions—we believe the world is what we perceive it to be. For example, we tend to believe that objects in the world have colors and we’re equally sure that the world is a noisy place. But both of those beliefs are false. The world contains differing wavelengths of light which we experience as color. And the world is completely silent—it contains compression waves that we experience as sound. In both these cases, and many others, we mistakenly externalize our perceptions—the perceptual contents of consciousness. The same naïve realism applies to the belief in a flowing time in the world. In this case, however, rather than externalizing a perception, we mistakenly externalize the fundamental characteristic of consciousness—its flowing ‘movie-like’ presentation.
Einstein concluded that spacetime directly implies what he called the “eternity of life”—the endless repeated experiencing of our lives which, like everything else, are permanently encoded in unchanging spacetime. In a quotation repeated in his New York Times 1955 obituary, Einstein identifies consciousness as the agent of that immortality (“… conscious life perpetuating itself through all eternity …”).
But Einstein’s “eternity of life” and the Besso quote’s “[Death] means nothing” are not necessarily comforting. While contemplating the endless re-experiencing of a lifetime that’s mostly agreeable and full of treasured experiences and relationships might be comforting, it is conversely horrifying in the case of an immortal lifetime characterized by suffering. But emotions do not dictate either our formulation or our acceptance of scientific realities.
For a thorough discussion of this entire topic, I invite you to download my paper “The Consequences of Eternalism” from:
1. Stephen,
Me calling the block universe metaphysics wasn’t meant to be a strong statement about it. I’m inclined to use that word because I can’t conceive of a test for it. But I don’t have a strong demarcation between theoretical science and metaphysics.
Certainly I can understand the logic from special and general relativity that leads to that conclusion. But we know relativity and quantum mechanics are incompatible with each other. There will eventually have to be revisions. It’s possible those revisions may alter the logic. And, not having ultimate knowledge, we can never know whether we have the full picture. We can test our fundamental theories to increasing decimal points, but there doesn’t seem any way to know that we have the full picture. No scientific theory is ever final. All are provisional pending new evidence. So unknown factors could frustrate the logical predictions.
Unless you know of a way to test the proposition. I’m very open to the possibility that things we can’t test today may be testable in the future. Are there any conceivable tests of the block universe in and of itself?
It’s the “endless repeated experiencing” part of this that I struggle with. How is it being repeated? In what sense? Is the person reliving their life over and over? If not, then what exactly is repeating? Where is the repeat happening? What causes it to loop back to the beginning?
It seems like, at the BU level, an experience is a static, unchanging thing. It’s only for the pattern within the block (us) having the experience that it’s the dynamic process we refer to as “experience”, and we only seem to get one shot at it, the sequence we call life.
2. Mike, I’ve numbered my responses to address the separate issues you raise.
1. On Metaphysics
The Wikipedia definition of ‘metaphysics’ is “the branch of Philosophy that examines the fundamental nature of reality.” Categorizing the block universe as philosophy diminishes its scientific credibility whether or not that’s your intention. Black holes and gravity waves are also direct implications of relativity physics but they didn’t transition from Philosophy to Science by virtue of being experimentally detected—they were always scientific implications of relativity physics.
2. On Testability
Physicists can, and have, conceived of tests for the block universe so we amateurs don’t have to. Vesselin Petkov discusses both testability and confirmation in his paper “Is There an Alternative to the Block Universe View?” From the Abstract:
“The argument advanced in the paper is that if the world were three-dimensional the kinematic consequences of special relativity and more importantly the experiments confirming them would be impossible.” You can fetch the PDF at:
Click to access Petkov-BlockUniverse.pdf
I also recommend Petkov‘s closely related paper, “On the Reality of Minkowski Space”:
Click to access V.%20Petkov,%20On%20the%20Reality%20of%20Minkowski%20Space.pdf
3. On Relativity, Quantum Mechanics and the “Full Picture”
As I’ve commented before on your blog, relativity and QM are not incompatible with each other. Quoting Petkov again:
A consistent conceptual analysis … almost immediately identifies an implicit assumption —we have been taking for granted that quantum objects exist continuously in time although there has been nothing either in the experimental evidence or in the theory that compels us to do so.
Further, the position that an incompatibility between relativity and QM is sufficient reason to discount relativity’s direct implication of the block universe doesn’t seem valid. If you believe it is, would you claim to a physicist that, because of a supposed incompatibility of relativity and QM, we should disbelieve the relativity of simultaneity, the distortion of spacetime by matter, the slowing of accelerated clocks and the reality of black holes and gravity waves?
4. Your Questions
Regarding the endless re-experiencing, you ask:
All of your questions are answered in my paper so I invite you to investigate therein. I’ll take a brief shot here, but the paper has the definitive answers. To begin, here’s an excerpt from the abstract:
Our lives in the block universe are unchanging recordings in spacetime that do not ‘happen’ and have never ‘happened’ but, rather, encode a series of conscious streams which are continuously being experienced from every conscious point in the recording.
As Einstein wrote, “… mind is immortal in the same sense as the body,” meaning that our consciousness is a permanent feature of spacetime. As such, our unchanging conscious moments are continuously and repeatedly experienced as components of a stream of consciousness. There is no “loop back to the beginning”—all of the conscious moments of all of the conscious organisms that have ever existed or will exist are being experienced always. As physicist Brian Greene wrote: “Every [conscious] moment is.”
At the BU level, the configuration of certain static and unchanging events as functioning brains gives rise to consciousness, i.e., to dynamic experiences. But those experiences are not those static events—they’re the outcome of that configuration of static events. It’s true, as you say, that we only seem to get “one shot at the sequence.” That’s because our awareness is always confined to a single stream in the ongoing stream of streams. The continuous re-experiencing provides for repeated “shots.”
After you’ve read “The Consequences of Eternalism” I’ll happily address any additional questions about the material.
The final “always” doesn’t seem to belong. Time (for a given observer) is a dimension along the block universe; it isn’t something you can pop out of the block, and then map the entire popped-out timeline onto each point in the block.
1. That’s the tricky and sometimes confusing result of using tensed language in this type of discussion. Perhaps it’s more correct to say of all of those conscious moments that they never stop being experienced.
Or similar … that’s the meaning I intended though.
3. Mike, most physicists believe in the block universe because not doing so is equivalent to not believing in relativity physics. Even physicist Lee Smolin knows that relativity’s BU implication must be accepted until he can construct his Shape Dynamics replacement for relativity. He wrote:
… Einstein’s theories of relativity are the strongest arguments we have for time being an illusion masking a truer, timeless universe.
As far as I know, Smolin’s years’ long effort to reclaim his free will is still far from succeeding.
Since you clearly reject the reality of the block universe, I would like to read your explanation of your alternate conception of Time, hopefully evidence-based. The only other option appears to be the wildly popular naïve realistic belief in Presentism with its flowing time, a conception which I find somewhat incoherent as I described in my original comment:
The central issue in all of this is, of course, what is real?
I don’t think a comment in this block universe post would allow sufficient space and focus to describe your belief properly so I suggest a new blog post on the topic. And you’d no doubt like to research further before posting. I’m certain it would be a most interesting post on a topic you don’t seem to have written about yet. I’m very much looking forward to reading it so Thanks in Advance!
1. Stephen,
1. As a prediction of existing scientific theories, I have no trouble calling the block universe science. I put it in the same category as the more grounded, but currently untestable, cosmology and multiverse predictions.
2. Glancing at the Petkov paper, it appears to be an argument that SR implies the BU. I’m onboard with that. But SR is a special case of GR, and we know GR breaks down in some situations (like the center of a black hole).
3. You’ve cited Petkov’s views on QM before. For me to buy a sparse existence argument, I’d need an accounting of what brings quantum objects in and out of existence. If the answer is any variation of “just is”, I can’t say I find it convincing.
My point about the QM and GR issues is that we know the combination isn’t the final answer. Future revisions could alter any currently untested prediction, including the BU one.
4. I read your interesting paper when it was Einstein’s Breadcrumbs. Is there a particular section in the new edition where this particular point gets addressed?
Can’t say I’m a fan of Smolin’s presentism views. Or most of his views in general.
“Since you clearly reject the reality of the block universe,”
I’m puzzled why you keep coming away with that impression. My credence in it isn’t as high as yours, but I do see it as an interesting and very credible possibility.
I don’t have well developed or strong views about time. I generally accept the account from GR, but I’m also open to the possibility it isn’t the final answer.
2. Yo Mike!
1. Well, “calling the block universe science” is significant progress from your initial “interesting metaphysical concept” but you’ve tossed in another unusual word choice with ‘prediction.’ The BU is an implication of SR, not a forecast or a prophecy.
2. A glance at Petkov? You said you were unable to conceive of an experiment to demonstrate the existence of the BU and Petkov provides exactly that in those papers I linked. I’d like to know what fault you’ve discovered in your glance at his logic though. If none, then his conclusion—that conducting the experiments confirming the kinematic consequences of SR demonstrates the existence of the BU—is valid. If you disagree, what is the flaw in his argument?
SR stands on its own. GR is irrelevant to the issue at hand. And, rather than GR breaking down, I’ve read that the center of a black hole doesn’t actually exist—the calculations yielding infinities means instead that spacetime itself has been abolished.
3. The QM argument of Petkov (and Stuckey too) isn’t one of “sparse existence.” Instead, it maintains that a detected QM object has always been at the spacetime coordinates where it is detected and always exists at those spacetime coordinates. No QM object is a worldline but, instead, is a collection of discontinuous worldpoints along the temporal axis of spacetime. All events in spacetime are fixed and unchanging. Nothing in the BU comes into or goes out of existence.
Another strange usage of the word ‘prediction’ here: “Future revisions [of GR?] could alter any currently untested prediction, including the BU one.” To repeat: The BU is a logical implication of SR, not a forecast or a prophecy.
4. Perhaps you’ve forgotten the contents of “Einstein’s Breadcrumbs.” All of your questions are answered and explained in “Breadcrumbs” and I believe all of them are now briefly answered in just the Abstract of “The Consequences of Eternalism.” For the full explanation of the answers you need to read the paper. There have been significant revisions since you last read a much earlier version.
Lastly, regarding your comments about my request for a blog post describing your beliefs about Time:
Smolin isn’t a Presentist. He believes in the BU because he’s a professional physicist who believes in relativity physics. In fact, my paper recommends his book Time Reborn for its lucid descriptions of SR, the RoS and the BU. Smolin has decided he wants his free will back but, as a principled scientist, he realizes that goal requires a replacement of relativity. That’s what his still unsuccessful Shape Dynamics effort is intended to do.
Regarding my impression that “you clearly reject the reality of the block universe”: There are only two choices regarding the nature of Time. Either Time is 1) an objective flowing present mechanism or 2) the unending temporal dimension of spacetime. If you “generally accept the account from” SR (not GR) then you apparently accept ‘2’ … the existence of the BU, rather than considering it just a possibility.
I believe the BU’s consequences for our understanding of the human condition are remarkably profound and disruptive and I’d appreciate the contributions of any and all thoughtful persons in the effort to understand them further and to contemplate an answer to the question that it poses: “What should we do?”
1. Stephen, I’m just not the kind of guy who makes statements of absolute certitude, even for propositions I argue for. So I fear the best you’ll get from me about the block universe is that it’s very plausible.
I downloaded your paper and will take a look at it.
1. Mike, I heartily agree that absolute certitude is never possible. However, that will always be the case, so because life is short I believe in drawing the best conclusions possible using the best information available in this era. On top of which, I find rigorous neutrality to be emotionally unsatisfying despite its philosophical purity.
As a for instance, I’m quite concerned about the impending devastation from human-caused climate change, even though only 95% (or whatever the figure is) of climate scientists concur. We will most assuredly destroy the world waiting for absolute certainty before acting. IMO of course … 😉
2. Mike, even though you may not agree with high confidence about the block universe being our reality, the number of physicists who believe it is certainly compelling. You might be thinking it’s a somewhat probable to highly probable proposition, perhaps 70% to 90%. The knowledge that Einstein believed in the reality of the block universe would suggest the percentage tends towards the higher of that range. While we both realize that these sorts of beliefs can’t be 100% certain, the possibility is certainly high enough that we can meaningfully ask the question, “What does it mean if it’s true?”
I wrote in “The Consequences of Eternalism” (TCoE), “… how do we resolve our personal experience—the dynamic-view—with the unchanging reality of the universe—the block-view?” I believe TCoE presents a credible explanation that consciousness resolves those two views.
Beyond that, TCoE is asking what the block universe means for our conception of the human condition. I don’t know if you’ve yet had time to read TCoE but if/when you do, I hope you’ll see that the human condition revealed by the block universe is hugely and disturbingly different from anything ever conceived, with ramifications for our conception of ourselves far beyond Einstein’s “eternity of life.”
Because of those “huge and disturbing differences” I’d like to engage other thinkers to consider the explanations and ramifications I’ve identified. Each new reader of TCoE brings the possibility of learning other’s views—evaluations of the ideas I’ve presented and creative thinking that would move beyond where I’ve gone. In that regard, I’d much appreciate learning your thoughts. Perhaps your circle of acquaintances could enlarge that learning potential, so I would gratefully appreciate your passing it on if that’s possible. TCoE has been read by a physicist (Stuckey, who found no fault from a physics perspective) and a few philosophers, both amateur and professional, including Schwitzgebel. Frankly, Mike, the impression I get is that it’s more comfortable to turn away and take refuge in the familiar—it’s much easier to ignore the peril to our conception of ourselves than confront the ramifications of the block universe head-on.
3. Stephen,
On having high confidence, my assessment is that the block universe is plausible. I might even go so far as to say highly plausible. It’s the logical consequence of relativity. But I perceive that our only justification for it is that logic. We can’t test it directly. As a result, we can’t establish it as reliable knowledge in the same sense we can for special and general relativity, or other successful scientific theories.
The problem with logical conclusions is they’re only as good as the knowledge foundations they’re built on. If evidence were found for an actual physical wave function collapse per something like GRW, I think it would undermine the block universe. We also don’t know what a successful theory of quantum gravity might do. And we simply don’t know what else we don’t yet know.
Sorry to admit I haven’t had a chance to read through your paper yet. I did scan the conclusion section just now to see your points about the human condition. My own philosophical views lean Epicurean (in the ancient prudent-hedonist sense, not the modern gourmet one), so I’m not that far from what you discuss.
But it does seem to me that, even if a changing dynamic reality only exists in our consciousness, I can’t see that we can escape it. We still have to live in this dynamic world. Ignoring it seems to produce painful consequences. We don’t have the option to simply stop making the best decisions we can on our guesses about the future, and instead act as a static pattern in the block universe. That makes the block universe an interesting thing to ponder intellectually, similar to all the emergent classical worlds that are also plausible from the MWI, but neither strikes me as having any significant effect on how I live my life, at least not based on what I currently know.
Anyway, I’ll try to go through the whole thing sometime soon. Maybe some of your arguments will sway me.
4. I appreciate your thoughtful response Mike. I suspect, however, that we’re as likely to prove a physical wave function exists as we are to construct a time viewer, a gadget that would reliably and repeatedly fetch information from a second in the future. That gadget would prove the future exists and confirm the block universe implication. The wave function experiment and the time viewer are both improbable science fiction, but at least the time viewer case can be imagined.
I suspect you believe in flowing time, Mike, because everybody does. After all, it’s a cultural belief that’s learned from our earliest toddler years, although, of course, not explicitly. Lakoff and Johnson’s Philosophy in the Flesh devotes an entire chapter to time metaphors. The major time metaphoric group is called “The Moving Time Metaphor.” They provide these (and many more) examples that illustrate how we learn to believe in flowing time simply by learning and using language:
“The time for action has arrived. The deadline is approaching. … Thanksgiving is coming up. The summer just zoomed by. Time is flying by.”
And, of course, it then just feels that way. So let’s briefly analyze our flowing time belief to determine if it’s a plausible belief (i.e., reasonable, probable) with the same scrutiny we used to determine the plausibility of the block universe implication. Let’s start out at 100% plausibility for flowing time. The belief is clearly rooted in perception—and we know how naïve realism usually works out … down to 90%? No one is able to define ‘time’ or flowing time—no one knows what it is … down to 50%? No one has suggested a mechanism by which the present is continuously ‘updated’ … down to 30%? As is customary, we turn to science to investigate and learn that physics cannot find an objective flowing time in the universe or any “moving now” … down to 10%? 0%?
Doing that intellectually honest assessment of our two choices seems to lead to a conclusion that the timeless block universe is far more plausible than flowing time, regardless of whether it’s highly plausible or not. Mike, your stated downside for the block universe belief—the inability to test for it directly—applies to both proposals. This comparative plausibility analysis is compelling and suggests that we must take the block-view seriously.
Aside from its explanatory value in physics, I believe the block-view can have a significant effect on how we live our lives even though we cannot escape the dynamic-view of consciousness, as you put it.
The major implication of the block-view is that we should realize that our lives are constructed. No one is the choreographer of his life so no one should be judged at fault for anything. No one should be judged at all. Compassionately helped—decidedly yes, to the extent possible. Judged?—never. Our ‘selves’ are constructions as well, so we should stop being so enamored of the one we experience and valuing it over the selves of others. Most importantly, in my view, all suffering is eternal so we should do everything possible to avoid inflicting unnecessary suffering and hurting others’ feelings. Some suffering, like symptoms of illness and post-surgical pain, is unavoidable and even beneficial, but should be reduced as much as possible.
But, since we know there is no free will, how can we achieve improvements? We know that we can change the way our brains unconsciously evaluate action possibilities with memes. Successfully implant a meme in childhood—that all suffering is eternal, for instance—and behaviors change. The saving grace, if there is one, is that the future is always unknown, so we have no reason to be discouraged.
That’s some of my thinking so far, Mike. I could use some help.
9. Reality is the most familiar thing there is, and you can quote me on that.
You hard science guys have your heads screwed on all wrong, often.
To living things, to beings that experience, and to language users, ‘the world’ is as much ‘us’ as it is ‘other’.
And I thought you appreciated Dennett?
1. In Dennett’s Intentional Stance there are three basic approaches for us to take: the physical, the design, and the intentional. All three are “reality” and compatible. Our “naive” perceptions involve all three of these, and especially the intentional. Too much intentional initially, but still “elbow room” for it now. He uses Conway’s “Life” to suggest how more complex vocabularies and ‘objects’ are possible and real.
So, yes, our naive perceptions are accurate, more or less, which is about what you can say for the physical stance too. In ‘fact’, you and I communicating, now, is based in/on intentionality, as much as design and, in an attenuated sense, physics too.
Also, our “naive perspective” has not done so poorly in terms of evolutionary success. It’s like Dennett’s reference to the old quip, “If I’m so dumb, how am I so successful?”
I hate the way you throw around the term “reality”. I guess you have the key to it? Pity us, we the naive, so out of touch! (And yet still here.)
Please explain all I have missed. Seriously.
1. Maybe I can find some common ground here by noting that we are in tune with the affordances and evolutionary threats in our environment. We’re in tune with the portions of reality and on the scale we need to be to survive in day to day life. (Although in the modern world, we face threats our ancestors never faced, threats we need scientific insights to succeed against.)
I do use the word “reality” to refer to everything that is real, and that is far more vast than what we can perceive. I’m not sure what other word we could use.
Anyway, if the above isn’t sufficient, this is probably an area where we’ll just have to agree to disagree.
2. “So, yes, our naive perceptions are accurate, more or less, which is about what you can say for the physical stance too.”
I tend to agree, and I’m not sympathetic to views that suggest our mental models and perceptions are “off-center” from reality. I think, to the contrary, they are centered, but approximate and incomplete. I think of our perceptions as something of a wire-frame compared to the fully rendered version.
It seems fashionable among some to say we’re wandering around mired in delusion and illusion, but exactly as you quote, “If I’m so dumb, how am I so successful?” Our perceptions have allowed us to be extremely successful.
1. Naïve perceptions are accurate? Hmm. Not sure about that.
What looks like a table is mostly empty space reflecting light waves that my eye receives and my brain interprets. It isn’t actually what it looks like. It isn’t really red or blue or white. That is in my brain. What it looks like may be useful for my brain to make use of it for supporting plates while I eat.
2. I’m not sure if you’re responding to me or GregWW? The phrase you cited wasn’t mine.
As far as what I said, the keywords are “off-center” (versus centered) and “wire-frame”.
3. Greg, regarding your statement “… our naive perceptions are accurate, more or less”:
Our perceptions are an internal representation of sensory events, a felt simulation of a very narrow range of the events in the world. The external world and our internal representation based on those events are not at all alike, as I pointed out in my earlier comment in the cases of light wavelengths vs. color and compression waves vs. sound. We do not see photons—we see light and color. We do not hear compression waves—we hear sound. Those are only two examples from our perceptual range, but our entire perceptual imagery is simulative and, strictly speaking, not at all ‘accurate,’ in that we do not perceive the world as-it-is.
We do, however, tend to externalize our simulated version of the world and believe it to be the actual world. That’s naïve realism. The word ‘naïve’ refers to the human tendency to believe that we see the world around us objectively when we, in fact, do not. Of course our internal simulation of the world has evolved to be successful. In very many cases, if the experienced simulation fundamentally misrepresents the objective reality, you do not reproduce. You die.
1. The world “as-it-is” sounds like a very metaphysical position. Then add in your contention for a consciousness ‘that is all the consciousnesses of all things that have consciousness’ and it sounds like, well, not science and maybe religion.
My comments have been based in the idea that we often fall prey to splitting “the world” too drastically into external and internal, real world and our representations of it, subjective and objective.
2. Greg, I’m using the phrase “the world as-it-is” to refer to the world of phenomena as revealed by scientific investigation, which has demonstrated that the external world and our conscious representation/simulation of it are drastically different. I believe that’s an uncontroversial conclusion.
10. I think it’s mildly comforting.
When someone dies, they haven’t really ceased to exist. They are just living in another part of spacetime. It’s sad from our point of view, because we can never interact with them again and we will miss them, but they’re still “out there” in a sense. I am sorry for myself and their friends and family for losing them, but I am not quite as sorry for the deceased. Their existence has not been erased, even though it may have been shorter than might have been hoped.
To put it another way, I am not better off than Darwin, even though I am alive and he is dead. I’m just living at a different time. I may well die younger than he did, so if more life is better than less, he is better off than I.
1. I can see how that might be comforting. It doesn’t really do much for me, but if you’re contrasting it with the view that only the present exists, I can see how the alternative, that the past is utterly gone, can seem a stark and uninviting notion.
On the other hand, not everything in the past or present is comforting. And there’s something to be said for the stance that we have at least some control over the future. I don’t see those views as necessarily incompatible, but it takes mental effort to reconcile them.
11. Yah, like you just said above, and like I said earlier. Science is a human “instrumental” point of view. It is always in a relation to us, “to how we handle things on a day to day basis.” It’s no better than an art, but useful in a different and Limited Way.
Gee, I’m sorry for you that you will never get outside of the universe and see it all at once. That means you are not God, as if gods ever did exist. You are just stuck with a “participant’s point of view” and from there “reality” is as familiar as it is “strange”!
Sorry, don’t mean to sound snooty, but this is all just kind of Kantian, post-Kant stuff.
1. Me? Spam?
I would never suggest that belief in the block universe could improve your love life … although there is Wells’ very suggestive “rigid universe” … 😉
And I much dislike that canned mystery meat product too.
12. I’m not afraid of death, I’m afraid of dying a long, agonizing, and painful death. When I was old enough to understand that everybody dies eventually, the idea of a long and painful dying process kept me awake all night, not death itself.
I prefer a peaceful one.
1. Hi Linda,
I’m with you there. A quote often attributed (probably misattributed) to Mark Twain: “I do not fear death. I was dead for billions of years before I was born and did not suffer the slightest inconvenience from it.”
That said, even if I knew my death would be painless, if I knew it was imminent, I’m pretty sure I’d be in serious distress.
2. Linda, I also find the prospect of a “long, agonizing, and painful death” abhorrent. But, having just recently re-read Awakenings by Oliver Sacks, I was reminded that many people live a long, agonizing and painful life. You may have read the book (or seen the movie) in which sufferers of severe Parkinsonism following the encephalitis lethargica epidemic a century ago lived for years, and sometimes decades, trapped in excruciating physical positions with additional painful effects like oculogyric crises—“forced deviations of gaze”—where the eyeballs rotate to extreme positions. Gruesome pain.
But years and decades of suffering can also be found in sufferers of schizophrenia, child abuse, people (usually women) trapped in abusive and destructive relationships, cases of deep depression and so on. And on and on and on, unfortunately.
When I initially began to understand Einstein’s “eternity of life” I was horrified to realize that, in addition to its possibly comforting effects, it also meant that all suffering is eternal—it’s re-experienced endlessly. When I wrote “The Consequences of Eternalism” I was partly motivated by the idea that if everyone learned about our eternal suffering, then humanity as a whole might decide that its highest priority was to reduce suffering of all kinds. But, of course, sad to say, that’s just a childish dream.
Oh yes. Very horrifying.
I agree with the whole “long agonizing life” part.
I always say there are fates worse than death. And the Parkinsonism thing you mentioned sounds like one of them.
13. There is now. Now is where everything is at the moment.
There is history. History is where everything was previously.
There is speculation, speculation as to where everything will be soon.
Everything can only be in one place at one time.
There is not room for everything to be everywhere at the same time.
Therefore, the notion of a block universe, where stuff is where it was and where it is and where it will be all at the same time, is false.
1. Hey Marvin. Good hearing from you!
I think your reasoning makes sense in a Newtonian universe, but has issues in one ruled by Special and General Relativity. You might want to check out the videos in the next post by Matt O’Dowd.
He covers why it’s more complicated than it appears. It also gets at your other comment. For example, General Relativity establishes that gravity is not action at a distance. (It was under Newton, which many of Newton’s contemporaries criticized his theory for.) Under GR, gravity is the warping of spacetime, which propagates at the speed of light.
14. I was curious, so I did the math while watching Nicole Wallace today. Assume Blair has a relative velocity to Alex of 300 m/s (1080 km/hr, 671 mi/hr), which is pretty fast but doable. Assume c=3×10^8 to make the math simple.
That gives a γ of 1.0000000000005 — essentially one. The frame shift at 1 LS is 1 micro-second. (For reference, the Moon is 1.3 LS away.) If we multiply by 10^6, then there is a 1-second frame shift at 1-million LS. Which is 277.77 LH, or 11.57 LD, or 1.8×10^11 miles, or just under 2000 AU (just at the inner edge of the Oort Cloud).
Whether that frame shift is 1-second into the “past” or “future” depends on Blair’s direction relative to Alex. Of course, Alex has the opposite view as Blair, because both views are relative, correct, and virtual (and about events over 11 light days away).
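For anyone who wants to check those numbers, here's a minimal Python sketch of the same arithmetic. It just applies the relativity-of-simultaneity offset dt = v·x/c² with the same rounded c; the variable names are purely illustrative:

```python
# A minimal sketch of the arithmetic above, using the simultaneity offset
# dt = v*x/c^2 and the rounded c = 3e8 m/s assumed in the comment.
import math

c = 3.0e8          # m/s (rounded, as above)
v = 300.0          # m/s relative velocity between Blair and Alex

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma = {gamma:.13f}")                 # ~1.0000000000005

light_second = c                                # metres in one light-second
dt_at_1LS = v * light_second / c**2             # offset at 1 LS
print(f"offset at 1 LS = {dt_at_1LS:.1e} s")    # 1e-6 s

x_for_1s = c**2 / v                             # distance for a 1-second offset
print(f"{x_for_1s / light_second:.0f} light-seconds")          # 1e6 light-seconds
print(f"{x_for_1s / (light_second * 86400):.2f} light-days")   # ~11.57
print(f"{x_for_1s / 1.496e11:.0f} AU")                          # ~2000
```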
15. Here’s a way to understand why SR does not mean the “future” is “co-real” because of how the surface of simultaneity undergoes a frame shift due to relative motion:
Firstly, per SR, motion towards distant events shifts them into the apparent future of a frame at rest relative to those events. Motion away from distant events shifts them into the apparent past of the rest frame. (One implication of this is that distant events with space-like separation — i.e. no causal connection — do not have a definite time order. Motion towards them makes the more distant events appear to happen first; motion away makes the closest events appear to happen first. In the rest frame, the events appear simultaneous.)
Secondly, also per SR, anything moving at light speed doesn’t experience time. The motion vector and simultaneity vector coincide, so all points along the (massless!) particle’s path are simultaneous. From the POV of something moving at c, the universe moves past it at c, which means the path has zero length.
Now consider what this means for a photon that leaves a star 10 LY away. From our perspective, we know that a photon that arrives today left that star 10 years ago. It spent 10 years in flight while the universe evolved for 10 years. We know that photon is 10 years old.
But from the photon’s POV, the moment it begins its journey, it sees its eventual destination as being zero distance away and, therefore, simultaneous with it. And, indeed, the Lorentz shift for something moving at c towards Earth shifts the surface of simultaneity such that Earth 10 years later (relative to the star) is simultaneous with it.
This doesn’t mean that 10 year future is real when the photon is created 10 years ago. The surface of simultaneity is virtual. The universe evolves for 10 years while the photon is in flight, so when it finally arrives, sure enough, it’s simultaneous with today.
This is all textbook SR, and it illustrates how simultaneity is virtual.
We can extend this to a massive object moving just below c (also from a star 10 LY away). Imagine it’s moving very close to c, though. Then it will arrive somewhat later than 10 years — say it takes 11 years to make the trip (one year longer than light). The surface of simultaneity will also shift symmetrically. I haven’t done the math, but say, when the journey begins, the velocity shifts the surface 9 years (one year less than for light).
If we imagine this fast object arriving here today, it left the star eleven years ago. When it left, the virtual simultaneity was +9 years, putting it at today’s -2 years, so the object initially saw a moment about two years in our past as “simultaneous” with its departure. During its 11-year trip, that point shifts forward until, when the object arrives at Earth, it has shifted up to the current moment and the object is simultaneous with its destination.
Again, we know the object actually started 11 years ago and the universe evolved for 11 years during that time. When the photon began 10 years ago, or when the object began 11 years ago, today did not exist and was not “co-real” with the star 10 or 11 years ago.
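To put a number on the guess above, here's a minimal sketch in units of years and light-years (so c = 1); the figures are illustrative only. For light the departure-time shift is the full 10 years, and for the 11-year traveller it comes out to roughly 9 years, as assumed:

```python
# Simultaneity shift dt = v*x/c^2 at departure, with c = 1 (years, light-years).
x = 10.0            # light-years to the star
trip_time = 11.0    # years for the massive object (light takes 10)
v = x / trip_time   # ~0.909 c

shift = v * x       # years; for light (v = 1) this would be the full 10 years
print(f"v = {v:.3f} c, shift at departure = {shift:.2f} years")
# -> about 9.1 years, i.e. the traveller's departure "now" on Earth sits
#    roughly 2 years before its eventual arrival date.
```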
1. Wyrd, these three excerpts (in two separate comments) aren’t from the textbooks which you recommended, but are from the writings of professional physicists. They don’t concur with your concept of virtual simultaneity (a concept I’ve not encountered anywhere) and virtually real objects somehow becoming really real objects. It appears Paul Davies has provided the only possibility for your viewpoint to be correct—you actually have to be a solipsist, in which case Mike Smith and all of us commenting on his blog don’t exist.
I’ll request, once again, that you provide credible references from professional physicists to support your “virtual simultaneity” views. Three supporting references would be fine.
1. Physicist William Stuckey, Beyond the Dynamical Universe:
As a consequence of RoS, consider an observer Alice in A passing an observer Bob in B.
Except for one another, Alice and Bob will disagree on who exists simultaneously with them at that instant of time (call it ‘to-day’); people at rest with respect to Alice will exist simultaneously with her ‘today,’ while those at rest with respect to Bob will exist simultaneously with him ‘today.’ According to SR, the people in Bob’s plane of simultaneity will exist with people in Alice’s past and future, and vice-versa. So, Bob and Alice exist together ‘today’ and people in Bob’s ‘today’ exist together with people in Alice’s ‘tomorrow’ and ‘yesterday.’ Likewise, people in Alice’s ‘today’ exist together with people in Bob’s ‘tomorrow’ and ‘yesterday.’ If there is no empirical means of discrimination, then both Alice and Bob are justified in their designations of who exists with them ‘today,’ so their pasts and futures are as real as their presents
2. Physicist Paul Davies, About Time
If two events occur at different places (e.g., one on Earth, another in Andromeda), then the time sequence of the two events can be reversed, but only if the two spatially separated events occur close enough in time so that light can’t get from one to the other in the duration available. Consequently there can be no causal connection between the events, because, according to Einstein, no information or physical influence can travel faster than light between the events to link them causally. So reversing the time order in this restricted case isn’t serious: it can’t reverse cause and effect, producing causal paradoxes, because the events concerned are completely causally independent. However, this limited ambiguity in the time order of spatially separated events does have an important implication. If reality really is vested in the present, then you have the power to change that reality across the universe, back and forth in time, by simple perambulation. But, then, so does an Andromedan sentient green blob. If the blob oozes to the left and then the right, the present moment on Earth (as judged by the blob, in its frame of reference) will lurch through huge changes back and forth in time.
Unless you are a solipsist, there is only one rational conclusion to draw from the relative nature of simultaneity: events in the past and future have to be every bit as real as events in the present. In fact, the very division of time into past, present and future seems to be physically meaningless. To accommodate everybody’s nows—Ann’s, Betty’s, the green blob’s, yours and mine—events and moments have to exist “all at once” across a span of time. We agree that you can’t actually witness those differing there-and-now events “as they happen,” because instantaneous communication is impossible. Instead, you have to wait for light to convey them to you at its lumbering three hundred thousand kilometers per second. But to make sense of the notions of space and time, it is necessary to imagine that those there-and-now events are somehow really “out there,” spanning days, months, years and, by extension, … all of time.
3. Lee Smolin, Time Reborn
If the direction of the laws of nature can be reversed, then there cannot, in principle, be any difference between the past and the future—and the fact that we have very different relationships with the past and the future cannot be a fundamental property of the world.
We’ll concern ourselves with two concepts from special relativity. The first is the relativity of simultaneity. The second, which follows from it, is the block universe. Each was a major step in the expulsion of time from physics.
Let’s begin by agreeing that the present is real. We may not be so sure that the future or the past are real—indeed, the point of this argument is to find out how real they are—but we have no doubt that the present is real. The present consists of many events, none of which is more real than another. We don’t know whether two events in the future are real, but we will agree that if two events take place at the same time they’re equally real, whether that time is the present, past, or future.
If we are operationalists, we have to talk about what observers see. So we assert that two events are equally real if they’re seen by some observer to be simultaneous. We also will assume that being equally real is what is called a transitive property; that is, if A and B are equally real, and B and C are equally real, then so are A and C. The argument then exploits the fact that the present is observer-dependent in special relativity. Pick any two events in the history of the universe, one of which is a cause of the other. Let’s call them A and B. Now there will always be some other event X that has the following property: There is an observer, Maria, who sees A to be simultaneous with X. And there is another observer, Freddy, who sees X to be simultaneous with B. …
To understand why X must exist, you need to know not only that simultaneity is relative but that it is as relative as possible, in the following sense: One consequence of Einstein’s postulates is that if two events take place simultaneously for some observer, all other observers will judge them to be not causally related. It’s also true that if two events aren’t causally related, there will be some observer who sees them as simultaneous, thus simultaneity is as relative as it possibly could be, while respecting causality.
If B is far in A’s future, then X must be far enough away from both so that no light signal could travel from A to X or from X to B. But the universe that Minkowski describes is infinite, so this is no problem.
Now we can reason as follows: By the criterion I gave, A is as real as X is. But B is also as real as X is. So A and B are equally real. But A and B are any two causally related events in the history of the universe. So if there is any sense in which an event in the universe is real, that reality is shared by every other event. There is thus no difference between present, past, and future. What is real is all the events of the universe, taken together. So we conclude that the reality of the world consists in its history taken as one. There is no reality to moments of time or their flow.
What’s powerful about this block-universe argument is that to entertain it you need only believe that the present is real; the argument then forces you to believe that the future and the past are as real as the present. But if there is no distinction between present, past, and future —if the formation of the Earth or the birth of my great great great granddaughter are as real as the moment in which I write these words—then the present has no special claim to reality, and all that’s real is the whole history of the universe.
… the philosophically interesting features of special relativity do extend to Einstein’s theory of general relativity. The relativity of simultaneity remains true—and, indeed, is extended. So the philosophical argument I just outlined still holds and leads to the same conclusion: that the only reality is the whole history of the universe taken as one. It also remains true in general relativity that all the information that’s observer-independent is captured in causal structure and proper time. If the history of the whole universe is represented in general relativity, the result remains the block-universe picture.
1. Trust me, I’m familiar with the argument. As I’ve said repeatedly, I think it’s wrong. I’ve explained why. The example above illustrates exactly why. Please stop parroting things you read in books at me and read and consider the example I presented.
Do you deny that the photon was created 10 years ago and took 10 years to make the journey? Do you understand that, when the photon begins its journey 10 years ago, it sees the present day as simultaneous with it?
However, suppose at the 5-year mark a Vogon Constructor Fleet destroys the Earth. When the photon arrives five years later, there is no Earth, and it can’t know that until it gets here.
The only explanation is that simultaneity is virtual, which is exactly what SR says. It explicitly says we can make no statements about events outside our light cone. The photon can’t know the Earth is here until it gets here.
2. Wyrd, I’m not “parroting” professional physicists—I’m quoting them word-for-word. And the number of physicists who fully concur with the three I’ve quoted is legion.
We must conclude that not a single professional physicist agrees with your “Virtual Simultaneity” theory, since you’ve not been able to provide, or even “parrot” any such references. Wyrdian physics apparently stands alone, completely unsupported by the relativity physics community. Wyrdian physics is “textbook SR”? Please cite the textbook titles.
You wrote that ‘time’ is “something that flows” (i.e., Newtonian time) or something we are flowing through (i.e., no one’s time). Wyrdian physics doesn’t care that physics cannot find any evidence that a (zero-sized, therefore unitless) flowing-something exists. Neither physics nor biology can find the ‘now’ you claim is a perceived manifestation of that flowing-something. What sensory biology detects the flowing-something or its ‘now’?
In relativity physics, as you know, ‘time’ is precisely the temporal dimension of 4-dimensional spacetime. Spacetime is defined (per Geroch) as: “the collection of all possible events in the universe—all events that have ever happened, all that are happening now, and all that will ever happen; here and elsewhere.” For a graphic of a light cone and the ‘Elsewhere’ which Wyrdian physics claims does not exist, please see this Wikipedia diagram:
Your argument that nothing exists outside our light cone (i.e., that the Elsewhere isn’t co-real—doesn’t exist) appears easily falsified. Consider the expansion of our Hubble Volume by one light-year per year. Each year, a year’s worth of light from far distant objects finally reaches us and renders those light-years-distant objects observable. Wyrdian physics maintains that if we can’t observe something (the Elsewhere) it doesn’t exist—that those newly observable objects were not real before the light from them reached us. Since non-existent, not real objects cannot emit light, from what existing physical object did the newly arrived light originate?
All of Wyrdian physics’ references to spacetime or any spacetime-related concepts, like the spacetime interval and its mathematics, are invalid because Wyrdian physics’ description of the universe is strictly Newtonian: three spatial dimensions plus a flowing-something—Newtonian ‘time’ which “… of itself, and from its own nature, flows equably without relation to anything external.” The claim that a non-existent flowing-something flows along the temporal dimension of spacetime, let alone flows at the speed of light, is inconceivable in relativity physics. Nothing moves in spacetime. As one expert in SR wrote, “From a ‘happening’ in three-dimensional space, physics becomes, as it were, an ‘existence’ in the four-dimensional ‘world’.” So Wyrdian physics disagrees with Albert Einstein, the original SR physicist!
I’ll consider your example of “what a photon knows” after you’ve convincingly identified and explained the flaws in the co-reality descriptions provided by Stuckey, Davies and Smolin. Their explanations cannot be determined to be wrong simply because your explanation disagrees—you need to explain the flaws in their descriptions. I trust you agree with the transitive principle: if A and B are equally real, and B and C are equally real, then so are A and C. Or maybe you don’t.
1. Mike, what syntax in a comment will fetch and insert a graphic inline, like the Wikipedia one I referenced? TIA …
1. Stephen, normally if you put a URL to the image on a line by itself, if WP recognizes that type of image, it will inline it. The URL you put was to the Wikipedia file entry, although it ended with an image file extension, which apparently confused WP. I edited it to point to the actual image. Let me know if it’s not what you intended.
2. Mike, from the Wikipedia “Spacetime” article where the graphic appears, when I right-click on the image using Firefox, I can choose either “Copy link location” or “Copy image location”. The File: one is the link location and the Wikimedia one is the image location. I learn something new every day … 😉
1. Mike, I didn’t feel any heat. But the use of the word ‘parroting’, which means “to repeat exactly what someone else says, without understanding it or thinking about its meaning” might be running a mild temperature. 😉
I never mean to belittle or offend anyone, but if I’ve gone over the line somewhere please let me know so I can apologize.
2. I said parroting because, as far as I can tell from your responses, you don’t seem to me to have a deep understanding of SR or of what I’m saying. You just repeat the same argument that I’ve assured you repeatedly that I fully understand and don’t agree with. But seeing why I don’t requires a grasp of SR that goes beyond pop science books. It requires analysis, not quotes.
The scenario above illustrates my argument nicely. You can demonstrate your facility with SR by engaging with it.
2. “Wyrd, I’m not ‘parroting’ professional physicists—I’m quoting them word-for-word.”
Which is exactly what “parroting” means.
Did you notice the diagram you linked to labels the time axis ct? Above you claimed that I was ‘misconstruing’ the mathematics of SR when I discussed it. As you see from your own link, I know what I’m talking about.
And you clearly don’t, so this “debate” is over. I’m out. Believe whatever you need to.
1. Sorry to see you bow out with issues in the air, Wyrd. I was looking forward to your references. Looking for references myself, yesterday I Google’d “virtual simultaneity” and a few variants, but only turned up someone who believes the aether is real, a complex, equation-riddled discussion of quantum wave behavior, a few artists and several links with the two words separated.
Yes, the time axis in the graphic is correctly labeled ct. As the Wikipedia article on spacetime makes clear, “The constant c, the speed of light, converts time units (like seconds) into space units (like meters). Seconds times meters/second = meters.” So ct is a measure of distance—its appearance on the time axis doesn’t mean that anything is moving.
Until next time, amigo.
This is the sort of reply that makes me think you don’t understand the material. This point arose because I made an aside about how, when we’re not moving through space, we’re moving through time at the speed of light.
And the thing is, we always move through spacetime at the speed of light. When “standing still” all that motion is along the time axis. In the diagram you linked, that black vertical line is the observer’s motion through spacetime — specifically along the time axis at the speed of light.
As you begin to move through space, you move less through time. But you always move at c. That’s what’s behind the spacetime interval. Your total speed through spacetime is constant, but it can be split between time and space. If you could move through space at c, then you’re no longer moving through time at all; your spacetime interval is zero.
(Again, this isn’t any special physics on my part; it’s textbook SR.)
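For reference, the standard way this claim is usually written down (a sketch using the usual conventions; nothing beyond textbook SR):
\[
ds^2 = c^2\,dt^2 - dx^2, \qquad c^2\,d\tau^2 = ds^2
\;\Longrightarrow\;
c^2 = c^2\Big(\frac{dt}{d\tau}\Big)^2 - \Big(\frac{dx}{d\tau}\Big)^2 .
\]
At rest, dx = 0 and dt/dτ = 1, so the whole fixed budget c sits along the ct axis; as |dx/dt| approaches c, dτ approaches 0 and the interval along the path goes to zero.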
3. Wyrd, I can understand the confusion because of a crucial omission in the video, which mirrors your own. The confusion disappears with the realization that, although the three spatial dimensions are measured by rulers, the time dimension is measured by clocks, which both you and the video fail to mention.
One of the comments for that video referred to Brian Greene’s The Hidden Universe, so I located his description of the video’s content. In chapter 2, Greene writes:
Einstein proclaimed that all objects in the universe are always traveling through spacetime at one fixed speed—that of light. … We are presently talking about an object’s combined speed through all four dimensions—three space and one time—and it is the object’s speed in this generalized sense that is equal to that of light. … this one fixed speed can be shared between the different dimensions—different space and time dimensions, that is. If an object is sitting still (relative to us) and consequently does not move through space at all, then … all of the object’s motion is used to travel through one dimension—in this case, the time dimension. Moreover, all objects that are at rest relative to us and to each other move through time—they age—at exactly the same rate or speed. If an object does move through space, however, this means that some of the previous motion through time must be diverted.”
There’s the clarification that you and Sabine don’t mention: motion through time is measured by clocks. Clocks slow with acceleration—time dilation, meaning that objects moving through space relative to us age more slowly because their clocks tick more slowly. So the claim that we’re moving through time at the speed of light actually means—we’re all getting older at the rate our clocks tick!
This entire issue is irrelevant to any discussion of the block universe.
4. Your complaint is that no one explicitly mentioned time is measured by clocks? At this point you’re either trolling me or demonstrating a serious incompetence with spacetime physics.
Dude, the first sentence of your own quote reiterates the point, and the whole quote goes on to say exactly what I’ve been telling you. I don’t know why you think bolding “they age” demonstrates anything — moving through time means things age.
As you say:
“Clocks slow with acceleration—time dilation,…”
Firstly, acceleration is a different topic, General Relativity, which is out of scope here. It’s true that clocks, either under acceleration relative to an observer, or clocks in heavier gravity relative to an observer, run slower as seen by that observer. But that doesn’t apply here.
What you mean in this context is that clocks moving relative to a given frame run slower relative to that frame. Which is true, and what’s being explained here is why that’s true.
It’s true because we’re always moving through spacetime at light speed. If we’re at rest relative to a frame, all that motion is through time. Thus we move through time — or age, if you prefer — or our clocks tick, if you prefer — at light speed.
Motion through space deducts from that motion through time — this is why clocks of moving objects appear to run slower. Their (relative) motion through space deducts from their (relative) motion through time.
If your motion is entirely through space, i.e. at light speed, then you have no motion through time.
This, as I’ve said over and over, is SR 101, textbook SR, and you don’t seem to have any grasp of the material. (At the same time you presume to say Dr. Hossenfelder and I are confused, which smells like trolling to me.)
“This entire issue is irrelevant to any discussion of the block universe.”
Yes, it’s a sub-thread from when you said I misconstrued spacetime mathematics. Apparently one PhD. theoretical physicist isn’t enough to convince you. I suggest you Go Ogle [do I move through time at the speed of light] for more.
Maybe I can clear up another point for you: searching for “virtual simultaneity” isn’t likely to return relevant links because it’s not a term of art, like “proper time” or “spacetime interval” — it’s an understanding obtained from the material. What you do find in the text is that it’s not possible to make statements about points with space-like separation from you, and all points in the surface of simultaneity have space-like separation.
One last point:
Either you’re trolling by deliberately misrepresenting me or you’re showing that you don’t understand me at all. It should be self-evident I don’t have a Newtonian view. At this point, until you can demonstrate you understand the scenario I presented above, I don’t see how this moves forward.
5. Wyrd, my comments here are contributions to discussions to further understanding by exchanging viewpoints and to learn from those thoughtful and verifiable viewpoints that enlarge, refine or even replace one’s own conceptions. There’s no contest to see who is right. I infer that you’ve become emotionally engaged, however, based on the ad hominems in your replies: parroting, trolling, incompetent and ignorant, for instance. For me those judgments cause no offense (a waste of time and energy), but it would be helpful for our discussions if you could restrain the judgmental vocabulary. I’ve delayed posting this comment for a few days hoping you’ll be able to read it dispassionately and try to understand what I’m trying to communicate. If my sometimes clumsy scribblings cause confusion, just ask me to clarify or rephrase.
Lightspeed: The statement that “we travel through time at the speed of light” certainly gets one’s attention in the “Golly!” sense, so I can understand why it’s widely used in YouTube videos. Note the “Golly!” comment “Thank you for totally blowing my mind!” below Hossenfelder’s video. My point is strictly about the units of measurement her statement implies. Look up what the word ‘lightspeed’ or “speed of light” means and you’ll find “300 million meters per second” or “186,000 miles per second” or similar. Substituting that definition into the statement in question yields: “we travel through time at 300 million meters per second.” What I’ve been trying to explain is that the statement “we travel through time at ruler-units per clock-units” is literally incorrect because temporal motion is measured in clock-units alone.
I’ve never disputed the SR physics, just the literally incorrect statement. Below Hossenfelder’s video is a comment by Tudor Montescu: “Did you mean, by any chance, to say: ‘Do we travel through spacetime at the speed of light?’ (i.e. not through time). Because it was delta s / delta t = c. We instead travel through time at delta t / delta t = 1 second / second, as you yourself mentioned at one point.” An “object’s speed through time” is “the rate at which time elapses on its own clock,” as Greene wrote.
Block Universe First, I’d like to answer your question of 11/1: “If the BUH is correct, why haven’t our senses evolved to have some sense of it?” The answer is, “some sense of it” has evolved—it’s called consciousness. However, as I’ve pointed out several times (and again on 11/3), “Our perceptions are simulations of sensation events and they don’t correspond to the sensation events, the physical reality.” There is no color or sound in the world, for example. The events in the block universe are fixed and unchanging, but our simulated representation of the block universe is a streaming, movie-like experience. That’s how consciousness operates. But, just like our perception of a red rose doesn’t mean that the rose possesses color, our perception of a movie-like flow of the world doesn’t mean that the external world (or an imagined objective time) is flowing. The block universe—the block-view—is the reality revealed by SR. The flowing perception—the dynamic-view—is our perception of that reality. If all consciousness vanished from the universe the dynamic-view would vanish with it.
There are pathologies of consciousness, specifically one caused during the encephalitis lethargica epidemic a century ago, that slow, speed up and even stop that flowing presentation. When the flow of consciousness is stopped, the sufferer experiences the block universe directly but is unable to function. In several cases, the patients did not even appear to physically age, retaining a youthful appearance over a period of decades. This pathological condition might lend support from neuroscience to the physics view that the “flow of time” is not an objective reality. Based on autopsy results, the condition is believed to have been caused by viral damage to the midbrain’s substantia nigra which, when healthy, would seem to support the streaming continuity of consciousness.
6. BU and Virtual Simultaneity: Wyrd, I searched for “virtual simultaneity” in a way that the words could appear separate from each other and in any order. That search should have turned up references that agree with your viewpoint even without the explicit phrase.
You haven’t yet replied to my Hubble Volume question. I would think that your contention that our Hubble Volume contains everything that is real is likely not shared by astrophysicists. That contention is a corollary of your “light cone contains everything that is real” premise that nothing exists outside our light cone because we cannot know anything about objects until their light arrives. I suspect astrophysicists would explain that, as the Hubble Volume expands, we see objects for the first time that existed—that were real—in the presumed infinite universe prior to becoming visible. If you can locate any astrophysicists that support your contention, they might be able to rescue your virtual simultaneity hypothesis from falsification. Having knowledge about objects (light cone) and the reality of objects are two different things.
On Newtonian Time: To disambiguate, the word ‘time’ has two quite distinct usages. The first is the “flowing time” usage (aka Newtonian time, “time as experienced” and so on). This usage of the word ‘time’ is your “flowing something.” The second usage of the word ‘time’ denotes the temporal dimension of spacetime, in my view an unfortunate word choice because of the confusions resulting from equivocation. Moving forward, I’ll instead use the word ‘tempth’ (like length, width and depth) to refer unambiguously to the temporal dimension of spacetime.
Your statements about “time being fundamental” must be referring to the first usage—the Newtonian flowing time usage. Assuming that by ‘fundamental’ you mean either “existed prior to” or “something on which other things are based” (you’re welcome to clarify your meaning of ‘fundamental’), the idea that an objectively real flowing time is ‘fundamental’ in either sense of the word ‘time’ is, at best, an unverifiable philosophical belief. It’s true that a number of philosophers agree with your view that Newtonian flowing time is fundamental. But, as Einstein said to Henri Bergson, “The time of the philosophers is dead.”
On the other hand, since spacetime is a single 4-dimensional geometry, neither tempth nor any of the three spatial dimensions can be sensibly thought of as ‘fundamental’—no dimension of spacetime could have preceded the others or formed a basis on which the other dimensions exist. Tempth also does not and cannot ‘flow’—it’s a measurable extent, a dimension of spacetime.
So my statement about your conception of physics being Newtonian is based on your belief in time as a fundamental “flowing something” rather than a measurable extent. Perhaps my confusion results from your equivocation in your use of the word ‘time’—you’re mixing the flowing time usage with tempth in your remarks.
Wyrd, if you’re feeling agitated again, please reread this comment a few times and try to understand the meaning I’m communicating. Please ask questions if you would like additional clarification. And have a pleasant day! 🙂
7. “Look up what the word ‘lightspeed’ or ‘speed of light’ means and you’ll find ‘300 million meters per second’ or ‘186,000 miles per second’ or similar. “
Yes, 299,792,458 m/s. That’s how far a photon travels through spacetime in one second. Since a photon moves at light speed, all its motion is through space, and none through time, it covers that distance in one second.
An object traveling only through time, but not through space, moves through spacetime at the same rate, but all its motion is through time (at light speed, which is a velocity — the differential of distance over time).
“That search should have turned up references that agree with your viewpoint even without the explicit phrase.”
Once again, it’s something that emerges from understanding the material.
“You haven’t yet replied to my Hubble Volume question.”
Actually I kind of have. Just expand my scenario above, which concerns a location 10 LY away, to any distance you like. The same logic applies. I’ll repeat it below.
“Your statements about ‘time being fundamental’ must be referring to the first usage—the Newtonian flowing time usage.”
Right, but then wrong. Newtonian time is the idea of absolute time, and the notions of SR and GR don’t apply. I subscribe to time as described by GR. (FWIW, I do think time is axiomatic and existed in some fashion before the Big Bang, and I agree that’s a metaphysical view.)
This may answer your Hubble question, too: Consider the universe expanding from the Big Bang. Each particle, once formed, has a worldline and its own proper time, which depends on the local environment. The particles that ended up in, for instance, the Milky Way, have roughly similar worldlines and have aged (to use your word) roughly the same.
With some variation. The particles at the center of the Earth, because of the higher gravity, are about 2.5 years younger than those at the surface. For that matter, the tip of Mount Everest is ever so slightly older than the rest of the Earth. The differential is higher for Jupiter, and even higher for the Sun.
Given that, we can reasonably assume that the particles of a system, say, 50 LY away have aged roughly the same time as we have (give or take). Even though SR is explicit that we cannot say anything about events happening “right now” 50 LY away, it’s reasonable to assume that’s the case on the presumption those particles have had the chance to age up to now.
But note that it will take 50 years to receive a signal that we can then, and only then, use to retroactively state that such and such an event was simultaneous with a time 50 years ago. Note that when that signal leaves for Earth, it sees 50 years into the future as simultaneous (as well as all points along the path). But obviously the Earth has to age those 50 years before it gets here.
“…neither tempth nor any of the three spatial dimensions can be sensibly thought of as ‘fundamental’…”
When I say fundamental, I mean, among other things, irreducible. Spacetime is a unified geometry, yes, but it still reduces to X,Y,Z,T axes.
8. Wyrd, I should have replied here, but I mistakenly started a new thread, so please see my comment posted a few minutes ago. Thanks …
16. Wyrd, I’ve been busy discussing definitions as facts-of-the-matter on Eric Schwitzgebel’s blog, but I did wish to reply to your comment of 11/13.
I can understand your defending the “blow your mind” version because it’s what you originally mentioned telling your friends. I quoted Greene above, that we are “… talking about an object’s combined speed through all four dimensions—three space and one time—and it is the object’s speed in this generalized sense that is equal to that of light.” But when talking about motion only through time, we’re no longer discussing it in that generalized “all four dimensions” sense.
You wrote on the 13th: “An object traveling only through time, but not through space, moves through spacetime at the same rate, but all its motion is through time (at light speed, which is a velocity—the differential of distance over time)”, or the differential of ruler-units over clock units. But tempth is measured in clock-units … where is that distance you mention coming from?
On the BU and “virtual simultaneity” you wrote that “… it’s something that emerges from understanding the material.” Wouldn’t it be better if the notion were explicitly stated in the material rather than “emerging” from it? I’ve searched through several books on SR and cannot find any mention of anything that corresponds to your virtual simultaneity, which I believe you defined as:
But what you are denying is co-reality which is existence, not knowledge. Co-reality is what RoS is about. Using the equations of SR, events futureward of us on the timeline can be calculated to exist by distant observers moving relative to us (towards or away), even though neither of us can have knowledge of, i.e. information about them—please reread Greene’s example of the distant alien Chewy. The knowledge of those futureward events only arrives several generations later but then Chewy’s descendants can confirm that his SR co-reality calculations were correct. Note that in your examples of a photon 10 LY away, we have to wait 10 years to know of it, but the photon and the Earth are co-real at the start of your examples.
I have to conclude that your proof that the BU doesn’t exist is not convincing until, at a minimum, 1) you falsify Smolin’s co-reality logic (in my comment of 11/6) and 2) you provide credible physicist references confirming the validity of your virtual simultaneity views. As always, my conclusions about such issues are always provisional and subject to revision. I’d like to believe that applies to yours as well.
1. Hard pass. You’ve consistently ignored my argument, and it’s clear at this point it’s because you don’t understand it. Possibly you can’t understand it, so I see no point in wasting any more time on you.
17. Oddly coincidental and perhaps amusing, a very recent bottom-of-the-page quote from mathematician David Guaspari reads:
I agree with you wholeheartedly, Wyrd. I don’t and can’t understand your argument. I’ve looked for others who do and I can’t find any. Apologies for wasting your time.
1. You seem to be suffering from incontinent judgmentalism Wyrd. As I’ve explained, not being a physicist, I rely on the understandings of physics professionals when forming my own beliefs. My inability to understand your argument is not owing to limitations—it’s because your argument implies nonsense about what exists that conflicts with the physics community’s understanding of RoS and even light cones. I’ve provided their explanations, which you won’t address and cannot falsify … shall we interpret that inability as a limitation?
Have a Happy Thanksgiving Wyrd.
1. My judgement is based on my perceptions, and I stand by it. Everything I’ve explained is textbook SR that any physicist would agree with. The only even slightly debatable claim I’m making is that the future isn’t real.
Comments are closed. |
84d5d4392ec84760 | The Unitary Gas and its Symmetry Properties
Yvan Castin, Laboratoire Kastler Brossel, Ecole normale supérieure, CNRS and UPMC, Paris (France); Félix Werner, Department of Physics, University of Massachusetts, Amherst (USA)
The physics of atomic quantum gases is currently taking advantage of a powerful tool, the possibility to fully adjust the interaction strength between atoms using a magnetically controlled Feshbach resonance. For fermions with two internal states, formally two opposite spin states ↑ and ↓, this allows one to prepare long lived strongly interacting three-dimensional gases and to study the BEC-BCS crossover. Of particular interest along the BEC-BCS crossover is the so-called unitary gas, where the atomic interaction potential between the opposite spin states has virtually an infinite scattering length and a zero range. This unitary gas is the main subject of the present chapter: It has fascinating symmetry properties, from a simple scaling invariance, to a more subtle dynamical symmetry in an isotropic harmonic trap, which is linked to a separability of the N-body problem in hyperspherical coordinates. Other analytical results, valid over the whole BEC-BCS crossover, are presented, establishing a connection between three recently measured quantities, the tail of the momentum distribution, the short range part of the pair distribution function and the mean number of closed channel molecules.
The chapter is organized as follows. In section 1, we introduce useful concepts, and we present a simple definition and basic properties of the unitary gas, related to its scale invariance. In section 2, we describe various models that may be used to describe the BEC-BCS crossover, and in particular the unitary gas, each model having its own advantage and shedding some particular light on the unitary gas properties: scale invariance and a virial theorem hold within the zero-range model, relations between the derivative of the energy with respect to the inverse scattering length and the short range pair correlations or the tail of the momentum distribution are easily derived using the lattice model, and the same derivative is immediately related to the number of molecules in the closed channel (recently measured at Rice) using the two-channel model. In section 3, we describe the dynamical symmetry properties of the unitary gas in a harmonic trap, and we extract their physical consequences for many-body and few-body problems.
1 Simple facts about the unitary gas
1.1 What is the unitary gas ?
First, the unitary gas is a gas. As opposed to a liquid, it is a dilute system with respect to the interaction range b: its mean number density ρ satisfies the constraint
ρ b³ ≪ 1.
For a rapidly decreasing interaction potential V(r), b is the spatial width of V(r). In atomic physics, where V(r) may be viewed as a strongly repulsive core and a Van der Waals attractive tail −C₆/r⁶, one usually assimilates b to the Van der Waals length (m C₆/ℏ²)^(1/4).
The intuitive picture of a gas is that the particles mainly experience binary scattering, the probability that more than two particles are within a volume b³ being negligible. As a consequence, what should really matter is the knowledge of the scattering amplitude f_k of two particles, where ℏk is the relative momentum, rather than the detailed form of the interaction potential V(r). This expectation has guided essentially all many-body works on the BEC-BCS crossover: One uses convenient models for V(r) that are very different from the true atomic interaction potential, but that reproduce correctly the momentum dependence of f_k at the relevant low values of k, such as the Fermi momentum k_F or the inverse thermal de Broglie wavelength, these relevant low values of k having to satisfy k b ≪ 1 for this modelization to be acceptable.
Second, the unitary gas is such that, for the relevant values of the relative momentum k, the modulus of the scattering amplitude f_k reaches the maximal value allowed by quantum mechanics, the so-called unitary limit livre_collisions . Here, we consider s-wave scattering between two opposite-spin fermions, so that f_k depends only on the modulus k of the relative momentum. The optical theorem, a consequence of the unitarity of the quantum evolution operator livre_collisions , then implies
Im f_k = k |f_k|². (3)
Dividing by |f_k|², one sees that this fixes the value of the imaginary part of 1/f_k, so that it is strictly equivalent to the requirement that there exists a real function u(k) such that
f_k = −1/[u(k) + ik]
for all values of k. We then obtain the upper bound |f_k| ≤ 1/k. Ideally, the unitary gas saturates this inequality for all values of k:
f_k = −1/(ik). (4)
In reality, Eq.(4) cannot hold for all k. It is thus important to understand over which range of k Eq.(4) should hold to have a unitary gas, and to estimate the deviations from Eq.(4) in that range in a real experiment. To this end, we use the usual low-k expansion of the denominator of the scattering amplitude livre_collisions , under validity conditions specified in math_re :
f_k = −1/[1/a + ik − k² r_e/2 + …]. (5)
The length a is the scattering length, the length r_e is the effective range of the interaction. Both a and r_e can be of arbitrary sign. Even for 1/a = 0, even for an everywhere non-positive interaction potential, r_e can be of arbitrary sign. As this last property seems to contradict a statement in the solution of problem 1 in §131 of Landau , we have constructed an explicit example depicted in Fig. 1, which even shows that the effective range may be very different in absolute value from the true potential range b, i.e. r_e for 1/a = 0 may be in principle an arbitrarily large and negative number. Let us assume that the omitted terms in Eq.(5) are negligible over the relevant range of k, an assumption that will be revisited in §2.3. Noting k_typ a typical relative momentum in the gas, we thus see that the unitary gas is in practice obtained as a double limit, a zero range limit
k_typ b ≪ 1 and k_typ |r_e| ≪ 1, (6)
and an infinite scattering length limit:
k_typ |a| ≫ 1. (7)
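As a quick illustration of this double limit (a sketch, not part of the chapter; it simply evaluates the expansion Eq.(5) numerically), one can check how k|f_k| approaches the unitary bound 1 only when k|a| ≫ 1 and k|r_e| ≪ 1:

```python
import numpy as np

# s-wave scattering amplitude from the standard low-k expansion, Eq.(5):
#   f_k = -1 / (1/a + i*k - r_e*k**2/2)
# Unitary bound: |f_k| <= 1/k, saturated when 1/a and r_e are both negligible.
def f_k(k, a, r_e):
    return -1.0 / (1.0 / a + 1j * k - 0.5 * r_e * k**2)

k = 1.0  # typical relative momentum (arbitrary units)
for a, r_e in [(1.0, 0.0), (1e4, 1.0), (1e4, 1e-3)]:
    ratio = abs(f_k(k, a, r_e)) * k          # 1.0 means the unitary limit is reached
    print(f"k*a = {k*a:8.1e}, k*r_e = {k*r_e:7.1e}  ->  k*|f_k| = {ratio:.4f}")
# k*|f_k| -> 1 only in the double limit k*|a| >> 1 and k*|r_e| << 1.
```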
Figure 1: A class of non-positive potentials (of compact support of radius ) that may lead to a negative effective range in the resonant case . The resonant case is achieved when the three parameters and satisfy Then from Smorodinskii’s formula, see Problem 1 in §131 of Landau , one sees that . One also finds that when with , , where solves .
At zero temperature, we assume that k_typ = k_F, where the Fermi momentum k_F is conventionally defined in terms of the gas total density ρ as for the ideal spin-1/2 Fermi gas:
k_F = (3π² ρ)^(1/3). (8)
In a trap, ρ and thus k_F are position dependent. Condition (7) is well satisfied experimentally, thanks to the Feshbach resonance. The condition k_F b ≪ 1 is also well satisfied at the per cent level, because the Van der Waals length is in the nanometer range. Up to now, there is no experimental tuning of the effective range r_e, and there are cases where k_F |r_e| is not small. However, to study the BEC-BCS crossover, one uses in practice the so-called broad Feshbach resonances, which do not require a too stringent control of the spatial homogeneity of the magnetic field, and where |r_e| is of the order of b; then Eq.(6) is also satisfied.
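For orientation, an order-of-magnitude check of these conditions (the density, Van der Waals length and scattering length below are purely illustrative numbers, not values quoted in the chapter):

```python
import numpy as np

# Illustrative order-of-magnitude check of the unitarity conditions
# k_F*b << 1 and k_F*|a| >> 1 (all numbers below are hypothetical, for orientation only).
rho = 1e19            # total density in m^-3 (~1e13 cm^-3)
b   = 3e-9            # Van der Waals length, a few nanometers
a   = 1e-5            # scattering length near a broad Feshbach resonance, in m

k_F = (3 * np.pi**2 * rho) ** (1.0 / 3.0)   # Fermi momentum, Eq.(8)
print(f"k_F      = {k_F:.3e} m^-1   (1/k_F ~ {1/k_F*1e9:.0f} nm)")
print(f"k_F * b  = {k_F*b:.3f}   (zero-range condition, should be << 1)")
print(f"k_F * a  = {k_F*a:.1f}   (unitarity condition, should be >> 1)")
```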
We note that the assumption , although quite intuitive, is not automatically correct. For example, for bosons, as shown by Efimov Efimov70 , an effective three-body attraction takes place, leading to the occurrence of the Efimov trimers; this attraction leads to the so-called problem of fall to the center Landau , and one has of the order of the largest of the two ranges and . Eq.(6) is then violated, and an extra length scale, the three-body parameter, has to be introduced, breaking the scale invariance of the unitary gas. Fortunately, for three fermions, there is no Efimov attraction, except for the case of different masses for the two spin components: If two fermions of mass interact with a lighter particle of mass , the Efimov effect takes place for larger than Efimov73 ; Petrov . If a third fermion of mass is added, a four-body Efimov effect appears at a slightly lower mass ratio CMP . In what follows we consider the case of equal masses, unless specified otherwise.
At non-zero temperature T, another length scale appears in the unitary gas properties, the thermal de Broglie wavelength λ_dB, defined as
λ_dB = [2π ℏ²/(m k_B T)]^(1/2). (9)
At temperatures larger than the Fermi temperature T_F = E_F/k_B, one has to take k_typ of the order of 1/λ_dB in the conditions (6,7). In practice, the most interesting regime is however the degenerate regime T < T_F, where the non-zero temperature does not bring new conditions for unitarity.
1.2 Some simple properties of the unitary gas
As is apparent in the expression of the two-body scattering amplitude Eq.(4), there is no parameter or length scale issuing from the interaction. As a consequence, for a gas in the trapping potential U(r), the eigenenergies of the N-body problem only depend on ℏ²/m and on the spatial dependence of U(r): the length scale required to get an energy out of ℏ²/m is obtained from the shape of the container.
This is best formalized in terms of a spatial scale invariance. Qualitatively, if one changes the volume of the container, even if the gas becomes arbitrarily dilute, it remains at unitarity and strongly interacting. This is of course not true for a finite value of the scattering length a: If one reduces the gas density, k_F |a| drops eventually to small values, and the gas becomes weakly interacting.
Quantitatively, if one applies to the container a similarity factor λ in all directions, which changes its volume from V to λ³V, we expect that each eigenenergy scales as
E → E/λ² (10)
and each eigenwavefunction scales as
ψ(X) → ψ(X/λ)/λ^(3N/2). (11)
Here X = (r₁, …, r_N) is the set of all coordinates of the particles, and the λ-dependent factor ensures that the wavefunction remains normalized. The properties (10,11), which are at the heart of what the unitary gas really is, will be put on mathematical grounds in section 2 by replacing the interaction with contact conditions on ψ. Simple consequences may be obtained from these scaling properties, as we now discuss.
In a harmonic isotropic trap, where a single particle has an oscillation angular frequency ω, taking as the scaling factor the harmonic oscillator length [ℏ/(mω)]^(1/2), one finds that the eigenenergies take the form
E = ℏω F(N_↑, N_↓), (12)
where the functions F (one for each eigenstate) are universal functions, ideally independent of the fact that one uses lithium 6 or potassium 40 atoms, and depending only on the particle numbers.
In free space, the unitary gas cannot have an N-body bound state (an eigenstate of negative energy), whatever the value of N. If there was such a bound state, which corresponds to a square integrable eigenwavefunction of the relative (Jacobi) coordinates of the particles, one could generate a continuum of such square integrable eigenstates using Eqs.(10,11). This would violate a fundamental property of self-adjoint Hamiltonians analyse_spectrale . Another argument is that the energy of a discrete universal bound state would depend only on ℏ and m, which is impossible by dimensional analysis.
At thermal equilibrium in the canonical ensemble in a box, say a cubic box of volume V with periodic boundary conditions, several relations may be obtained if one takes the thermodynamic limit N → ∞, with a fixed density ρ and temperature T, and if one assumes that the free energy is an extensive quantity. Let us consider for simplicity the case of equal population of the two spin states, N_↑ = N_↓. Then, in the thermodynamic limit, the free energy per particle f is a function of the density ρ and temperature T. If one applies a similarity of factor λ and if one changes T to T/λ² so as to keep a constant ratio of each eigenenergy to k_B T, that is a constant occupation probability for each eigenstate, one obtains from Eq.(10) that
f(ρ/λ³, T/λ²) = f(ρ, T)/λ². (13)
At zero temperature, f reduces to the ground state energy per particle e₀. From Eq.(13) it appears that e₀ scales as ρ^(2/3), exactly as the ground state energy of the ideal Fermi gas. One thus simply has
e₀ = ξ (3/5) ℏ² k_F²/(2m) = ξ (3/5) E_F, (14)
where k_F is defined by Eq.(8) and ξ is a universal number. This is also a simple consequence of dimensional analysis Ho . Taking the derivative with respect to ρ or to the volume, this shows that the same type of relation holds for the zero temperature chemical potential,
μ₀ = ξ E_F, (15)
and for the zero temperature total pressure,
P₀ = (2/5) ξ ρ E_F, (16)
so that P₀ = (2/5) ρ μ₀ = (2/3) ρ e₀.
At non-zero temperature, taking the derivative of Eq.(13) with respect to λ in λ = 1, and using F = E − TS, where E is the mean energy and S is the entropy, as well as (∂F/∂T)_{V,N} = −S, one obtains
μ − f = (2/3) E/N,
where μ = ∂(ρf)/∂ρ is the chemical potential.
From the Gibbs-Duhem relation, the grand potential Ω = F − μN is equal to −PV, where P is the pressure of the gas. This gives finally the useful relation
PV = (2/3) E, (18)
that can also be obtained from dimensional analysis Ho , and that of course also holds at zero temperature (see above). All these properties actually also apply to the ideal Fermi gas, which is obviously scaling invariant. The relation (18) for example was established for the ideal gas in ideal .
Let us finally describe at a macroscopic level, i.e. in a hydrodynamic picture, the effect of the similarity Eq.(11) on the quantum state of a unitary gas, assuming that it was initially at thermal equilibrium in a trap. In the initial state of the gas, consider a small (but still macroscopic) element, enclosed in a volume dV around point r. It is convenient to assume that dV is a fictitious cavity with periodic boundary conditions. In the hydrodynamic picture, this small element is assumed to be at local thermal equilibrium with a temperature T(r). Then one performs the spatial scaling transform Eq.(10) on each many-body eigenstate of the thermal statistical mixture, which does not change the statistical weights. How will the relevant physical quantities be transformed in the hydrodynamic approach ?
The previously considered small element is now at position λr, and occupies a volume λ³ dV, with the same number of particles. The hydrodynamic mean density profile after rescaling, ρ_λ, is thus related to the mean density profile ρ before scaling as
ρ_λ(λr) = ρ(r)/λ³. (19)
Second, is the small element still at (local) thermal equilibrium after scaling ? Each eigenstate of energy E_loc of the locally homogeneous unitary gas within the initial cavity of volume dV is transformed by the scaling into an eigenstate within the cavity of volume λ³ dV, with the eigenenergy E_loc/λ². Since the occupation probabilities of each local eigenstate are not changed, the local statistical mixture remains thermal provided that one rescales the temperature as
T_λ(λr) = T(r)/λ². (20)
A direct consequence is that the entropy of the small element of the gas is unchanged by the scaling, so that the local entropy per particle s in the hydrodynamic approach obeys
s_λ(λr) = s(r). (21)
Also, since the mean energy of the small element is divided by λ² due to the scaling, and the volume of the small element is multiplied by λ³, the equilibrium relation Eq.(18) imposes that the local pressure is transformed by the scaling as
p_λ(λr) = p(r)/λ⁵. (22)
1.3 Application: Inequalities on and finite-temperature quantities
Using the previous constraints imposed by scale invariance of the unitary gas on thermodynamic quantities, in addition to standard thermodynamic inequalities, we show that one can produce constraints involving both the zero-temperature quantity and finite-temperature quantities of the gas.
Imagine that, at some temperature T, the energy E and the chemical potential μ of the non-polarized unitary Fermi gas have been obtained, in the thermodynamic limit. If one introduces the Fermi momentum Eq.(8) and the corresponding Fermi energy E_F = ℏ² k_F²/(2m), this means that one has at hand the two dimensionless quantities E/(N E_F) and μ/E_F.
As a consequence of Eq.(18), one also has access to the pressure P. We now show that the following inequalities hold at any temperature T:
(μ/E_F)^(5/3) [N (3/5) E_F / E]^(2/3) ≤ ξ ≤ E/[N (3/5) E_F]. (25)
In the canonical ensemble, the mean energy is an increasing function of temperature for fixed volume and atom number. Indeed one has the well-known relation ∂E/∂T|_{V,N} = Var(H)/(k_B T²), and the variance of the Hamiltonian H is non-negative. As a consequence, for any temperature T:
E(T) ≥ E(0) = N ξ (3/5) E_F.
From Eq.(14) we then reach the upper bound on ξ given in Eq.(25).
In the grand canonical ensemble, the pressure is an increasing function of temperature for a fixed chemical potential. This results from the Gibbs-Duhem relation Ω = −PV, where Ω is the grand potential and V the volume, and from the differential relation ∂Ω/∂T|_{μ,V} = −S, where S is the entropy. As a consequence, for any temperature T:
P(T, μ) ≥ P(0, μ).
For the unitary gas, the left hand side can be expressed in terms of E using (18). Eliminating the density between Eq.(15) and Eq.(16) we obtain the zero temperature pressure
P(0, μ) = (2/(15π²)) (2m/ℏ²)^(3/2) ξ^(−3/2) μ^(5/2).
This leads to the lower bound on ξ given in Eq.(25).
Let us apply Eq.(25) to the Quantum Monte Carlo results of Burovski : At the critical temperature , and , so that
This deviates by two standard deviations from the fixed node result Carlson . The Quantum Monte Carlo results of Bulgac , if one takes a temperature equal to the critical temperature of Burovski , give and ; these values, in clear disagreement with Burovski , lead to the non-restrictive bracketing . The more recent work Goulko finds and at this critical temperature, and , leading to
Another, more graphical application of our simple bounds is to assume some reasonable value of ξ, and then to use Eq.(25) to construct a zone in the energy-chemical potential plane that is forbidden at all temperatures. In Fig.2, we took ξ = 0.41, inspired by the fixed node upper bound on the exact value of ξ Carlson : The shaded area is the resulting forbidden zone, and the filled disks with error bars represent the in principle exact Quantum Monte Carlo results of various groups at the critical temperature. The prediction of Burovski lies within the forbidden zone. The prediction of Bulgac is well within the allowed zone, whereas the most recent prediction of Goulko is close to the boundary between the forbidden and the allowed zones. If one takes a smaller value for ξ, the boundaries of the forbidden zone will shift as indicated by the arrows on the figure. All this shows that simple reasonings may be useful to test and guide numerical studies of the unitary gas.
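A small numerical helper for this kind of consistency check (a sketch: it simply evaluates the two bounds of Eq.(25) for user-supplied finite-temperature data; the sample inputs below are placeholders, not the Monte Carlo values discussed above):

```python
def xi_bracket(E_over_N_EF, mu_over_EF):
    """
    Bracket on the universal number xi from finite-temperature data, Eq.(25):
        (mu/E_F)**(5/3) * A**(-2/3)  <=  xi  <=  A,
    where A = E / (N * (3/5) * E_F).
    """
    A = E_over_N_EF / 0.6          # convert E/(N E_F) into E/(N * (3/5) E_F)
    B = mu_over_EF
    return B ** (5.0 / 3.0) * A ** (-2.0 / 3.0), A

# Placeholder inputs, for illustration only (NOT the QMC values quoted in the text):
lo, hi = xi_bracket(E_over_N_EF=0.30, mu_over_EF=0.45)
print(f"{lo:.3f} <= xi <= {hi:.3f}")
```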
Figure 2: For the spin balanced uniform unitary gas at thermal equilibrium: Assuming ξ = 0.41 in Eq.(25) defines a zone (shaded in gray) in the plane energy-chemical potential that is forbidden at all temperatures. The black disks correspond to the unbiased Quantum Monte Carlo results of Burovski et al. Burovski , of Bulgac et al. Bulgac , and of Goulko et al. Goulko at the critical temperature. Taking the unknown exact value of ξ, which is below the fixed node upper bound 0.41 Carlson , will shift the forbidden zone boundaries as indicated by the arrows.
1.4 Is the unitary gas attractive or repulsive ?
According to a common saying, a weakly interacting Fermi gas (k_F |a| ≪ 1) experiences an effective repulsion for a positive scattering length a, and an effective attraction for a negative scattering length a. Another common fact is that, in the unitary limit k_F |a| ≫ 1, the gas properties do not depend on the sign of a. As the unitary limit may be apparently equivalently obtained by taking the limit a → +∞ or the limit a → −∞, one reaches a paradox, considering the fact that the unitary gas does not have the same ground state energy as the ideal gas and cannot be at the same time an attractive and repulsive state of matter.
This paradox may be resolved by considering the case of two particles in an isotropic harmonic trap. After elimination of the center of mass motion, and restriction to a zero relative angular momentum to have s-wave interaction, one obtains the radial Schrödinger equation
−(ℏ²/2μ) [ψ''(r) + (2/r) ψ'(r)] + (1/2) μ ω² r² ψ(r) = E_rel ψ(r), (31)
with the relative mass μ = m/2. The interactions are included in the zero range limit by the boundary conditions, the so-called Wigner-Bethe-Peierls contact conditions described in section 2:
ψ(r) = A [1/r − 1/a] + O(r) for r → 0, (32)
that correctly reproduce the free space scattering amplitude
f_k = −a/(1 + ika). (33)
The general solution of Eq.(31) may be expressed in terms of Whittaker M and W functions. For an energy not belonging to the non-interacting spectrum {(2n + 3/2) ℏω, n ∈ ℕ}, the Whittaker function M diverges exponentially for large r and has to be disregarded. The small-r behavior of the Whittaker function W, together with the Wigner-Bethe-Peierls contact condition, leads to the implicit equation for the relative energy, in accordance with Wilkens :
2 Γ(3/4 − E_rel/(2ℏω)) / Γ(1/4 − E_rel/(2ℏω)) = a_ho/a, (34)
with a_ho = [ℏ/(μω)]^(1/2) the harmonic oscillator length of the relative motion.
The Γ function is different from zero and diverges at each non-positive integer argument. Thus Eq.(34) immediately leads in the unitary case 1/a = 0 to the spectrum E_rel = (2n + 1/2) ℏω, n ∈ ℕ. This can be readily obtained by setting ψ(r) = f(r)/r in Eq.(31), so that f obeys Schrödinger's equation for a 1D harmonic oscillator, with the constraint issuing from Eq.(32) that f'(0) = 0, which selects the even 1D states.
The graphical solution of Eq.(34), see Fig. 3, allows one to resolve the paradox about the attractive or repulsive nature of the unitary gas. E.g. starting with the ground state wavefunction of the ideal gas case, of relative energy 3ℏω/2, it appears that the two adiabatic followings (i) a: 0⁺ → +∞ and (ii) a: 0⁻ → −∞ lead to different final eigenstates of the unitary case, to an excited state for the procedure (i), and to the ground state for procedure (ii).
Figure 3: For the graphical solution of Eq.(34), which gives the spectrum for two particles in a three-dimensional isotropic harmonic trap, plot of the function
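For readers who want to reproduce the adiabatic-following argument numerically, here is a small sketch. It uses the Busch et al. form of the two-body trap spectrum, which is assumed here to be equivalent to Eq.(34) above (a_ho is the relative-motion oscillator length); the sample values of a_ho/a are illustrative only:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

# Implicit equation assumed equivalent to Eq.(34) (Busch et al. form):
#   2 * Gamma(-nu) / Gamma(-nu - 1/2) = a_ho / a,   E_rel = (2*nu + 3/2) * hbar*omega
def lhs(nu):
    return 2.0 * gamma(-nu) / gamma(-nu - 0.5)

def relative_energy(aho_over_a, bracket=(-0.499, -1e-6)):
    """Ground-branch relative energy (units of hbar*omega) for a_ho/a < 0 (the a < 0 side)."""
    nu = brentq(lambda n: lhs(n) - aho_over_a, *bracket)
    return 2.0 * nu + 1.5

for x in [-10.0, -1.0, -0.1, 0.0]:
    E = 0.5 if x == 0.0 else relative_energy(x)   # exactly hbar*omega/2 at unitarity
    print(f"a_ho/a = {x:6.1f}  ->  E_rel = {E:.4f} hbar*omega")
```

Scanning a_ho/a from large negative values to 0 follows the branch from just below the ideal-gas value 3ℏω/2 (weak attraction) down to ℏω/2 at unitarity, i.e. the adiabatic path that ends on the unitary ground state.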
The same explanation holds for the many-body case: The interacting gas has indeed several energy branches in the BEC-BCS crossover, as suggested by the toy model of PricoupenkoToy (a toy model that replaces the many-body problem with the one of a matterwave interacting with a single scatterer in a hard wall cavity), see Fig. 4. Starting from the weakly attractive Fermi gas and ramping the scattering length a down to −∞ one explores a part of the ground energy branch, where the unitary gas is attractive; this ground branch continuously evolves into a weakly repulsive condensate of dimers PetrovShlyapSalomon if a further moves from −∞ to +∞ and then to 0⁺. The attractive nature of the unitary gas on the ground energy branch will become apparent in the lattice model of section 2. On the other hand, starting from the weakly repulsive Fermi gas and ramping the scattering length up to +∞, one explores an effectively repulsive excited branch.
In the first experiments on the BEC-BCS crossover, the ground branch was explored by adiabatic variations of the scattering length and was found to be stable. The first excited energy branch was also investigated in the early work Bourdel_Eint , and more recently in Ketterle_excited looking for a Stoner demixing instability of the strongly repulsive two-component Fermi gas. A difficulty for the study of this excited branch is its metastable character: Three-body collisions gradually transfer the gas to the ground branch, leading e.g. to the formation of dimers if a > 0.
Figure 4: In the toy model of PricoupenkoToy , for the homogeneous two-component unpolarized Fermi gas, energy per particle on the ground branch and the first excited branch as a function of the inverse scattering length. The Fermi wavevector k_F is defined in Eq.(8), E_F is the Fermi energy, and a is the scattering length.
1.5 Other partial waves, other dimensions
We have previously considered the two-body scattering amplitude in the s-wave channel. What happens for example in the p-wave channel ? This channel is relevant for the interaction between fermions in the same internal state, where a Feshbach resonance technique is also available Salomon_p ; Jin_p . Can one also reach the unitarity limit Eq.(4) in the p-wave channel ?
Actually the optical theorem shows that relation Eq.(3) also holds for the p-wave scattering amplitude f_k. What differs is the low-k expansion of f_k, which now involves the scattering volume V_s (of arbitrary sign) and a parameter α that has the dimension of the inverse of a length. The unitary limit would require the α term to be negligible as compared to the ik³ term. One can in principle tune V_s to infinity with a Feshbach resonance. Can one then have a small value of α at resonance ? A theorem for a compact support interaction potential of radius b shows however that α is at least of the order of 1/b LudoPRL_ondeP ; Jona .
A similar conclusion holds using two-channel models of the Feshbach resonance Jona ; Chevy . α thus assumes a huge positive value on resonance, which breaks the scale invariance and precludes the existence of a p-wave unitary gas. This does not prevent however reaching the unitary limit in the vicinity of a particular value of k: for V_s large and negative, the real part of the denominator of f_k vanishes near k₀ = (α|V_s|)^(−1/2), so that |f_k| reaches the maximal value 1/k in a vicinity of k₀.
Turning back to the interaction in the s-wave channel, an interesting question is whether the unitary gas exists in reduced dimensions.
In a one-dimensional system the zero range interaction may be modeled by a Dirac potential g δ(x). If g is finite, it introduces a length scale ℏ²/(m g) that breaks the scaling invariance. Two cases are thus scaling invariant, the ideal gas g = 0 and the impenetrable case g = ∞. The impenetrable case however is mappable to an ideal gas in one dimension, it has in particular the same energy spectrum and thermodynamic properties Gaudin .
In a two-dimensional system, the scattering amplitude for a zero range interaction potential is given by Olshanii2D
where γ = 0.577… is Euler's constant and a is the two-dimensional scattering length. For a finite value of a, there is no scale invariance. The case a → 0 corresponds to the ideal gas limit. At first sight, the opposite limit a → ∞ is a good candidate for a two-dimensional unitary gas; however this limit also corresponds to an ideal gas. This appears in the 2D version of the lattice model of section 2 Tonini . This can also be checked for two particles in an isotropic harmonic trap. Separating out the center of mass motion, and taking a zero angular momentum state for the relative motion, to have interaction in the s-wave channel, one has to solve the radial Schrödinger equation:
−(ℏ²/2μ) [ψ''(r) + (1/r) ψ'(r)] + (1/2) μ ω² r² ψ(r) = E_rel ψ(r), (39)
where μ = m/2 is the reduced mass of the two particles, E_rel is an eigenenergy of the relative motion, and ω is the single particle angular oscillation frequency. The interactions are included by the boundary condition in r = 0:
ψ(r) = A ln(r/a) + O(r) for r → 0, (40)
which is constructed to reproduce the expression of the scattering amplitude Eq.(38) for the free space problem.
The general solution of Eq.(39) may be expressed in terms of Whittaker functions M and W. Assuming that E_rel does not belong to the ideal gas spectrum {(2n + 1) ℏω, n ∈ ℕ}, one finds that the M solution has to be disregarded because it diverges exponentially for r → ∞. From the small r behavior of the W solution, one obtains the implicit equation
where the relative harmonic oscillator length is [ℏ/(μω)]^(1/2) and the digamma function ψ is the logarithmic derivative of the Γ function. If a is much larger than this oscillator length, one then finds that E_rel tends to the ideal gas spectrum from below, see Fig. 5, in agreement with the lattice model result that the 2D gas with a large and finite a is a weakly attractive gas.
Figure 5: For the graphical solution of Eq.(41), which gives the spectrum for two interacting particles in a two-dimensional isotropic harmonic trap, plot of the function where stands for and the special function is the logarithmic derivative of the function.
2 Various models and general relations
There are basically two approaches to model the interaction between particles for the unitary gas (and more generally for the BEC-BCS crossover).
In the first approach, see subsections 2.1 and 2.3, one takes a model with a finite range b and a fixed (e.g. infinite) scattering length a. This model may be in continuous space or on a lattice, with one or several channels. Then one tries to calculate the eigenenergies, the thermodynamic properties from the thermal density operator exp(−H/k_B T), etc, and the zero range limit b → 0 should be taken at the end of the calculation. Typically, this approach is followed in numerical many-body methods, such as the approximate fixed node Monte Carlo method Carlson ; Panda ; Giorgini or unbiased Quantum Monte Carlo methods Burovski ; Bulgac ; Juillet . A non-trivial question however is whether each eigenstate of the model is universal in the zero range limit, that is if the eigenenergy and the corresponding wavefunction converge for b → 0. In short, the challenge is to prove that the ground state energy of the system does not tend to −∞ when b → 0.
In the second approach, see subsection 2.2, one directly considers the zero range limit, and one replaces the interaction by the so-called Wigner-Bethe-Peierls contact conditions on the -body wavefunction. This constitutes what we shall call the zero-range model. The advantage is that only the scattering length appears in the problem, without unnecessary details on the interaction, which simplifies the problem and allows to obtain analytical results. E.g. the scale invariance of the unitary gas becomes clear. A non-trivial question however is to know whether the zero-range model leads to a self-adjoint Hamiltonian, with a spectrum then necessarily bounded from below for the unitary gas (see Section 1.2), without having to add extra boundary conditions. For bosons, due to the Efimov effect, the Wigner-Bethe-Peierls or zero-range model becomes self-adjoint only if one adds an extra three-body contact condition, involving a so-called three-body parameter. In an isotropic harmonic trap, at unitarity, there exists however a non-complete family of bosonic universal states, independent from the three-body parameter and to which the restriction of the Wigner-Bethe-Peierls model is hermitian Jonsell ; WernerPRL . For equal mass two-component fermions, it is hoped in the physics literature that the zero-range model is self-adjoint for an arbitrary number of particles . Surprisingly, there exist works in mathematical physics predicting that this is not the case when is large enough Teta ; Minlos ; however the critical mass ratio for the appearance of an Efimov effect in the unequal-mass body problem given without proof in Minlos was not confirmed by the numerical study CMP , and the variational ansatz used in Teta to show that the energy is unbounded below does not have the proper fermionic exchange symmetry. This mathematical problem thus remains open.
2.1 Lattice models and general relations
The lattice models
The model that we consider here assumes that the spatial positions are discretized on a cubic lattice, of lattice constant b that we take as the interaction range. It is quite appealing in its simplicity and generality. It naturally allows one to consider a contact interaction potential, opposite spin fermions interacting only when they are on the same lattice site. Formally, this constitutes a separable potential for the interaction (see subsection 2.3 for a reminder), a feature known to simplify diagrammatic calculations NSR . Physically, it belongs to the same class as the Hubbard model, so that it may truly be realized with ultracold atoms in optical lattices BlochMott , and it allows one to recover the rich lattice physics of condensed matter physics and the corresponding theoretical tools such as Quantum Monte Carlo methods Burovski ; Juillet .
The spatial coordinates of the particles are thus discretized on a cubic grid of step b. As a consequence, the components of the wavevector k of a particle have a meaning modulo 2π/b only, since the plane wave function exp(ik·r) defined on the grid is not changed if a component of k is shifted by an integer multiple of 2π/b. We shall therefore restrict the wavevectors to the first Brillouin zone of the lattice:
D = [−π/b, π/b)³. (42)
This shows that the lattice structure in real space automatically provides a cut-off in momentum space. In the absence of interaction and of confining potential, eigenmodes of the system are plane waves with a dispersion relation k ↦ ε_k, supposed to be an even and non-negative function of k. We assume that this dispersion relation is independent of the spin state, which is a natural choice since the ↑ and ↓ particles have the same mass. To recover the correct continuous space physics in the zero lattice spacing limit b → 0, we further impose that it reproduces the free space dispersion relation in that limit, so that
ε_k → ℏ²k²/(2m) for fixed k when b → 0. (43)
The interaction between opposite spin particles takes place when two particles are on the same lattice site, as in the Hubbard model. In first quantized form, it is represented by a discrete delta potential:
V(r₁, r₂) = g₀ δ_{r₁, r₂}/b³. (44)
The factor 1/b³ is introduced because δ_{r₁, r₂}/b³ is equivalent to the Dirac distribution δ(r₁ − r₂) in the continuous space limit. To summarize, the lattice Hamiltonian in second quantized form in the general trapped case is
The plane wave annihilation operators in spin state obey the usual continuous space anticommutation relations if and are in the first Brillouin zone 222In the general case, has to be replaced with where is any vector in the reciprocal lattice., and the field operators obey the usual discrete space anticommutation relations . In the absence of trapping potential, in a cubic box with size integer multiple of , with periodic boundary conditions, the integral in the kinetic energy term is replaced by the sum where the annihilation operators then obey the discrete anticommutation relations for .
The coupling constant is a function of the grid spacing . It is adjusted to reproduce the scattering length of the true interaction. The scattering amplitude of two atoms on the lattice with vanishing total momentum, that is with incoming particles of opposite spin and opposite momenta , reads
as derived in details in Houches03 for a quadratic dispersion relation and in Tangen for a general dispersion relation. Here the scattering state energy actually introduces a dependence of the scattering amplitude on the direction of the incoming momentum when the dispersion relation is not parabolic. If one is only interested in the expansion of the scattering amplitude up to second order in k, e.g. for an effective range calculation, one may conveniently use the isotropic approximation ε_k ≃ ℏ²k²/(2m) thanks to (43). Adjusting g₀ to recover the correct scattering length a gives, from Eq.(46) in the zero-momentum limit:
1/g₀ = m/(4πℏ²a) − ∫_D d³k/(2π)³ 1/(2ε_k), (47)
where the integral is taken over the first Brillouin zone D. The above formula Eq.(47) is reminiscent of the technique of renormalization of the coupling constant Randeria ; Randeria2 . A natural case to consider is the one of the usual parabolic dispersion relation,
ε_k = ℏ²k²/(2m). (48)
A more explicit form of Eq.(47) is then Mora ; LudoVerif :
g₀ = (4πℏ²a/m) / (1 − K a/b), (49)
with a numerical constant K given by
K = 4π ∫_{[−π,π]³} d³q / [(2π)³ q²] ≃ 2.4427, (50)
and that may be expressed analytically in terms of the dilog special function.
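A quick numerical cross-check of this constant (a sketch under the assumptions just stated, i.e. the renormalization relation (47) with the parabolic dispersion relation; the Monte Carlo split below is merely one convenient way to handle the 1/q² integrand):

```python
import numpy as np

# Estimate K = 4*pi * Integral_{[-pi,pi]^3} d^3q / ((2*pi)^3 * q^2), cf. Eq.(50).
# Split the integral into the inscribed ball |q| < pi (analytic: 4*pi*R with R = pi)
# and the corner region |q| > pi, where the integrand is bounded (Monte Carlo).
rng = np.random.default_rng(0)
n = 2_000_000
q = rng.uniform(-np.pi, np.pi, size=(n, 3))
r2 = np.sum(q**2, axis=1)
corner = np.where(r2 > np.pi**2, 1.0 / r2, 0.0)      # integrand, kept only outside the ball
vol_cube = (2.0 * np.pi)**3
I_corner = vol_cube * corner.mean()                   # Monte Carlo estimate of the corner part
I_ball = 4.0 * np.pi**2                               # exact value of the ball part
K = 4.0 * np.pi * (I_ball + I_corner) / vol_cube
print(K)   # should come out close to 2.44
```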
Simple variational upper bounds
The relation Eq.(49) is quite instructive in the zero range limit b → 0, for fixed non-zero scattering length a and atom numbers N_↑, N_↓: In this limit, the lattice filling factor tends to zero, and the lattice model is expected to converge to the continuous space zero-range model, that is to the Wigner-Bethe-Peierls model described in subsection 2.2. For each of the eigenenergies this means that
E(b) → E_zero-range for b → 0, (51)
where in the right hand side the set of E_zero-range's are the energy spectrum of the zero range model. On the other hand, for a small enough value of b, the denominator in the right-hand side of Eq.(49) is dominated by the term −K a/b, the lattice coupling constant g₀ ≃ −4πℏ²b/(m K) is clearly negative, and the lattice model is attractive, as already pointed out in kitp . By the usual variational argument, this shows that the ground state energy of the zero range interacting gas is below the one of the ideal gas, for the same trapping potential and atom numbers:
E₀ ≤ E₀^ideal. (52)
Similarly, at thermal equilibrium in the canonical ensemble, the free energy of the interacting gas is below the one of the ideal gas:
F ≤ F_ideal. (53)
As in Blaizot one indeed introduces the free-energy functional of the (here lattice model) interacting gas, F[σ] = Tr[H σ] + k_B T Tr[σ ln σ], where σ is any unit trace system density operator. Then
F[σ_ideal] = F_ideal + Tr[W σ_ideal], (54)
where σ_ideal is the thermal equilibrium density operator of the ideal gas in the lattice model, and W is the interaction contribution to the N-body Hamiltonian. Since the minimal value of F[σ] over σ is equal to the interacting gas lattice model free energy, the left hand side of Eq.(54) is larger than that free energy. Since the operator W is negative for small b, because g₀ < 0, the right hand side of Eq.(54) is smaller than F_ideal. Finally taking the limit b → 0, one obtains the desired inequality. The same reasoning can be performed in the grand canonical ensemble, showing that the interacting gas grand potential is below the one of the ideal gas, for the same temperature and chemical potentials:
Ω ≤ Ω_ideal. (55)
In ChevyNature , for the unpolarized unitary gas, this last inequality was checked to be obeyed by the experimental results, but it was shown, surprisingly, to be violated by some of the Quantum Monte Carlo results of Burovski . For the particular case of the spatially homogeneous unitary gas, the above reasonings imply that ξ < 1 in Eq.(14), so that the unitary gas is attractive (in the ground branch, see subsection 1.4). Using the BCS variational ansatz in the lattice model (one may check, e.g. in the two-particle sector, that the BCS variational wavefunction, which is a condensate of pairs in some pair wavefunction, does not obey the Wigner-Bethe-Peierls boundary conditions even if the pair wavefunction does, so it looses its variational character in the zero-range model Varenna06 ) one obtains the more stringent upper bound Randeria2 :
ξ ≤ 0.5906. (56)
Finite-range corrections
For the parabolic dispersion relation, the expectation Eq.(51) was checked analytically for two opposite spin particles: For , in free space the scattering amplitude (46), and in a box the lattice energy spectrum, converge to the predictions of the zero-range model LudoVerif . It was also checked numerically for particles in a box, with two particles and one particle: As shown in Fig. 6, for the first low energy eigenstates with zero total momentum, a convergence of the lattice eigenenergies to the Wigner-Bethe-Peierls ones is observed, in a way that is eventually linear in for small enough values of . As discussed in Tangen , this asymptotic linear dependence in is expected for Galilean invariant continuous space models, and the first order deviations of the eigenergies from their zero range values are linear in the effective range of the interaction potential, as defined in Eq.(5), with model-independent coefficients:
However, for lattice models, Galilean invariance is broken and the scattering between two particles depends on their center-of-mass momentum; this leads to a breakdown of the universal relation (57), while preserving the linear dependence of the energy with at low BurovskiNJP .
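Since the eigenenergies are stated to approach their zero-range values linearly in the lattice spacing when the spacing is small enough, the zero-range limit can be estimated numerically by a linear fit and extrapolation to zero spacing. A minimal sketch of that procedure is below; the energy values are placeholders, not data from the figure.

```python
import numpy as np

# Placeholder data: an eigenenergy E(b) computed for several lattice spacings b
# (e.g. b in units of the box size and E in the energy unit used in the text).
b = np.array([0.10, 0.08, 0.06, 0.04, 0.02])
E = np.array([1.352, 1.361, 1.370, 1.379, 1.388])   # hypothetical values

# Fit E(b) ~ E0 + s*b on the small-b points and read off the zero-range limit E0.
s, E0 = np.polyfit(b, E, deg=1)
print(f"extrapolated zero-range energy E0 = {E0:.4f}, slope dE/db = {s:.4f}")
```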
Figure 6: Diamonds: The first low eigenenergies for three fermions in a cubic box with a lattice model, as functions of the lattice constant LudoVerif . The box size is , with periodic boundary conditions, the scattering length is infinite, the dispersion relation is parabolic Eq.(48). The unit of energy is . Straight lines: Linear fits performed on the data over the range , except for the energy branch which is linear on a smaller range. Stars in : Eigenenergies predicted by the zero-range model.
A procedure to calculate in the lattice model for a general dispersion relation is presented in Appendix 1. For the parabolic dispersion relation Eq.(48), its value was given in Varenna06 in numerical form. With the technique described in Appendix 1, we now have the analytical value:
The usual Hubbard model, whose rich many-body physics is reviewed in AntoineVarenna , was also considered in Varenna06 : It is defined in terms of the tunneling amplitude between neighboring lattice sites, here , and of the on-site interaction . The dispersion relation is then
where the summation is over the three dimensions of space. It reproduces the free space dispersion relation only in a vicinity of . The explicit version of Eq.(47) is obtained from Eq.(49) by replacing the numerical constant by . In the zero range limit this leads for to , corresponding as expected to an attractive Hubbard model, lending itself to a Quantum Monte Carlo analysis for equal spin populations with no sign problem Burovski ; Bulgac . The effective range of the Hubbard model, calculated as in Appendix 1, remarkably is negative Varenna06 :
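As an illustration of the statement that the Hubbard dispersion relation reproduces the free-space one only near the zone centre, one may compare the two along one axis of the Brillouin zone. The tight-binding form used below, epsilon(k) = (hbar^2/m b^2) * sum_i [1 - cos(k_i b)], is the standard normalisation whose small-k expansion gives hbar^2 k^2 / 2m; if the text's definition of the tunneling amplitude differs, the prefactor changes accordingly.

```python
import numpy as np

hbar = m = 1.0          # work in units where hbar = m = 1
b = 1.0                 # lattice spacing

def eps_hubbard(kx, ky, kz):
    """Tight-binding (Hubbard) dispersion; its small-k limit is the parabola below."""
    return (hbar**2 / (m * b**2)) * ((1 - np.cos(kx * b)) +
                                     (1 - np.cos(ky * b)) +
                                     (1 - np.cos(kz * b)))

def eps_parabolic(kx, ky, kz):
    return hbar**2 * (kx**2 + ky**2 + kz**2) / (2 * m)

# Compare along the kx axis up to the Brillouin-zone edge pi/b.
for kx in np.linspace(0, np.pi / b, 6):
    print(f"k = {kx:5.3f}:  Hubbard = {eps_hubbard(kx, 0, 0):6.3f}, "
          f"parabolic = {eps_parabolic(kx, 0, 0):6.3f}")
```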
It thus becomes apparent that an ad hoc tuning of the dispersion relation may lead to a lattice model with a zero effective range. As an example, we consider a dispersion relation
where is a numerical constant less than . From Appendix 1 we then find that
The corresponding value of is given by Eq.(49) with .
As pointed out in BurovskiNJP , additionally fine-tuning the dispersion relation to cancel not only but also another coefficient (denoted by in BurovskiNJP ) may have some practical interest for Quantum Monte Carlo calculations that are performed with a non-zero , by canceling the undesired linear dependence of thermodynamical quantities and of the critical temperature on .
Energy functional, tail of the momentum distribution and pair correlation function at short distances
A quite ubiquitous quantity in the short-range or large-momentum physics of gases with zero range interactions is the so-called “contact”, which, restricting here for simplicity to thermal equilibrium in the canonical ensemble, can be defined by
For zero-range interactions, this quantity determines the large- tail of the momentum distribution
as well as the short-distance behavior of the pair distribution function
Here the spin- momentum distribution is normalised as . The relations (63,64,65) were obtained in Tan1 ; Tan2 . Historically, analogous relations were first established for one-dimensional bosonic systems Lieb ; Olshanii with techniques that may be straightforwardly extended to two dimensions and three dimensions Tangen . Another relation derived in Tan1 for the zero-range model expresses the energy as a functional of the one-body density matrix:
where is the spatial number density.
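Returning to relation (64): in practice it is often read in the other direction, with the contact extracted from the plateau of k^4 n(k) at large momenta. A minimal sketch of that extraction is below, assuming a tail of the form n_sigma(k) -> C/k^4; the prefactor conventions depend on the normalisation of n_sigma, which is not reproduced in the text, so the numbers are only illustrative.

```python
import numpy as np

def extract_contact(k, n_k, k_min):
    """Estimate the contact C from the large-k plateau of k^4 * n(k).

    k     : array of momenta
    n_k   : momentum distribution n_sigma(k) on those momenta
    k_min : only momenta above this value are used for the plateau average
    """
    tail = k >= k_min
    return np.mean(k[tail]**4 * n_k[tail])

# Toy usage with a synthetic distribution that has an exact C/k^4 tail:
C_true = 2.5
k = np.linspace(0.1, 50.0, 500)
n_k = C_true / (k**4 + 1.0)          # smooth at small k, ~ C/k^4 at large k
print("extracted contact:", extract_contact(k, n_k, k_min=20.0))   # ~ C_true
```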
One usually uses (64) to define , and then derives (63). Here we rather take (63) as the definition of . This choice is convenient both for the two-channel model discussed in Section 2.3 and for the rederivation of (64,65,66) that we shall now present, where we use a lattice model before taking the zero-range limit.
From the Hellmann-Feynman theorem (that was already put forward in Lieb ), the interaction energy is equal to . Since we have [see the relation (47) between and ], this can be rewritten as
Expressing in terms of using once again (47), adding the kinetic energy, and taking the zero-range limit, we immediately get the relation (66). For the integral over momentum to be convergent, (64) must hold (in the absence of mathematical pathologies).
To derive (65), we again use (67), which implies that the relation
holds for , where is the zero-energy two-body scattering wavefunction, normalised in such a way that
[see Tangen for the straightforward calculation of ]. Moreover, in the regime where is much smaller than the typical interatomic distances and than the thermal de Broglie wavelength (but not necessarily smaller than ), it is generally expected that the -dependence of is proportional to , so that (68) remains asymptotically valid. Taking the limits and then gives the desired (65).
Alternatively, the link (64,65) between short-range pair correlations and large- tail of the momentum distribution can be directly deduced from the short-distance singularity of the wavefunction coming from the contact condition (75) and the corresponding tail in Fourier space Tangen , similarly to the original derivation in 1D Olshanii . Thus this link remains true for a generic out-of-equilibrium statistical mixture of states satisfying the contact condition Tan1 ; Tangen .
Absence of simple collapse
To conclude this subsection on lattice models, we try to address the question of the advantage of lattice models as compared to the standard continuous space model with a binary interaction potential between opposite spin fermions. Apart from practical advantages, due to the separable nature of the interaction in analytical calculations, or to the absence of a sign problem in the Quantum Monte Carlo methods, is there a true physical advantage in using lattice models?
One may argue for example that everywhere non-positive interaction potentials may be used in continuous space, such as a square well potential, with a range-dependent depth adjusted to have a fixed non-zero scattering length and no two-body bound states. E.g. for a square well potential , where is the Heaviside function, one simply has to take
to have an infinite scattering length. For such an attractive interaction, it seems then that one can easily reproduce the reasonings leading to the bounds Eqs.(52,53). It is known however that there exists a number of particles , in the unpolarized case , such that this model in free space has a -body bound state, necessarily of energy Blatt ; Panda ; Baym . In the thermodynamic limit, the unitary gas is thus not the ground phase of the system; it is at most a metastable phase, and this prevents a derivation of the bounds Eqs.(52,53). This catastrophe is easy to predict variationally, taking as a trial wavefunction the ground state of the ideal Fermi gas enclosed in a fictitious cubic hard-wall cavity of size theseFelix . In the large limit, the kinetic energy in the trial wavefunction is then , see Eq.(14), where the Fermi wavevector is given by Eq.(8) with a density , so that
Since all particles are separated by a distance less than , the interaction energy is exactly
and wins over the kinetic energy for large enough, for the considered ansatz. Obviously, a similar reasoning leads to the same conclusion for an everywhere negative, not necessarily square-well interaction potential 444In fixed node calculations, an everywhere negative interaction potential is used Carlson ; Panda ; Giorgini . It is unknown if in these simulations exceeds the minimal value required to have a bound state. Note that the imposed nodal wavefunction in the fixed node method, usually the one of the Hartree-Fock or BCS state, would however be quite different from the one of the bound state. One could imagine suppressing this problem by introducing a hard-core repulsion, in which case however the purely attractive nature of would be lost, ruining our simple derivation of Eqs.(52,53).
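The scaling behind this catastrophe (kinetic energy growing as N^{5/3} at fixed cavity size, attractive pair energy growing as N(N-1)/2) can be made concrete with a few lines of arithmetic. The sketch below uses arbitrary placeholder values for the cavity size and well depth, only to exhibit the crossover in N; it is not tied to the numbers of the cited references.

```python
import numpy as np

hbar = m = 1.0
b = 1.0            # range of the square well / cavity size (placeholder)
V0 = 1.0           # well depth, of order hbar^2/(m b^2) (placeholder)

def kinetic(N):
    """Ideal-gas kinetic energy of N spin-1/2 fermions (N/2 per spin) in a cavity of size ~ b.

    Scales as N^{5/3}: E_kin ~ (3/5) * N * hbar^2 kF^2 / (2m), with kF built from
    the per-spin density (N/2)/b^3."""
    n_per_spin = (N / 2) / b**3
    kF = (3 * np.pi**2 * n_per_spin)**(1.0 / 3.0)
    return 0.6 * N * hbar**2 * kF**2 / (2 * m)

def interaction(N):
    """All pairs sit within the well range, so E_int = -V0 * N*(N-1)/2 (scales as N^2)."""
    return -V0 * N * (N - 1) / 2

for N in [4, 10, 30, 100]:
    print(N, kinetic(N) + interaction(N))   # turns negative (unbounded below) at large N
```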
The lattice models are immune to this catastrophic variational argument, since one cannot put more than two spin fermions “inside” the interaction potential, that is on the same lattice site. Still they preserve the purely attractive nature of the interaction. This does not prove however that their spectrum is bounded from below in the zero range limit, as pointed out in the introduction of this section.
2.2 Zero-range model, scale invariance and virial theorem
The zero-range model
The interactions are here replaced with contact conditions on the -body wavefunction. In the two-body case, the model, introduced already by Eq.(32), is discussed in detail in the literature, see e.g. HouchesCastin99 in free space where the scattering amplitude is calculated and the existence for of a dimer of energy and wavefunction is discussed, being the reduced mass of the two particles. The two-body trapped case, solved in Wilkens , was already presented in subsection 1.4. Here we present the model for an arbitrary value of .
For simplicity, we consider in first quantized form the case of a fixed number of fermions in spin state and a fixed number of fermions in spin state , assuming that the Hamiltonian cannot change the spin state. We project the -body state vector onto the non-symmetrized spin state with the first particles in spin state and the remaining particles in spin state , to define a scalar -body wavefunction:
where is the set of all coordinates, and the normalization factor ensures that is normalized to unity 555 The inverse formula giving the full state vector in terms of is , where the projector is the usual antisymmetrizing operator . . The fermionic symmetry of the state vector allows one to express the wavefunction on another spin state (with any different order of and factors) in terms of . For the considered spin state, this fermionic symmetry imposes that is odd under any permutation of the first positions , and also odd under any permutation of the last positions .
In the Wigner-Bethe-Peierls model, that we also call zero-range model, the Hamiltonian for the wavefunction is simply represented by the same partial differential operator as for the ideal gas case:
where is the external trapping potential assumed for simplicity to be spin-state independent. As is however well emphasized in the mathematics of operators on Hilbert spaces analyse_spectrale , an operator is defined not only by a partial differential operator, but also by the choice of its so-called domain . A naive presentation of this concept of domain is given in Appendix 2. Here the domain does not coincide with the ideal gas one. It includes the following Wigner-Bethe-Peierls contact conditions: For any pair of particles , when for a fixed position of their centroid , there exists a function such that
These conditions are imposed for all values of different from the positions of the other particles.
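For completeness, the contact condition invoked above can be written in a standard notation (the symbols below are generic, since the displayed equation is not reproduced in the text): as the distance r_ij between particles i and j tends to zero at fixed centroid,

```latex
% Standard form of the Wigner-Bethe-Peierls contact condition (notation assumed)
\psi(\mathbf r_1,\ldots,\mathbf r_N)
  \;=\; A_{ij}\!\left(\mathbf R_{ij};\{\mathbf r_k\}_{k\neq i,j}\right)
        \left(\frac{1}{r_{ij}}-\frac{1}{a}\right) \;+\; O(r_{ij}),
\qquad
\mathbf R_{ij}=\tfrac{1}{2}(\mathbf r_i+\mathbf r_j),\quad r_{ij}=|\mathbf r_i-\mathbf r_j| ,
```

where a is the s-wave scattering length and A_ij plays the role of the function whose existence is asserted above.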
Schrödinger equation
Quantum Mechanics
Introduction: Schrödinger equation
The time evolution of a quantum system follows from the solution of the TDSE, the Time-Dependent Schrödinger Equation. For simplicity consider the TDSE describing a system that can be in no more than two states. For a quantum system that has only two possible states phi_1 and phi_2 the TDSE reads
i hbar d/dt [ phi_1(t) ; phi_2(t) ] = H [ phi_1(t) ; phi_2(t) ]

where hbar is the reduced Planck constant and t denotes time. The 2x2 matrix

H = [ H_11  H_12 ; H_21  H_22 ]
is the Hamiltonian describing the system. The solution of this equation gives a complete description of the time evolution of the quantum system. For instance, the probability to find the system in state 1 at time T is given by
P_1 (T) = | phi_1 (T) |^2
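As a concrete illustration of how the two-state TDSE is solved in practice, the sketch below integrates it numerically for an example Hamiltonian (equal diagonal energies and a constant coupling, chosen arbitrarily as placeholders, not values from the text) and evaluates P_1(T) = |phi_1(T)|^2.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
# Example 2x2 Hamiltonian (placeholder values): equal energies, constant coupling.
H = np.array([[0.0, 0.5],
              [0.5, 0.0]])

def tdse(t, phi):
    # i*hbar d(phi)/dt = H phi  ->  d(phi)/dt = (-i/hbar) H phi
    return (-1j / hbar) * (H @ phi)

phi0 = np.array([1.0 + 0j, 0.0 + 0j])     # start in state 1
T = 5.0
sol = solve_ivp(tdse, (0.0, T), phi0, rtol=1e-9, atol=1e-9)

P1 = np.abs(sol.y[0, -1])**2              # probability to find the system in state 1 at time T
print("P_1(T) =", P1, " (exact two-level result:", np.cos(0.5 * T / hbar)**2, ")")
```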
The Hamiltonian for a particle in an electromagnetic potential is given by
H = (P - q A(R,T))^2 / (2m) + q V(R,T)

where P is the momentum operator, A(R,T) the vector potential, V(R,T) the scalar potential, q the charge and m the mass of the particle.
The quantum state of the particle is characterized by the amplitude Psi (R,T) for any point in space and time. This amplitude is also called the wave function of the particle. As before, the TDSE governs the time evolution of the wave function. The probability to find the particle at the position R at time T is given by
P (R,T) = | Psi(R,T) |^2
The wave function contains all the information about the quantum system. Once it is known for all points in space and time, any physical quantity can be calculated. |
Friday, June 12, 2015
Where are we on the road to quantum gravity?
1. How could the problem of time be blamed on quantization? It seems to be rooted in classical GR.
2. It seems to be rooted in the Hamiltonian formalism.
3. "how detached quantum gravity is from experiment." The only predictive gravitation is geometric, thus 90 days in a geometric Eotvos experiment. Everything exactly cancels except geometry where the most extreme composition and field contrasts are inert. A non-zero signal is definitive. "Experimental Search for Quantum Gravity: The Hard Facts" Green's function imposes mirror-symmetry. There is only contrary evidence that the vacuum is exactly mirror-symmetric toward matter. Baryogenesis eludes theory, Sakharov conditions or otherwise.
Ashtekar (plus Immirzi) is GR chiral decomposition. The Coupe du Roi also shows how a perfectly symmetric ball has hidden structure.
4. "...discourage people who follow long standing established research programs..."
So, people like Lee Smolin? He's been doing LQG for how long and GR still hasn't been derived from it in any kind of limit? He says, "The emergence of general relativity from the semiclassical approximation of the path integral is understood." which is clearly a lie. What does he mean by "understood"? If it hasn't been explicitly shown, then it is not "understood".
It doesn't surprise me that he says all these things. I remember a while back Lee Smolin hyping up a particular research program concerning braids in LQG and how it was going to show us that the Standard Model can be shown to emerge from the dynamics of LQG. Nothing has come from that.
At this point, I take very little of what he says seriously.
To be honest, I think all these research programs are more or less worthless. The program that comes out ahead is string theory since, at least, it allows for some kind of unification.
Even quantum gravity phenomenology is semi-worthless since, by definition, quantum gravity cannot be observed with current methods. Quantum gravity can only be observed at extremely high energies, way beyond what we are capable of.
Oh well....
5. Call me an optimist.
It has taken a few decades, but we are approaching an era in which voluminous and precise astronomy observations, coupled with computational power that would have been almost unimaginable when the Standard Model was formulated and String Theory started to take shape, can provide genuine empirical tests of various proposals for inflation; dark energy/cosmological constant; the behavior of particles in the very strong field regime of white dwarfs, neutron stars, and black hole fringes; and the possibility that dark matter phenomena are caused in part or predominantly by modifications to gravity.
The litany of null results out of the LHC for dark matter candidates or other new physics also isn't nothing. A huge swath of parameter space for new physics has been definitively ruled out.
A space telescope program with an LHC scale budget could take that to a whole new level. Something as simple as sending space telescopes to opposite ends of the solar system to allow more observations to be calibrated against parallax measurements could greatly reduce systematic error in gobs of data that we already have in hand.
Progress in particle physics and engineering has also pushed our instrumentation that allows us to test every detail of gravitational phenomena at the solar system level to almost the maximal theoretically possible precision.
We also have a very deep bench of investigators worldwide who through mechanisms like arXiv are sharing information with each other with near theoretically minimal friction. There are more new publicly available papers on GR and quantum gravity written by well trained PhDs each week than there were in whole years during the first half century of GR.
The biggest threat we face, I think, is group-think. Because so many thousands of investigators are so intimately in touch with what each other are thinking, the risk that conventional wisdom will discourage out of the box thinking and destroy the benefits of having a legion of skilled people doing the work is a very real one. There might be something to be said for figuratively locking a few hundred of the most innovative and divergent physicists in a box at some institution in the middle of nowhere to at least have two independent communities of investigators to pursue their own sequence of insights for a few decades, imitating for the theoretical community the notion of having dual independent experiments at Tevatron and the LHC.
6. The class of solutions that Lee and Sabine (and 99.99% of the physics establishment) all seem to think have weight have one major problem - they all assume that QM is some sort of bedrock. The fact that 1000's of PhDs and postdocs have been spent on chasing the Quantization of Gravity should mean one thing.
It can't be done.
Lee does show some light when he says: "I believe that quantum theory requires a completion, in a deeper theory that allows a complete description of individual processes. I see no other way to resolve the measurement problem."
There are not many nonlinear physical theories which we know work, other than General Relativity. Yet the world of physics is so stuck in the linear QM world that it is GR which is assumed to be some approximation, when it is more than likely that the exact opposite is the case. Recent experiments show that QM-like behaviour can emerge from classical fields, so it follows that GR may have the strength and flexibility to build QM, rather than the other way around.
7. I guess you're not posting my comment. It's not like I said anything inappropriate. I just offered a critical view. Last I checked, this wasn't The Reference Frame.
Oh well...
8. "... how detached quantum gravity is from experiment ..." If the space roar number is 6 ± .1 then MOND from string theory with the finite nature hypothesis yields 4.99 ± .03 for the number in the photon underproduction crisis. Is string theory with the finite nature hypothesis a revolution waiting to happen? "In the physics I have learned there were many examples of where the mathematics was giving infinite degenerate solutions to a certain problem (classical mechanical problems e.g.) There the problem was always a mistake in the physics assumption. Infinity is mathematical not physical, as far as I know." — Maria Spiropulu See Maria Spiropulu, THE LANDSCAPE, . Is MOND empirically valid because a complete infinity does not occur in nature?
“The failures of the standard model of cosmology require a new paradigm”, Jan. 2013
9. Vince,
Sorry for the wait, but I can't sit at my computer 24/7 and approve comments. Please give me at least 24 hours, more when traveling. I know it's annoying, but I've gotten really tired of all the crackpottery in my comment sections. I generally don't check email between 7pm and 7am.
10. Tom,
You didn't actually read what I wrote, did you?
11. Vince,
Regarding your comment about qg pheno, you are talking about direct detection, and you're just demonstrating you don't know a lot about the research area if you think that's it. For starters, please read this.
12. I think you are too impatient. What are 15 years of the life of a person in comparison with the entire life of the cosmos?
And I think that a physicist must truly excel in mathematics (and must always be angry about his/her ignorance) and must have a strong mathematical curiosity and a mathematical mind. If one underestimates mathematics compared to physics as you do, one is automatically led astray. Because mathematics is the language of nature and no other language exists, nor shall any other ever exist. Even good experiments, when they are carefully planned, have some kind of mathematical/logical design.
If one neglects sophisticated mathematics, one neglects seeking the appropriate means of expression in which a thought in physics can be properly uttered. If one needs to devote time to problems on convergence and stability it is because the problems in physics that one is addressing require the consideration of those problems. A black hole is a singularity and this has both a mathematical and a physical meaning.
Mathematics is also wonderful for the sake of itself and that is the reason why most physicists are attracted by her. And never fatally attracted: Mathematics truly satisfies all human needs of some people (even people who live below a bridge cannot fail to do mathematics if they happen to be mathematicians: I have seen this) and one can speak of a happiness that is so full of joy that one happily renounces the world of riches, pomps and vanities (the 'physical' world there where it is lacking in humility) for the Platonic world of joy and order with which God delights the mind of those who are strong enough and which appear as weak, poor and masochistic in the world of pomps and riches. All good physicists that I know always want to learn more mathematics and to express things clearly and rigorously.
A theoretical physicist is a poet of the real things and his/her word is the equation. An equation, where all terms are properly defined, belongs to the realm of mathematics and opens the realm of the infinite. One has to study an equation in itself in order to know where the equation cannot fail to be valid. The unified theory of physics should be valid in every instance. The complement of the set of things explained by theory is God, whose beauty and magnificence cannot be grasped with our words, concepts and equations.
13. I find the interview with Smolin quite useful and informative, thanks for sharing. And I will also read your preprint.
14. I have read your article and I have found it very interesting. I think your point is reflected in the quantization condition, where you introduce the field alpha, so that you can tune the quantization condition and decouple gravity as hbar tends to zero (the gravitational coupling constant G being proportional to hbar). I find the idea nice and simple.
In fact, I also had a very similar approach that I never tried to publish (I am not a specialist in quantum gravity and did not know what to do with that idea, even though I considered it interesting because of linking quantum mechanics with things in which I was then involved). What I considered, instead of your alpha, was the mean field r of a system of globally coupled nonlinear oscillators described by a Kuramoto model. The coupling constant of the model was as in your case so that, when hbar ->0 one has an incoherent population of oscillators (and hence the average order parameter r ->0). This would represent your 'unquantized' state. When hbar is nonzero, however, one has a synchronized phase emerging out of incoherence, the order parameter r -> 1 as more and more oscillators are synchronized, and one approaches the traditional quantization condition.
I did not know how to connect the Kuramoto model with gravity. Now, in reading your article, I have got another interesting idea and probably it is time to rescue those old crazy exercises with nonlinear oscillators. I share these ideas here freely in case you have any suggestions to make. I have to study your work in more detail. The monk Zacharias is also interested.
15. Sabine,
Thanks for your article, which I did read - I can see how I was perhaps too hard on you. You do seem to want a way out of the stagnant morass that theoretical physics has become.
Emergent QM along the lines of Bush, Couder, Brady and others is something that mainstream physics wrongly ignores. In fact all emergent phenomena are often looked at as something 'below physics'. Perhaps we don't need new fundamental equations to advance physics at all.
16. Tom,
I wrote about Couder's theory here. I haven't had time to read the more recent paper. Best,
17. Hermannus,
How interesting that you had a similar idea! Unfortunately I don't know anything about the Kuramoto model. I'm not sure what you mean with 'globally coupled', I'd hope they are locally coupled, otherwise you'll run into trouble combining your model with gravity. Best,
18. Ok, thank you very much. 'Globally coupled' does not mean globally coupled in space but in their phases. I considered, in a rather crazy way, that each point in space-time contains (locally) an infinite collection of oscillators. When they are decoupled I reproduced the Poisson brackets of classical mechanics, and when they are all coupled, Heisenberg's commutation relationship. When there is a 'something in between' situation, I had something like your Eq. (3), with r, the order parameter of the collection of oscillators, replacing your alpha.
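For readers unfamiliar with the model being discussed: the standard Kuramoto equations couple N phase oscillators through their mean field, and the order parameter r measures their synchrony (r -> 0 for incoherence, r -> 1 for full phase locking). The sketch below implements that standard form, not the commenter's unpublished gravity construction; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                   # number of oscillators
omega = rng.normal(0.0, 1.0, N)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)     # initial phases
K, dt, steps = 2.5, 0.01, 5000             # coupling, time step, number of steps

for _ in range(steps):
    z = np.mean(np.exp(1j * theta))        # complex order parameter r * exp(i*psi)
    r, psi = np.abs(z), np.angle(z)
    # Kuramoto equation: d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print("order parameter r =", np.abs(np.mean(np.exp(1j * theta))))
```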
A wonderful introduction to the Kuramoto model is provided by Strogatz (I have taught this article at the University)
If you have problems in downloading the article let me know. And if you are interested I can send you a script with more details when I finish it. I cannot reveal my identity to you, however, because this would go against the benedictine law to which I am subject and which I must carefully observe.
19. Usually, the lectures on the Kuramoto model I gave were accompanied with an experiment that I did on synchronization of metronomes, which constitutes an example of application of the Kuramoto model:
What I did consider is that we do not have strings but a local collection of 'metronomes' that do something as above.
One solves the equations of motion of a system of metronomes horizontally coupled by a common support. The equations of motion reduce to the Kuramoto model in the limit of weak coupling (introduced by the common support and by the conservation of momentum of the center of mass of the whole system).
Best regards
20. Dear Sabine!
I am interested in what You think about an approach like this one:
J. Ambjorn (NBI Copenhagen and U. Utrecht), J. Jurkiewicz (U. Krakow), R. Loll (U. Utrecht)
(Submitted on 17 May 2005 (v1), last revised 6 Jun 2005 (this version, v2))
We provide detailed evidence for the claim that nonperturbative quantum gravity, defined through state sums of causal triangulated geometries, possesses a large-scale limit in which the dimension of spacetime is four and the dynamics of the volume of the universe behaves semiclassically. This is a first step in reconstructing the universe from a dynamical principle at the Planck scale, and at the same time provides a nontrivial consistency check of the method of causal dynamical triangulations. A closer look at the quantum geometry reveals a number of highly nonclassical aspects, including a dynamical reduction of spacetime to two dimensions on short scales and a fractal structure of slices of constant time.
21. fiksacie,
I wrote about CDT here and most recently here.
22. "How do we know this procedure isn’t scale dependent? How do we know it works the same at the Planck scale as in our labs? "
Nobody is claiming this, so I think you are banging on open doors here.
Such an approach should work in the perturbative regime though, with weakly coupled Lagrangians. At strong coupling, if you have a continuum limit you are ok.
But don't forget that there are theories with no classical limit and without Lagrangians.
For QG it is quite obvious for me that you need new degrees of freedom, the new degrees of freedom are stringy ones.
23. I read the interview with Lee and saw his remark that asymptotically safe gravity has a problem with stability. I wasn't quite sure what he was referring to at first. I'm thinking now that he must have meant that at any larger structure size than a proton the weak force comes into play and particles and energy can escape. (Remember the fission nuclear bomb.) If that is what he is thinking then he is overlooking something big.
ASG depends on a closed, finite universe. That isn't so strange. If you can't define the borders of what you are attempting to define then there is no hope for solving it. Just assume it and see how far you can get. You can get pretty darn far! If one assumes a closed universe then any coming apart of a bound structure, at any scale you can name, will release energy that will accelerate something else in the universe that will then become bound together with that same asymptotically safe force. The only difference between a proton, which seems to be unconditionally stable, and the universe as whole with an asymptotically safe structure, is that the stability hops from one structure to the next. This is all dependent on a closed universe. Why not?
24. I guess I should add that the fission atomic bomb was used to deploy the fusion atomic bomb. That should lead to some basic intuition about the "global" stability of asymptotically safe gravity.
25. Sabine,
Thanks for the reference. I'll have a look. I do have an open mind. :-)
26. Hi Bee:
Does Ashtekar have his own theory of quantum gravity, different from Smolin's LQG? Can you summarize results in few lines?(!!)
27. kashyap, They are both the same theory but with different variables; that's the short story.
28. Eric,
No, that's not what he meant. He probably meant that it's not known whether the fixed point has a Hamiltonian that is bounded from below, and last time I looked they still didn't know that.
29. Bravo, Bee, bravo!
Now to read the rest of the article :)
30. When we do not know what the majority of the gravitating mass in the universe consists of - we know little about it except its gravitational effects - why do we think we are in a position to produce a unified theory? Is it because we seem to know that dark matter interactions are so weak that there are no additional forces, beyond the electroweak, strong and gravity?
Of course, not understanding the matter content of the universe has nothing to do with quantizing gravity, except in unification scenarios like string theory?
31. Do we know the way to quantum gravity?
32. @Arun: Gravitation ignores composition and field - black holes, neutron stars, white dwarfs, hydrogen stars, Nordtvedt effect, lab stuff. Don't describe it or challenge it with such.
Newton does not parameterize to GPS. Quantum gravitation is not predictive and the standard model has no SUSY. Perfect derivation creates non-empirical models. A founding postulate is geometrically anomalous at the starting gate where physics "knows" it need not look. Nothing else matters, by observation. Listen to the dog that does not bark. First heretical experiment, then applicable theory.
33. I haven't (yet) read any of Smolin's books. One problem with popular-science books (I'm not sure if his fit into that category) is that someone with a background in the science in general, but not in the topic of the book (e.g. physics but not quantum gravity) learns little, if anything, new. (Such books might be OK for "interested laymen" who are interested in a very broad-brush overview.) On the other hand, one hasn't the time to read the technical literature outside of one's own field. There is a real need for something in between. For example, in 1991 Narlikar and Padmanabhan wrote a review called "Inflation for astronomers". Another good example: John Barrow's The Book of Universes (the level is not as high as that of the review, but higher than the typical popular-science book).
So, could I learn anything from Smolin's books?
34. Phillip,
Yes, I know what you mean. I read popular science books in physics primarily because I have an interest in writing. Did I learn anything from that book? I read it a decade ago and honestly I can't recall very much about it. I think I didn't previously know anything about spin networks, and that was the first time I heard about it. I vaguely recall having to look up "node" in a dictionary :p
I sometimes find lecture notes quite useful to get an introduction to a field I'm not so familiar with, but then you're not always lucky and find something suitable.
35. Arun,
I think it's because most particle physicists expect that whatever the unified theory is it will contain a suitable dark matter candidate. To them it's kind of exactly the opposite: instead of dark matter standing in the way of unification, dark matter is a motivation for unification.
36. I read a lot of popular-science books on the same subject (gravity) because I wanted, and still want, to understand it. I stopped buying popular-science books when I discovered two main authors: Einstein and J.A. Wheeler. I haven't stopped reading and re-reading them. Every time I understand a bit more.
Some authors are really.... Fantastic!
For sure I'll never miss one of Sabine's books!
37. Hi Bee,
As with Three Roads to Quantum Gravity, we see how Lee evolves his perspective over time. Subject to change, of course, his perception may evolve too.
Anyway it has been sort of enlightening when one sees what we are doing in the context of Quantum Cognition utilizing quantum theory as a foundational base when exploring our potentials. Why not, when it comes to Quantum Gravity? :)
Kudos to MarkusM.
38. Perhaps we should try a road less traveled?
39. Something noticeable about 'stringy' culture is that almost exclusively people from that culture use extreme put-downs of basically anything that looks another way. Look above at the attack on Lee Smolin. It's not that he's wrong, or been wrong in the past about this matter or that matter. Stringy people don't look at things that way, I think because, phrased like that, it's self-evident there is no case to answer: nothing wrong with trying things and being wrong.
So what they do, the stringy people, is take things to a personal level and couch their 'criticism' in terms of dishonesty, lies, deliberate omission, theft, and so on.
Now, sometimes in life it's true that there is gross dishonesty and deliberate omission and all the rest. At times like that, it's right to call a spade a spade. Problem is, dishonest, lying strategies are just as likely to take the offensive, if not more likely.
Therefore the old adage: when someone is being attacked by someone else on personal grounds, involving dishonesty, lying, etc., then someone is always guilty of exactly that. But which one?
One way to resolve this in detective logic, is to observe that it's not actually easy to fallaciously attack people the way Smolin is attacked. Severe moral and ethical compromises need to be made. And it's just one of those things, that we can't do that, and then stay the same in the other areas of our lives. We can't switch it on and off. If we give up our standards, we go to the new standard that we effectively choose. The whole of us.
So from that we can identify which side has sold itself out for less along the way. They will be targeting indiscriminately. Their views will display cynicism across the board except for their 'own'.
How about something more local like that can be demonstrated here in the comments. Well yeah, that's doable, because the one thing that always HAS to go, when an intellectual makes that compromise so that he can project onto his victim, is the normal high-standards practice of seeking always to see past communication shortcomings, and secondary items, flawed instantiations of examples in what the other person is saying, so as to 'see' as far as possible, what the other person is 'seeing' for the fundamentally critical purpose of putting their position to its strongest form, and answering it, only there.
And for why? Because if you don't do that, you are answering a wholly different matter, that not only is not what that person is saying, but not what anyone is saying or has ever said. You don't even know what you're answering. And that's a corrosive harmful infliction on yourself. That's the price. Of selling out for less.
Look here at the answer to Sabine's very good, highly plausible, and original idea.
"Nobody is claiming this, so I think you are banging on open doors here."
That's coming from a stringy friend. Does he address a reasonable proxy for her position here? I am seeing the mirror opposite of that.
Maybe I'm the dishonest one in putting that example down? Or is it representative of a lot of what's coming out of the string place these days? If it's me, then I'm sorry, because that isn't something I'd want to do.
And that means I don't think it is me. But if it is me it's settled immediately by the fact no one else recognizes anything of the sort from their observations and experiences of the stringy friends.
40. Giotis,
If I'm banging on open doors, then all the rooms seem to be empty ;) If you apply a different quantization prescription to strings you get a different theory (I believe Thiemann wrote a paper about this 10 years ago or so), so it's an assumption that matters. I don't know if you can include the prescription in the effective action and if that makes hbar (and possibly other constants) run.
41. This was shot down by Helling, Policastro
42. Giotis,
Thanks for the reference. I'm not sure what you mean though. I'm not saying that this is a good quantization method or one that one should use, I was just using this as an example that the assumption of the quantization method makes a difference for the outcome.
43. "Have an eye on Achim Kempf and Raffael Sorkin..."
By the way, his name is Rafael. Great guy, too!
44. Lee Smolin was correct...there is a theory of quantum gravity in 2015. In fact, there are any number of theories of quantum gravity in 2015. He then further stipulates that experiment must validate that theory, but which experiment he does not stipulate. Any experiment? I doubt that just any old experiment would satisfy Smolin.
My own sense is from the outside looking in and qg seems to be caught in a recursion of space and motion and continuous time. Science builds its theories with space and motion and continuous time, but there are other conjugate axioms besides space and motion. The Schrödinger equation works well for other conjugates like discrete matter and time. Why Smolin and others do not build their theories on discrete instead of continuous time is a mystery to me.
The answer seems so obvious...
45. I think the Freidel-Leigh-Minic preprint that Smolin mentions [] and also the same authors' previous paper [] are fascinating, and may be the most important pair of QG papers I've read in a long time. And as you suggest, they are explicitly doing something other than the standard form of quantization -- in fact they have a rather plausible-sounding argument that to quantize gravity, or as they put it equivalently to gravitize quantum mechanics, you have to do an extrapolation of Born's suggestion: both the space-time and the quantum momentum space need to have curvature metrics, and these both need to be dynamical. Which normally would cause horrible failures of locality and unitarity, but they show that for string theory, it doesn't. Seriously, go read these two papers.
Application of the Feshbach-resonance management
to a tightly confined Bose-Einstein condensate
G. Filatrella, B.A. Malomed and L. Salasnich
Department of Biological and Environmental Sciences, University of Sannio, via Port’Arsa 11, 82100 Benevento, Italy
CNR-INFM, Regional Laboratory SUPERMAT, via S. Allende, 84081 Baronissi, Italy
CNR-INFM, CNISM, and Department of Physics “Galileo Galilei”, University of Padua, 35122 Padua, Italy
We study suppression of the collapse and stabilization of matter-wave solitons by means of time-periodic modulation of the effective nonlinearity, using the nonpolynomial Schrödinger equation (NPSE) for BEC trapped in a tight cigar-shaped potential. By means of systematic simulations, a stability region is identified in the plane of the modulation amplitude and frequency. In the low-frequency regime, solitons feature chaotic evolution, although they remain robust objects.
I Introduction
Dilute atomic Bose-Einstein condensates (BECs) are accurately described by the Gross-Pitaevskii equation (GPE), alias the cubic nonlinear Schrödinger equation (NLSE) leggett . The sign of the cubic term in the GPE corresponds to the self-defocusing or focusing, if interactions between atoms in the condensate are characterized, respectively, by the positive or negative s-wave scattering length. The self-focusing GPE in any dimension (1D, 2D, or 3D) gives rise to soliton solutions, which are stable in the 1D case. The creation of 1D matter-wave solitons has been reported in experimental works soliton , while 2D and 3D solitons are unstable against the critical and supercritical collapse, respectively (these 2D states are usually called Townes solitons, TSs) Berge' . It was predicted that TSs may be stabilized in the framework of the 2D GPE, without using an external potential, if the constant scattering length is replaced by a time-dependent one, that periodically changes its sign book-boris . In BEC, this can be implemented by means of the Feshbach-resonance management (FRM), i.e., by applying a low-frequency ac magnetic field which acts via the Feshbach resonance Feshbach . This stabilization mechanism was demonstrated in optics, in terms of the transmission of a light beam through a bulk medium composed of layers with alternating signs of the Kerr nonlinearity Isaac , and then in the framework of the 2D GPE 2Dstabilization ; Spain . A somewhat similar technique was proposed recently, making use of a linear coupling, induced by means of a resonant electromagnetic wave, between two different hyperfine states of atoms, which feature opposite signs of the scattering length Randy . The analysis of the FRM was extended to include averaging techniques averaging , generation of solitons from periodic waves periodic , the stabilization of higher-order solitons higher-order , management of discrete arrays discrete , and the case of a chirped modulation frequency Nicolin . However, the stabilization based on the FRM may be, strictly speaking, a transient dynamical regime, as extremely long simulations suggest that the FRM-stabilized TS may be subject to a very slow decay Japan .
The stabilization of 3D solitons by means of the FRM technique alone is not possible, but stable 3D solitons were predicted in a model combining the FRM and a 1D periodic potential Warsaw . Similarly, the stabilization is possible when the FRM is applied in combination with a parabolic potential which strongly confines the condensate in one direction Spain . Most relevant to the experiment is the “cigar-shaped” setting, with the BEC tightly confined in two transverse directions, while the third direction remains free soliton . In the usual approximation, with the cubic nonlinearity in the corresponding 1D GPE, the analysis of the FRM in the latter setting amounts to that reported in Refs. Feshbach . However, if the density of the condensate is not very low, the description in terms of the cubic nonlinearity is inappropriate, the respective 1D equation taking the form of the nonpolynomial Schrödinger equation (NPSE). In particular, it admits the onset of the collapse in the self-attractive condensate in the framework of the 1D description sala1 . Accordingly, a relevant problem, which is the subject of the present work, is to study the possibility of the collapse suppression by means of the FRM technique in the framework of the 1D NPSE. It is relevant to mention that the NPSE was recently used to describe Faraday waves generated in the cigar-shaped trap by a time-periodic modulation of the strength of the transverse confinement Ricardo . We introduce the model in Section 2, and report results obtained by means of systematic numerical simulations in Section 3.
II The nonpolynomial Schrödinger equation
The normalized form of the 3D GPE with the transverse harmonic trapping potential, which acts in the plane, is
Here is the mean-field wave function, and . Further, is the nonlinearity strength, with the s-wave scattering length, the total number of atoms in the condensate, and the confinement radius imposed by the transverse harmonic potential of frequency , with the atomic mass. In Eq. (1) length and time are measured in units of and . As usual, and correspond to the repulsion and attraction between atoms in the BEC, respectively, and is a weak axial potential, which may be present in addition to the strong transverse confinement. Being interested in the stabilization mechanism that does not require the extra potential, we set . Then, the 3D equation can be reduced to the NPSE by means of ansatz sala1
where the 1D wave function is subject to the normalization condition, . Following Refs. sala1 -we , one can eliminate the transverse width, , arriving at the NPSE,
In the case of , stationary solutions are looked for as , where is the chemical potential, and real function obeys equation
Some numerical methods for simulations of the GPE and NPSE (with const) were presented in Ref. sala-numerics .
In the case of the attractive nonlinearity, , the form of Eq. (3) implies that the amplitude of the wave function is limited from above by a critical value,
A dynamical collapse sets in, with transverse width shrinking to zero and the solution developing a singularity in finite time, as approaches the critical value sala-soli . In Ref. antipatici , this was called two-dimensional primary collapse, as it is related to the transverse 2D dynamics.
In addition to the dynamical collapse, the NPSE also admits a static collapse, in the framework of stationary equation (4): for and , this equation admits bright-soliton solutions only below the critical value of the nonlinearity strength, sala-soli . At , the axial density in the bright-soliton solution is smaller than the critical value imposed by condition (5). In Ref. antipatici , this kind of the collapse was called three-dimensional primary collapse, as it involves a quasi-spherical 3D soliton. With regard to the definition of , this restriction determines the largest number of atoms possible in the soliton, . With and on the order of m and nm, respectively, which is typical for experiments in the Li condensate soliton , one has atoms.
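For orientation, the kind of estimate quoted here can be reproduced with one line of arithmetic. The sketch below assumes a static-collapse threshold of the form N_max |a_s| / a_perp ≈ 0.67, the value usually quoted in the NPSE literature (the precise constant should be taken from the cited references), and plugs in a transverse-confinement length and scattering length of the orders of magnitude mentioned in the text (micrometres and nanometres).

```python
# Rough estimate of the maximum soliton atom number from the static-collapse condition.
# Assumed threshold: N_max * |a_s| / a_perp ~= 0.67 (value to be checked against the cited refs).
threshold = 0.67
a_perp = 1.0e-6      # transverse confinement length, ~ 1 micrometre (order of magnitude)
a_s = 1.0e-9         # |scattering length|, ~ 1 nanometre (order of magnitude)

N_max = threshold * a_perp / a_s
print(f"N_max ~ {N_max:.0f} atoms")   # a few hundred atoms for these illustrative numbers
```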
The FRM technique which makes it possible to stabilize 2D matter-wave solitons is based on the respective GPE, where constant is replaced by a periodic function, with , so that alternates between attraction and repulsion. The stabilization requires the presence of the constant (“dc”) component which corresponds to the self-attraction on the average, i.e., .
The action of the FRM within the framework of the NPSE was not considered before. To explore this situation, we take as indicated above, arriving at the following modification of Eq. (3):
Our objective is to identify a region in parameter space where the solitons subjected to the “management” represent stable solutions of Eq. (6).
III Results
Localized solutions to Eq. (6) were categorized as stable ones if, in direct simulations, they featured persistent pulsations, avoiding collapse or decay up to (in some cases, the stability was checked up to ). However, the application of this criterion to the case of is complicated by the fact that, under the action of the low-frequency management, the soliton tends to develop an apparently chaotic behavior, although without a trend to decay, see below. Fixing the time interval as that comprising a large number of periods, simulations become increasingly more difficult for .
The simulations were performed by means of the Crank-Nicolson algorithm with open-ended boundary conditions. The initial state was , which was refined by the integration of Eq. (6) with in imaginary time sala-numerics . The so-generated configuration was then used as the input to simulate Eq. (6) in real time.
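The same workflow (sech-like initial guess, imaginary-time refinement, then real-time propagation with the modulated nonlinearity) can be sketched compactly. The code below uses a split-step Fourier scheme rather than the Crank-Nicolson scheme of the text, simply because it is shorter to write down; it also assumes the standard form of the NPSE transverse-mode term, (1 + 3*g*n)/sqrt(1 + 2*g*n), and placeholder values of the nonlinearity parameters, so it is an illustrative sketch rather than a reproduction of the authors' simulations.

```python
import numpy as np

# Grid and parameters (units: hbar = m = omega_perp = 1, lengths in a_perp).
L, M = 40.0, 512                     # box size and number of grid points (placeholders)
z = np.linspace(-L / 2, L / 2, M, endpoint=False)
dz = z[1] - z[0]
k = 2 * np.pi * np.fft.fftfreq(M, d=dz)

gamma0, gamma1, om = -0.4, 0.2, 1.0  # dc/ac nonlinearity and modulation frequency (placeholders)

def gamma(t):
    # Feshbach-resonance management: periodically modulated nonlinearity strength.
    return gamma0 + gamma1 * np.sin(om * t)

def nonlinear_term(density, g):
    """Transverse-mode (nonpolynomial) term, assuming its standard form
    (1 + 3*g*n)/sqrt(1 + 2*g*n); substitute the exact expression of Eq. (6) if it differs."""
    return (1.0 + 3.0 * g * density) / np.sqrt(1.0 + 2.0 * g * density)

def split_step(f, dt, t, g_of_t):
    """One Strang split-step of i f_t = [-(1/2) d^2/dz^2 + W(|f|^2)] f."""
    f = np.exp(-0.5j * dt * nonlinear_term(np.abs(f)**2, g_of_t(t))) * f
    f = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(f))
    f = np.exp(-0.5j * dt * nonlinear_term(np.abs(f)**2, g_of_t(t))) * f
    return f

# 1) sech-like initial guess, refined by propagation in imaginary time at fixed gamma0.
f = 1.0 / np.cosh(z)
f /= np.sqrt(np.sum(np.abs(f)**2) * dz)
dtau = 1e-3
for _ in range(2000):
    f = split_step(f, -1j * dtau, 0.0, lambda t: gamma0)
    f /= np.sqrt(np.sum(np.abs(f)**2) * dz)          # restore normalisation after each step

# 2) real-time propagation under the modulated nonlinearity; monitor the peak density.
dt = 1e-3
for n in range(20000):
    f = split_step(f, dt, n * dt, gamma)
print("final peak density:", np.max(np.abs(f)**2))   # collapse would show up as runaway growth
```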
Typical examples of stable and unstable solutions are shown in Figs. 1 and 2 (in the model with , the respective soliton is stable). In the latter case, a gradually growing amplitude of the soliton attains critical value (5) at finite , which implies the onset of the dynamical collapse. It happens when the argument of the square root in Eq. (6) becomes zero, i.e., the transverse width of the soliton shrinks to zero. Note that, with , critical density (5) of the axial wave function does not necessarily correspond to the maximum of . Indeed, in Fig. 2(a) the collapse happens at smaller than its maximum value, .
Figure 1: A typical example of a stable soliton solution to Eq. (6). (a) The evolution of the soliton’s amplitude in time. (b) A snapshot of the soliton at . The integration step is , and the size of the integration domain is . Parameters are , , .
Figure 2: A typical example of the collapsing soliton. (a) The evolution of the amplitude up to , when it reaches critical value (5). (b) A snapshot of the soliton just before the onset of the collapse. Parameters are the same as in Fig. 1, except for .
Results of systematic simulations are summarized in the form of the stability diagram displayed in Fig. 3. The stability thresholds shown in the figure, i.e. the maximum value of admitting stable solitons, were found by slowly increasing in steps of , until the instability was attained. The shape of the stability domain in the plane of the management parameters, , is roughly similar to that which was found in management models of a different type, with the time-periodic modulation applied not to the nonlinearity, but to the optical-lattice potential, which is necessary for the existence of stable solitons in those cases. These include the 1D model for gap solitons, with a positive scattering length Thawatchai1 , and the 2D GPE with a negative scattering length and 1D or 2D periodic potential, that stabilizes TSs in the respective settings Thawatchai2 . As in those works, one may expect that here, at very large values of , the stability region will start to expand in the direction of larger values of , as in the limit of the ac term averages to zero.
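The threshold-finding procedure described here (raise the ac amplitude in small steps until the soliton destabilises) is easy to automate around any propagation routine such as the sketch above. In the snippet below, soliton_survives is a hypothetical helper standing in for a full simulation run; the toy criterion inside it exists only so the sketch executes.

```python
def soliton_survives(gamma0, gamma1, omega):
    # Hypothetical stand-in: in practice, run the real-time propagation above and
    # return False if the peak density reached the critical value (collapse) or the
    # soliton decayed within the chosen integration time.
    return gamma1 < 1.0 / (1.0 + 1.0 / omega)

def find_threshold(omega, gamma0, dgamma=0.05, gamma_max=5.0):
    """Largest ac amplitude gamma1 (to within dgamma) for which the soliton stays stable."""
    gamma1 = 0.0
    while gamma1 + dgamma <= gamma_max and soliton_survives(gamma0, gamma1 + dgamma, omega):
        gamma1 += dgamma
    return gamma1

for omega in (0.5, 1.0, 2.0, 4.0):
    print(omega, find_threshold(omega, gamma0=-0.4))
```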
Figure 3: (Color online). Stability borders in the plane of the time-modulation parameters, , as obtained from systematic simulations for different fixed values of . In cases when the threshold depends upon the integration time (see Fig. 4), the respective symbol corresponds to the mean value, with the error bars given as per the respective semi-dispersion.
The stability borders in Fig. 3 are not extended to , as in the region of the low-frequency modulation the solitons feature persistent but apparently chaotic evolution. In fact, the stability domain is well defined for , while in the intermediate region, , the randomness of the soliton evolution makes the stability border dependent on the integration time – see Fig. 4, which demonstrates a natural trend to a decrease of the effective instability threshold with the increase of the evolution time, if the threshold is sensitive to it at all.
Figure 4: Dependence of at the soliton’s instability border on the integration time, for different modulation frequencies and . Symbols denote the stability limits at different frequencies: (diamonds), (crosses), (triangles), (squares).
IV Conclusion
We have used the NPSE, i.e., the 1D mean-field equation for tightly trapped BEC, with the nonpolynomial nonlinearity admitting the onset of the collapse in the framework of the 1D description, for the study of the stabilization of solitons by means of the FRM (Feshbach-resonant-management) technique. The results were reported in the form of stability diagrams in the plane of the management parameters, . The stability domain is roughly similar to that reported in linear-management models for 1D gap solitons and 2D TSs (Townes solitons), supported by optical lattices subjected to the time-periodic modulation. However, stability domains of such a form have not been reported before in models of the nonlinearity management. At small values of the modulation frequency, the stability border becomes fuzzy, as solitons feature chaotic evolution in that case.
The J Curve
Thursday, March 06, 2008
The Joy of Rockets
A short talk that I gave at TED, under the apt mavericks conference theme:
So many people have contacted me since this video went up to relay how rocketry inspired them in their childhood. Rocket science is tangible.
Here are some recent rocket photos and videos:
Icarus
Rocket’s Red Glare
Rocket-eye’s View
Mile High View
Go Canada
Space, the Final Frontier
L3 Bird
Walking on the Moon
Silicon Valley boasts the largest rocketry club in the world. Yet, there is no "legal" launch site anywhere in the Bay Area, a situation that has become endemic across America.
There have been over 500 million Estes rocket launches in the U.S. alone. It's not for safety that rocketry has been pushed out of suburban areas; it's fear of the unknown. Local communities would rather forbid launches in their backyard than think about the systemic effect once all communities do so. This recently happened here, when Livermore shut down the last Silicon Valley site for launches. We are on the search for a new site (DeAnza college used to host sites, and we are currently pitching NASA Ames). If you have a large plot of land and would welcome some excited kids of all ages, please contact us at LUNAR. UPDATE: we succeeded in getting NASA Ames as our low-power launch site. Thanks!!!
Saturday, May 05, 2007
The words just warm the heart. WIRED recently launched the GeekDad blog with multiple contributors.
Parenthood is an atavistic adventure, especially for geeks who rediscover their child-like wonder and awe… and find that they can relate better to kids than many adults. The little people really appreciate arrested development in adults. =)
Another cause for celebration is the rediscovery of toys, but as an adult with a bigger allowance. Chris Anderson, editor in chief of Wired, put it well in one of his GeekDad posts: “Get Lego Mindstorms NXT. Permission to build and program cool toy robots is not the only reason to have children, but it's up there.”
Here are my contributions so far:
Beginner Ants with the NASA gel ant farm
Beginner’s Video Rocketry to capture video feeds from a soaring rocket
Peering into the Black Box: Household appliances become less mysterious when you take them apart
Cheap Laser Art: amazing emergent images with just a laser pointer and a camera
Slot Cars Revisited: modern cars with modern materials
Rocket Science Redux: Trying to build the smallest possible rocket is a great way for children to learn rocket science
Easter Egg Deployment by Rocket with a hundred little parachutes
Celebrate the Child-Like Mind, a topical repost from the J-Curve
From what I can see, the best scientists and engineers nurture a child-like mind. They are playful, open minded and unrestrained by the inner voice of reason, collective cynicism, or fear of failure.
Children remind us of how to be creative, and they foster an existential appreciation of the present. Our perception of the passage of time clocks with salient events. The sheer activity level of children and their rapid transformation accelerates the metronome of life.
Monday, January 01, 2007
Happy New Year
We broke out a wonderful bottle of bubbly with some friends last night, and discovered the official drink of The J Curve....
Starting in mid 2004, I blogged on a weekly basis, then bimonthly in 2005, and just twice in 2006. My creativity here has withered, supplanted by a daily photoblog on flickr.
I have wondered why I find it so much easier to post a daily photo than to sculpt prose on any kind of regular basis. For me, the mental hurdle for a daily photo post is so much lower than text. A photo can be a quick snapshot, without much care for quality, and this is immediately apparent to the viewer. You don't have to waste much time with uninteresting images. With text, if I dash off a few sloppy and poorly thought out paragraphs (like these ones =), the reader has to waste some time to realize that this is a throw-away post, or maybe meant to be tongue-in-cheek. I hold myself to a much higher quality hurdle for linear media — something thoughtful and provocative — and so I procrastinate. Many of my text posts are repurposed material that I wrote for external deadlines (magazines, conferences, congressional testimony), without which I might never have crystallized my disparate thoughts into something coherent.
Anyway, here are my 30 favorite photos and my best shot of 2006. Cheers!
Thursday, July 13, 2006
The Dichotomy of Design and Evolution
Wednesday, June 28, 2006
Brainstorm Questions
The editors of FORTUNE magazine asked four questions of the attendees of Brainstorm 2006. Ross Mayfield is blogging the replies and the ongoing conference. Here are my answers to two of the questions:
None of the individuals named today.
I would bet that in 2016, when we look back on who has had the greatest impact in the prior 10 years, it will be an entrepreneur, someone new, someone unknown to us at this time.
Looking forward from the present, we tend to amplify the leaders of the past. But in retrospect, it’s always clear that the future belongs to a new generation. A new generation of leaders will transcend political systems that cater to the past. I would bet more on a process of empowerment than any particular person.
I tend to be out of touch with fear as an emotion, and so I find myself rationally processing the question and thinking of the worst near-term catastrophe that could affect all of us.
At perhaps no time in recorded history has humanity been as vulnerable to viruses and biological pathogens as we are today. We are entering the golden age of natural viruses, and genetically modified and engineered pathogens dramatically compound the near term threat.
Bill Joy summarizes that “The risk of our extinction as we pass through this time of danger has been estimated to be anywhere from 30% to 50%.”
Why are we so vulnerable now?
The delicate "virus-host balance" observed in nature (whereby viruses tend not to be overly lethal to their hosts) is a byproduct of biological co-evolution on a geographically segregated planet. And now, both of those limitations have changed. Organisms can be re-engineered in ways that biological evolution would not have explored, or allowed to spread widely, and modern transportation undermines natural quarantine formation.
One example: According to Preston in The Demon in the Freezer, a single person in a typical university bio-lab can splice the IL-4 gene from the host into the corresponding pox virus. The techniques and effects are public information. The gene is available mail order.
The IL-4 splice into mousepox made the virus 100% lethal to its host, and 60% lethal to mice who had been vaccinated (more than 2 weeks prior). Even with a vaccine, the IL-4 mousepox is twice as lethal as natural smallpox (which killed ~30% of unvaccinated people).
The last wave of “natural” human smallpox killed over one billion people. Even if we vaccinated everyone, the next wave could be twice as lethal. And, of course, we won’t have time to vaccinate everyone nor can we contain outbreaks with vaccinations.
Imagine the human dynamic and policy implications if we have a purposeful IL-4 outbreak before we are better prepared…. Here is a series of implications that I fear:
1) Ring vaccinations and mass vaccinations would not work, so
2) Health care workers cannot come near these people, so
3) Victims could not be relocated (with current people and infrastructure) without spreading the virus to the people involved.
4) Quarantine would be essential, but it would be in-situ. Wherever there is an outbreak, there would need to be a hair-trigger quarantine.
5) Unlike prior quarantines, where people could hope for the best, and most would survive, this is very different: everyone in the quarantine area dies.
6) Where do you draw the boundary? Neighborhood? The entire city? With 100% lethality, the risk-reward ratio on conservatism shifts.
7) How do you enforce the quarantine? Everyone who thinks they are not yet infected will try to escape with all of the fear and cunning of someone facing certain death if they stay. It would require an armed military response with immediate deployment capabilities.
8) The ratio of those available to enforce quarantine to those contained makes this seem completely infeasible. With unplanned quarantine locations, there is no physical infrastructure to assist in the containment.
9) Once word about a lost city spreads, how long would it take for ad-hoc or planned “accelerated quarantine” to emerge?
10) Once rumor of the quarantine policy spreads, doctors would have a strong perverse incentive to not report cases until they made it out of town…
Sunday, December 11, 2005
Books I am Enjoying Now
Photos of the books and the library they came from. Each image links to comments: Symbolic Immortality, Bookshelf@work.
Saturday, October 29, 2005
Keep On Booming
(I thought I’d post an excerpt from testimony I gave to the WHCoA: the White House Conference on Aging. I tried to use language that might appeal to the current political regime. =)
Every 60 seconds, a baby boomer turns 60. In thinking about the aging demographic in America, let me approach the issue as a capitalist. Rather than regarding the burgeoning ranks of “retirees” as an economic sink of subsidies, I see an enormous market and an untapped opportunity. Many marketers are realizing the power of the boom, and some of our largest investors have made their fortune attending to the shifting needs of the boomers.
Aging boomers are numerous and qualitatively different. Compared to an older generational cohort, the average boomer is twice as likely to have a college degree and 3x as likely to have Internet experience.
Envision a future where many aging boomers are happily and productively working, flex-time, from home, on tasks that require human judgment and can be abstracted out of work flows.
Fortunately, we are clearly entering an information age for the economy. The basis of competition for most companies and all real GNP growth will come from improvements in information processing. Even in medicine and agriculture, the advances of the future will derive from better understanding and manipulation of the information systems of biology.
In short, the boomers could be America’s outsourcing alternative to off-shoring. The Internet’s latest developments in web services and digital communications (VOIP and videoconferencing) lower the transaction costs of segmenting information work across distributed work organizations.
There is a wonderful economic asymmetry between those who have money and those who have time, between those who need an answer and those with information. This is a boomer opportunity. Imagine a modern-day Web librarian. Think of professional services, like translation, consulting or graphic arts. The majority of economic activity is in services, much of which is an information service, freely tradable on a global basis. Imagine an eBay for information. Boomers may be the beneficiaries.
The free market will naturally exploit opportunities in secondary education and retraining, telecommuting technologies for rich communication over the Internet, web services to segment and abstract workflow processes and ship them over the network to aging boomers, and technology to help all of us retain our mental acuity and neural plasticity as we age. Lifelong learning is not just about enlightenment; it’s an economic imperative.
Where can the government help? Primarily in areas already entrenched in regulation. I will point out two areas that need attention:
1) Broadband Access. Broadband is the lifeline to the economy of the future. It is a prerequisite to the vision I just described. But America trails behind twelve other countries in broadband adoption. For example, our per-capita broadband adoption is less than half that of Korea. The Pew Internet Project reports that “only 15% of Americans over the age of 65 have access to the Internet.”
Broadband is infrastructure, like the highways. The roads have to be free for innovation in the vehicles, or software, that run on them. Would we have permitted GM to build the highways in exchange for the right to make them work exclusively with GM cars? Would we forbid the building of new roads because they compete with older paths? Yet that is what we are doing with current broadband regulation.
2) Reengineering the FDA and Medicare. No small feat, but this should be a joint optimization. Medicare has the de facto role to establish reimbursement policy, and it often takes several years after FDA approval for guidelines to be set. This could be streamlined, and shifted to a parallel track to the FDA approval process so that these delays are not additive.
Why is this important? We are entering an intellectual Renaissance in medicine, but the pace of progress is limited by a bureaucracy that evolves at a glacial pace, relative to the technological opportunities that it regulates.
The FDA processes and policies will need to undergo profound transitions to a future of personalized and regenerative medicine. The frustration and tension with the FDA will grow with the mismatch between a static status quo and an exponential pace of technological progress. Exponential? Consider that 80% of all known gene data was discovered in the past 12 months. In the next 20 years, we will learn more about genetics, systems biology and the origin of disease than we have in all of human history.
The fate of nations depends on their unfettered leadership in the frontier of scientific exploration. We need to explore all promising possibilities of research, from nanotechnology to neural plasticity to reengineering the information systems of biology. We are entering a period of exponential growth in technological learning, where the power of biotech, infotech, and nanotech compounds the advances in each formerly discrete domain. In exploring these frontiers, nations are buying options for the future. And as Black-Scholes option pricing reveals, the value of an option grows with the range of uncertainty in its outcome.
These are heady times. Historians will look back on the upcoming nano-bio epoch with no less portent than the Industrial Revolution. If we give our aging boomers free and unfettered broadband access, and our scientists free and unfettered access to the frontiers of the unknown, then our greatest generation, when they look to the next, can take pride in knowing that the best is yet to come.
Saturday, October 01, 2005
XPRS: Big Rockets in the Black Rock Desert
"In terms of sheer coolness, few things beat rocketry."
— Paul Allen, Microsoft co-founder
I just had the most exciting weekend of my life.
For those who are not subscribers to my unified Feedburner RSS feed, the links here are to the relevant photos and commentary.
There was a steady stream of high power rockets, all day and into the night. Their roar quickens the pulse. Especially when they fall from the sky as supersonic lawn darts, shred fins at Mach 2, or go unstable and become landsharks. I had been warned about what happens when a supersonic rocket meets a Chevy Suburban.
The Hybrid Nitrous Oxide rockets and Mercury Joe scale model had glorious launches. To get my L1 Certification for high power rocketry, I had to build a rocket with an H-size motor, and then successfully recover it after launch. I also tested my rocket videocam and GPS and altimeter systems.
Black Rock Desert in Nevada is the only place in the country with an FAA waiver to shoot up to 100,000 feet, well into the stratosphere.
I was camping with a member of the 100K team. It is a beautiful rocket, but this weekend a software bug brought the upper stage back to earth as a supersonic ground-penetrating “bunker buster” that tunneled and blasted a cave 14 feet under ground.
My inner child can’t wait for the next one…
Saturday, July 23, 2005
Reverberations of Friendship
On my flight to Estonia for a Skype board meeting, I was reading my usual geek fare, such as Matt Ridley’s Nature Via Nurture, a wonderful synthesis of phylogenetic inertia, nested genetic promoter feedback loops, bisexual bonobo sisterhoods, and the arrested development of domesticated animals.
While reading various interviews of Craig Venter, I stumbled across a nugget of sculptured prose from Patti Smith, which eloquently captures the resonant emotional filtration of a newfound friend and, in a more abstract way, the curious cultural immersion I felt in my Estonian homeland:
“There are those whom we seek and there are those whom we find. Occasionally we find – however fractured the relativity – one we recognize as kin. In doing so, certain curious aspects of character recede and we happily magnify the common ground.”
Friday, March 25, 2005
Ode to Carbon
I took a close look at the benzene molecular model on my desk, and visions of nested snake loops danced in my head…
Is there something unique about the carbon in carbon-based life forms?
Carbon can form strong bonds with a variety of materials, whereas the silicon of electronics is more finicky. Some elements of the periodic table are quite special. Herein may lie a molecular neo-vitalism, not for the discredited metaphysics of life, but for scalable computational architectures that exploit three dimensions.
Why is the difference in bonding variety between carbon and silicon important? The computational power of nature relies on a multitude of shapes (in the context of Wolfram’s principle of computational equivalence whereby any natural process of interest can be viewed as a comparably complex computation).
“Shape based computing is at the heart of hormone-receptor hookups, antigen-antibody matchups, genetic information transfer and cell differentiation. Life uses the shape of chemicals to identify, to categorize, to deduce and to decide what to do.” (Biomimicry, p.194)
Jaron Lanier abstracts the computation of molecular shapes to phenotropic computation along conformational and interacting surfaces, rather than linear strings like a Turing Machine or a data link. Some of these abstractions already apply to biomimetic robots that “treat the pliability of their own building materials as an aspect of computation.” (Lanier)
When I visited Nobel Laureate Smalley at Rice, he argued that the future of nanotech would be carbon based, due to its uniquely strong covalent bond potential, and carbon’s ability to bridge the world of electronics to the world of aqueous and organic chemistries, a world that is quite oxidative to traditional electronic elements.
At ACC2003, I moderated a debate with Kurzweil, Tuomi and Prof. Michael Denton from New Zealand. While I strongly disagreed with Denton's speculations on vitalism, he started with the interesting proposition that "self-replication arises from unique types of matter and can not be instantiated in different materials... The key to self-replication is self-assembly by energy minimization, relieving the cell of the informational burden of specifying its 3D complexity... Self-replication is not a substrate independent phenomenon." (Of course, self-replication cannot be strictly impossible in other physical systems, for that would violate quantum mechanics, but it might be infeasible to design and build within a reasonable period of time.)
Natural systems exploit the rich dynamics of weak bonds (in protein folding, DNA hybridization, etc.) and perhaps the power of quantum scanning of all possible orbitals (there is a probability for the wave function of each bond). Molecules snap together faster than predicted by normal Brownian interaction rates, and perhaps this is fundamental to their computational power.
For example, consider the chemical reaction of a caffeine molecule binding to a receptor (something which is top of mind =). These two molecules are performing a quantum mechanical computation to solve the Schrödinger equation for all of their particles. This simple system is finding the simultaneous solution for about 2^1000 equations. That is a task of such immense complexity that if all of the matter of the universe was recast into BlueGene supercomputers, they could not find the solution even if they crunched away for the entire history of the universe. And that’s for one of the molecules in your coffee cup. The Matrix would require a different approach. =)
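For the incredulous, here is a tiny Python sketch of the scale mismatch, using assumed round numbers (10^80 atoms, each pretending to be a 10-petaflop machine, crunching for the ~4x10^17 seconds since the Big Bang); it is a back-of-the-envelope toy, not a reference calculation:

```python
# Toy scale comparison: the joint quantum state of ~1000 degrees of freedom
# versus a wildly generous classical compute budget. All constants are
# assumed round figures for illustration, not measured values.

state_space = 2 ** 1000                   # dimension of the joint quantum state

atoms_in_universe = 10 ** 80              # assumed order of magnitude
ops_per_atom_per_sec = 10 ** 16           # pretend every atom is a 10-petaflop machine
age_of_universe_sec = 4 * 10 ** 17        # ~13.8 billion years, in seconds

classical_budget = atoms_in_universe * ops_per_atom_per_sec * age_of_universe_sec

print(f"state space       ~ 10^{len(str(state_space)) - 1}")
print(f"classical budget  ~ 10^{len(str(classical_budget)) - 1}")
print(f"shortfall factor  ~ 10^{len(str(state_space // classical_budget)) - 1}")
```

Even that absurdly generous classical budget falls short of the 2^1000-dimensional state space by a factor of roughly 10^187.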
A simultaneous 3D exploration of all possible bonds warps Wolfram’s classical computational equivalence into a neo-vitalist quantum equivalence argument for the particular elements and material sets that can best exploit these dynamics. A quantum computer with 1000 logical qubits could perfectly simulate the coffee molecule by solving the Schrödinger equations in polynomial time.
Of course this begs the question of how we would design and program these conformational quantum computers. Again, nature provides an existence proof – with the simple process of evolutionary search surpassing intelligent design of complex systems. Which brings us back to the earlier blog prediction, that biology will drive the future of information technology – inspirationally, metaphorically, and perhaps, elementally.
Traditional electronics design, on the other hand, has the advantages of exquisite speed and efficiency. The biggest challenge may prove to be the hybridization of these domains and design processes.
Sunday, March 06, 2005
TED Reflections
TED is a wonderfully refreshing brain spa, an eclectic ensemble of mental exercise that helps rekindle the childlike mind of creativity.
This year’s theme was “Inspired by Nature”, which I believe has broad and interdisciplinary relevance, especially to the future of intelligence and information technology. By the end of the conference, there was a common thread running throughout the myriad talks, a leitmotif along the frontiers of the unknown. I felt as if I had been immersed in a fugue of biomimicry.
I am still trying to synthesize the discussions I had with Kurzweil, Venter and Hillis about subsystem complexity in evolved systems, but until then, I thought I’d share some of my favorite quotes and photos.
• Rodney Brooks, MIT roboticist:
“Within 2-3 weeks, freshmen are adding BioBricks to the E.Coli bacteria chassis. They make oscillators that flash slowly and digital computation agents. But the digital abstraction may not be right metaphor for programming biology.”
“Polyclad flatworms have about 2000 neurons. You can take their brain out and put it back in backwards. The worm moves backwards at first, but adapts over time back to normal. You can rotate its brain 180 degrees and put it in upside down, and it still works. Biology is changing our understanding of complexity and computation.”
• Craig Venter, when asked about the risks of ‘playing God’ in the creation of a new form of microbial life: “My colleague Hammie Smith likes to answer: ‘We don’t play.’”
“With Synthetic Genomics, genes are the design components for the future of biology. We hope to replace the petrochemical industry, most food, clean energy and bioremediation.”
“The sea is very heterogeneous. We sampled seawater microbes every 200 miles and 85% of the gene sequences in each sample were unique... 80% of all known gene data is new in the last year.”
“There are about 5*10^30 microbes on Earth. The Archaea alone outweigh all plants and animals... One milliliter of sea water has 1 million bacteria and 10 million viruses.”
• Graham Hawkes, radical submarine inventor, would agree:
“94% of life on Earth is aquatic. I am embarrassed to call our planet ‘Earth’. It’s an ocean planet.”
• Janine Benyus, author of Biomimicry (discussion):
“Our heat, beat and treat approach to manufacturing is 96% waste... Life adds information to matter. Life creates conditions conducive to life.”
• Kevin Kelly, a brilliant author and synthesizer:
“Organisms hack the rules of life. Every rule has an exception in nature.”
“Life and technology tend toward ubiquity, diversity, specialization, complexity and sociability…. What does technology want? Technology wants a zillion species of one. Technology is the evolution of evolution itself, exploring the ways to explore, a game to play all the games.”
• James Watson, on finding DNA's helix: “It all happened in about two hours. We went from nothing to thing.” (Photo and discussion)
• The Bill Joy nightmare ensemble: GNR epitomized in Venter (Genetics), Kurzweil (Nanotech) and Brooks (Robotics).
• The Feynman Fan club: particle diagrams take on human form =)
• GM’s VP of R&D on the importance of hydrogen to the auto industry.
• Amory Lovins on the inefficiency of current autos
And, for entertainment, a Grateful Dead drum circle, Pilobolus, and polypedal studies.
• Bono, Streaming video of his TED Prize acceptance speech:
“A head of state admitted this to me: There’s no chance this kind of hemorrhaging of human life would be accepted anywhere else other than Africa. Africa is a continent in flames.”
Sunday, January 09, 2005
Thanks for the Memory
While reading Jeff Hawkins’ book On Intelligence, I was struck by the resonant coherence of his memory-prediction framework for how the cortex works. It was like my first exposure to complexity theory at the Santa Fe Institute – providing a perceptual prism for seeing the consilience across various scientific conundrums. So, I had to visit him at the Redwood Neuroscience Institute.
As a former chip designer, I kept thinking of comparisons between the different “memories” – those in our head and those in our computers. It seems that the developmental trajectory of electronics is recapitulating the evolutionary history of the brain. Specifically, both are saturating with a memory-centric architecture. Is this a fundamental attractor in computation and cognition? Might a conceptual focus on speedy computation be blinding us to a memory-centric approach to artificial intelligence?
• First, the brain:
“The brain does not ‘compute’ the answers to problems; it retrieves the answers from memory… The entire cortex is a memory system. It isn’t a computer at all.”
Rather than a behavioral or computation-centric model, Hawkins presents a memory-prediction framework for intelligence. The 30 billion neurons in the neocortex provide a vast amount of memory that learns a model of the world. These memory-based models continuously make low-level predictions in parallel across all of our senses. We only notice them when a prediction is incorrect. Higher in the hierarchy, we make predictions at higher levels of abstraction (the crux of intelligence, creativity and all that we consider being human), but the structures are fundamentally the same.
More specifically, Hawkins argues that the cortex stores a temporal sequence of patterns in a repeating hierarchy of invariant forms and recalls them auto-associatively. The framework elegantly explains the importance of the broad synaptic connectivity and nested feedback loops seen in the cortex.
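As a loose illustration of what auto-associative recall means, here is a toy Hopfield-style network in Python; this is my stand-in example for the general idea of recalling a full pattern from a partial cue, not Hawkins' model of the cortex:

```python
import numpy as np

# Toy auto-associative memory (a classic Hopfield-style network): store a few
# patterns, then recall a complete pattern from a corrupted cue.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian learning: strengthen connections between units that are active together.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    state = cue.astype(float).copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

noisy = patterns[0].copy()
noisy[:2] *= -1                           # corrupt the first two bits of the cue
print("recalled correctly:", np.array_equal(recall(noisy), patterns[0]))
```

Feed it a corrupted cue and it settles back to the stored pattern; the memory is the computation.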
The cortex is a relatively new development by evolutionary time scales. After a long period of simple reflexes and reptilian instincts, only mammals evolved a neocortex, and in humans it usurped some functionality (e.g., motor control) from older regions of the brain. Thinking of the reptilian brain as a “logic”-centric era in our development that then migrated to a memory-centric model serves as a good segue to electronics.
• And now, electronics:
The mention of Moore’s Law conjures up images of speedy microprocessors. Logic chips used to be mostly made of logic gates, but today’s microprocessors, network processors, FPGAs, DSPs and other “systems on a chip” are mostly memory. And they are still built in fabs that were optimized for logic, not memory.
The IC market can be broadly segmented into memory and logic chips. The ITRS estimates that in the next six years, 90% of all logic chip area will actually be memory. Coupled with the standalone memory business, we are entering an era for complex chips where almost all transistors manufactured are memory, not logic.
At the presciently named HotChips conference, AMD, Intel, Sony and Sun showed their latest PC, server, and PlayStation processors. They are mostly memory. In moving from the Itanium to the Montecito processor, Intel saturated the design with memory, moving from three megabytes to 26.5MB of cache memory. From a quick calculation (assuming 6 transistors per SRAM bit and error correction code overhead), the Montecito processor has ~1.5 billion transistors of memory, and 0.2 billion of logic. And Intel thought it had exited the memory business in the 80’s. |-)
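For the curious, that quick calculation spelled out in a few lines of Python; the 6-transistor SRAM cell is standard, and the ~12.5% error-correction overhead is my assumption:

```python
# The quick Montecito arithmetic from the paragraph above, spelled out.
cache_bytes = 26.5e6                # ~26.5 MB of on-chip cache
bits = cache_bytes * 8
transistors_per_sram_bit = 6        # standard 6T SRAM cell
ecc_overhead = 1.125                # assumed ~1 extra bit per 8 for error correction

memory_transistors = bits * transistors_per_sram_bit * ecc_overhead
print(f"cache transistors ~ {memory_transistors / 1e9:.2f} billion")
```

That lands around 1.4 billion transistors of cache, leaving only a few hundred million for logic out of the ~1.7 billion total.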
Why the trend? The primary design enhancement from the prior generation is “relieving the memory bottleneck.” Intel explains the problem with their current processor: "For enterprise work loads, Itanium executes 15% of the time and stalls 85% of the time waiting for main memory.” When the processor lacks the needed data in the on-chip cache, it has to take a long time penalty to access the off-chip DRAM. Power and cost are also improved to the extent that more can be integrated on chip.
Given the importance of memory advances and the relative ease of applying molecular electronics to memory, we may see a bifurcation in Moore’s Law, where technical advances in memory precede logic by several years. This is because molecular self-assembly approaches apply easily to regular 2D structures, like a memory array, and not to the heterogeneous interconnect of logic gates. Self-assembly of simple components does not lend itself to complex designs. (There are many more analogies to the brain that can be made here, but I will save comments about interconnect, learning and plasticity for a future post).
Weaving these brain and semi industry threads together, the potential for intelligence in artificial systems is ripe for a Renaissance. Hawkins ends his book with a call to action: “now is the time to start building cortex-like memory systems... The human brain is not even close to the limit” of possibility.
Hawkins estimates that the memory size of the human brain is 8 terabytes, which is no longer beyond the reach of commercial technology. The issue though, is not the amount of memory, but the need for massive and dynamic interconnect. I would be interested to hear from anyone with solutions to the interconnect scaling problem. Biomimicry of the synapse, from sprouting to pruning, may be the missing link for the Renaissance.
P.S. On a lighter note, here is a photo of a cortex under construction. ;-)
Thursday, November 25, 2004
Giving Thanks to our Libraries & Bio-Hackers
As I eat a large meal today, I am reminded of so much that we should be thankful for. Most evidently, we should give thanks to the epiglottis, the little valve that flaps with every swallow to keep food and drink out of our windpipe. Unlike other mammals, we can’t drink and breathe at the same time, and we are prone to choking, but hey, our larynx location makes complex speech a lot easier.
Much of our biology is more sublime. With the digitization of myriad genomes, we are learning to decode and reprogram the information systems of biology. Like computer hackers, we can leverage a prior library of evolved code, assemblers and subsystems. Many of the radical applications lie outside of medicine.
For example, a Danish group is testing a genetically-modified plant in the war-torn lands of Bosnia and Africa. Instead of turning red in autumn, this plant changes color in the presence of land mines or unexploded ordnance. Red marks the spot for land mine removal.
At MIT, researchers are using accelerated artificial evolution to rapidly breed M13 viruses to infect bacteria in such a way that they bind and organize semiconductor materials with molecular precision.
At IBEA, Craig Venter and Hamilton Smith are leading the Minimal Genome Project. They take the Mycoplasma genitalium from the human urogenital tract, and strip out 200 unnecessary genes, thereby creating the simplest synthetic organism that can self-replicate (at about 300 genes). They plan to layer new functionality on to this artificial genome, to make a solar cell or to generate hydrogen from water using the sun’s energy for photonic hydrolysis (perhaps by splicing in novel genes discovered in the Sargasso Sea for energy conversion from sunlight).
Venter explains: “Creating a new life form is a means of understanding the genome and understanding the gene sets. We don’t have enough scientists on the planet, enough money, and enough time using traditional methods to understand the millions of genes we are uncovering. So we have to develop new approaches… to understand empirically what the different genes do in developing living systems.”
Thankfully, these researchers can leverage a powerful nanoscale molecular assembly machine. It is 20nm on a side and consists of only 99 thousand atoms. It reads a tape of digital instructions to concatenate molecules into polymer chains.
I am referring to the ribosome. It reads mRNA code to assemble proteins from amino acids, thereby manufacturing most of what you care about in your body. And it serves as a wonderful existence proof for the imagination.
So let’s raise a glass to the lowly ribosome and the library of code it can interpret. Much of our future context will be defined by the accelerating proliferation of information technology, as it innervates society and begins to subsume matter into code.
(These themes relate to the earlier posts on the human genome being smaller than Microsoft Office and on the power of biological metaphors for the future of information technology.)
P.S. Happy Thanksgiving, even to the bears… =)
Sunday, November 21, 2004
Nanotech is the Nexus of the Sciences
Disruptive innovation, the driver of growth and renewal, occurs at the edge. In startups, innovation occurs out of the mainstream, away from the warmth of the herd. In biological evolution, innovative mutations take hold at the physical edge of the population, at the edge of survival. In complexity theory, structure and complexity emerge at the edge of chaos – the dividing line between predictable regularity and chaotic indeterminacy. And in science, meaningful disruptive innovation occurs in the inter-disciplinary interstices between formal academic disciplines.
Herein lies much of the excitement about nanotechnology. Quite simply, it is in the richness of human communication about science. Nanotech exposes the core areas of overlap in the fundamental sciences, the place where quantum physics and quantum chemistry can cross-pollinate with ideas from the life sciences.
Over time, each of the academic disciplines develops its own proprietary systems vernacular that isolates it from neighboring disciplines. Nanoscale science requires scientists to cut across scientific languages to unite the isolated islands of innovation.
In academic centers and government labs, nanotech is fostering new conversations. At Stanford, Duke and many other schools, the new nanotech buildings are physically located at the symbolic hub of the schools of engineering, computer science and medicine.
(Keep in mind though, that outside of the science and research itself, the "nanotech" moniker conveys no business synergy whatsoever. The marketing, distribution and sales of a nanotech solar cell, memory chip or drug delivery capsule will be completely different from each other, and will present few opportunities for common learning or synergy.)
Nanotech is the nexus of the sciences. The history of humanity is that we use our tools and our knowledge to build better tools and expand the bounds of our learning. Empowered by the digitization of the information systems of biology, the nanotech nexus is catalyzing an innovation Renaissance, a period of exponential growth in learning, where the power of biotech, infotech and nanotech compounds the advances in each formerly discrete domain. This should be a very exciting epoch, one that historians may look back on with no less portent than the Industrial Revolution.
Sunday, November 14, 2004
Clones and Mutants
“Life is the imperfect transmission of code.” At our life sciences conference in Half Moon Bay, Juan Enriquez shared some of his adventures around the biosphere, from an Argentinean clone farm to shotgun sequencing the Sargasso Sea with Craig Venter. From the first five ocean samples, they grew the number of known genes on the planet by 10x and the number of genes involved in solar energy conversion by 100x. The ocean microbes have evolved over a longer period of time and have pathways that are more efficient than photosynthesis.
Clone Farms
Juan showed a series of photos from his October trip to a farm in Argentina. With simple equipment that fits on a desk, the farmer cloned and implanted 60 embryos that morning. All of the cows in his field came from a cell sample from the ear of one cow.
Some of the cows are genetically modified to produce pharmaceutical proteins in their milk (human EPO). These animal bioreactors are very efficient and could replace large buildings of traditional manufacturing capacity.
Whether stem cell research and treatment for ALS, or cloning cows, Argentina is one of the countries boldly going where the U.S. Federal government fears to tread.
Three Wing Chickens
Juan also showed a genetically engineered three wing chicken. The homeobox gene that has been modified is affectionately called “Sonic Hedgehog” (his son really likes SEGA!)
The homeobox genes are my favorites. They are like powerful subroutine calls that have structural phenotypic effects.
I recommend Juan’s book As the Future Catches You for an exploration of the economic imperative of technology education, especially literacy in the modern languages of digital code and genetic code. And for a populist description of the homeobox genes, I recommend Matt Ridley’s Genome, a very fun primer on genetics. Here is a selection:
“Hedgehog has its equivalents in people and in birds. Three very similar genes do much the same thing in chicks and people… The hedgehog genes define the front and rear of the wing, and it is Hox genes that then divide it up into digits. The transformation of a simple limb bud into a five-fingered hand happens in every one of us, but it also happened, on a different timescale, when the first tetrapods developed hands from fish fins some time after 400 million years ago.”
"So simple is embryonic development that it is tempting to wonder if human engineers should not try to copy it, and invent self-assembling machines.”
One of Juan’s slides was the first hand drawn map of the Internet, circa 1969. Larry Roberts had drawn that map, and happened to be in the audience to brainstorm after the talk.
P.S. The most popular phone at our conference was the Moto Razor, Chinese edition.
P.P.S. The most popular blog photo so far (with over 12,000 visitors) is a simple message…
Sunday, October 31, 2004
Spooks and Goblins
As it’s Halloween here, I got to thinking about strange beliefs and their origins. Do you think that the generation of myths and folkloric false beliefs has declined over time?
In addition to the popularization of the scientific method, I wonder if photography lessened the promulgation of tall tales. Before photography, if someone told you a story about ghosts in the haunted house or the beast on the hill, you could choose to believe them or check for yourself. There was no way to say, “show me a picture of that Yeti or Loch Ness Monster, and then I’ll believe you.”
And, if so, will we regress as we have developed the ability to modify and fabricate photos and video?
For our class on genetic free speech, Lessig used a pre-print of Posner’s new book, Catastrophe: Risk and Response. Posner relates the following statistics on American adults:
• 39% believe astrology is scientific (astrology, not astronomy).
• 33% believe in ghosts and communication with the dead.
Ponder that for a moment. One out of every three U.S. adults believes in ghosts. Who knows what their kids think.
People’s willingness to believe untruths relates to the ability of the average person to reason critically about reality. Here are some less amusing statistics on American adults:
• 46% deny that human beings evolved from earlier animal species.
• 49% don’t know that it takes a year for the earth to revolve around the sun.
• 67% don't know what a molecule is.
• 80% can't understand the NY Times Tuesday science section.
Posner concludes: “It is possible that science is valued by most Americans as another form of magic.” This is a wonderful substrate for false memes and a new generation of bogeymen.
Gotta go… It’s time to trick-or-treat… =)
Wednesday, October 27, 2004
The Photo Blog
For those of you who are not receiving the Feedburner RSS Feed of this blog, you are missing the whimsical and visual postings. So, for Halloween, I thought I’d post links to some of the interesting photos and commentary:
• Fun with: Bush, Kennedy, Gates, Jobs, Moore, and Jamis in Japanese.
• Observations from the first screening of Pixar’s new film, The Incredibles.
• Beautiful Scenes from: Estonia, the Canadian Rockies, Singapore, Montage (Beach), and The Internet.
• Odd Photos: Halloween Horses, Climbing the Dish at Stanford, Extreme Macro Zoom, Elephants, Aquasaurs and Ecospheres, the Technorati Bobsled Team, and the NanoCar spoof (which continues to fool people even this week).
• It came from TED: Visual Material Puzzles (another) and the DeepFlight submarine.
• And, of course, Rockets, Detached Heads, Funky Pink Divas and Robot Women.
An eclectic mix…. Happy Halloween!
Monday, October 18, 2004
Defining “Don’t be Evil”
Back in 1995, it was easy to rig search engine results. Some search engines would actually tell you how they parsed just the first 100 words on the page. And they would let you submit pages to be crawled for fast feedback on how page content modifications lead to search results. Stacking white keywords on a white background at the top of the page did the trick for a couple years.
Then Overture invented the pay for placement model, which Google disdained as “evil” and then adopted as its primary revenue model. Google got around their own evil epithet by clearly delineating paid search results from unpaid. This has been their holy line in the sand. From the Business Journal: "'Don't be evil' is the corporate mantra around Google…. When their competitors began mixing paid placement listings with actual search results, Google stayed pure, drawing a clear line between search results and advertising.”
So Overture and Google have made search engine results a BIG business, and several “consultants” sell advice on how to spike results, but their tricks are short lived.
So it was with some amusement, that I found a way to easily spike certain Google search results. This has worked for a few months now, and it will be interesting to see how long it lasts after this post… ;-)
A reader of this blog pointed out to me that my Blogger Profile gets the top two Google search results for IL-4 smallpox, a genetically modified bioweapon. This is when my blog had no content whatsoever in this area (it now does). My profile is also number one for genetically modified pathogen policy, over thousands of more relevant pages.
And my profile is number one for several areas of whimsy: Techno downbeat music, and Nanotech core memory boards, and Artificial life with female moths, and Viral marketing with Technorati, among others. (disclosure: we invested in Technorati and Overture). Of course, longer phrases are easier to spike, and not everything works for a top placement, but this still seems way too easy.
Why is this interesting? Well, Google owns Blogger, and they get to decide how to fold blog pages into search results. It’s not obvious how to rank a vapid Blogger profile page versus real content… or a competing blog service for that matter. And as Google offers more services like Blogger and Orkut, it will be interesting to see how they promote them in their own search results.
Every person I have met from Google is fantastic, and I don’t think this quirk is an overt strategy passed down from management (and I presume it will disappear as more people exploit it). On the other hand, this is the kind of product tying you would expect from Microsoft. And it begs the question, can a mantra to not do evil infuse into the corporate DNA and continue to drive culture as a company scales?
There's also the question of internal consistency. Thinking back to the holy line in the sand about disclosing advertising in search results, does it somehow not count if you own it?
Google has taken on the challenge of defining evil, which begs for an operational constitution. Neal Stephenson proposes one meta rule: in a climate of moral relativism the only sin is hypocrisy.
Friday, October 15, 2004
Childish Scientists
– Albert Einstein
Monday, October 11, 2004
Notes from EDAY 2004
On Saturday, IDEO mixed some fun and play with some great lectures:
• Stanford Prof. Bob Sutton: “Sometimes the best management is no management at all. Managers consistently overestimate their impact on performance. And once you manage someone, you immediately think more highly of them.” When Chuck House wanted to develop the oscilloscope for HP, David Packard told him to abandon the project. Chuck went “on vacation” and came back with $2MM in orders. Packard later gave him an award inscribed with an accolade for “extraordinary contempt and defiance beyond the normal call of engineering.” When Leakey chose Jane Goodall, he “wanted someone with a mind uncluttered and unbiased by theory.” Sutton’s conclusion for innovative work: “Hire slow learners of the organizational code, people who are oblivious to social cues and have very high self-esteem. They will draw on past individual experience or invent new methods.”
• Dr. Stuart Brown, founder of the Institute for Play, showed a fascinating series of photos of animals playing (ravens sliding on their backs down an icy slope, monkeys rolling snowballs and playing leapfrog, and various inter-species games). “Warm-blooded animals play; fish and reptiles do not. Warm blood stores energy, and a cortex allows for choice and REM sleep.”
Brown has also studied the history of mass murderers, and found “normal play behavior was virtually absent throughout the lives of highly violent, anti-social men. The opposite of ‘play’ is not ‘work’. It’s depression.”
“We are designed to play. We need 3D motion. The smarter the creature the more they play. The sea squirt auto-digests its brain when it becomes sessile.”
• Michael Schrage, MIT Media Lab Fellow, defined play as “the riskless competition between speculative choices. If it’s predictable, it’s not play. The opposite of play is not what is serious, but what is real. The paradox is that you can’t be serious if you don’t play.”
“We need to treat our tools as toys and our toys as tools. Our simulations, models and prototypes need to play.”
Friday, October 08, 2004
More Things Change
I am at the World Technology Summit today. Just finished a panel on accelerating change, where John Smart made the following provocative points:
• Technology learns 100 million times faster than you do.
• Humans are selective catalysts, not controllers, of technological evolutionary development.
• 80-90% of your paycheck comes from automation.
• Catastrophes accelerate societal immunity. The network always wins.
If you want to take a deep dive into these topics with him, John is hosting Accelerating Change 2004 at Stanford, Nov 6-7. He is offering a $50 discount to readers of this blog (discount code "AC2004-J" with all caps).
Update: For those not subscribing to the Feedburner RSS feed, here are some new photos from WTS 2004 and the Awards Dinner.
Sunday, October 03, 2004
Celebrate the Child-Like Mind
Celebrate immaturity. Play every day. Fail early and often.
On Thursday, I went to a self-described "play-date" at David Kelley's house. The founder of IDEO is setting up an interdisciplinary "D-School" for design and creativity at Stanford. David and Don Norman noted that creativity is killed by fear, referencing experiments that contrast people’s approach to walking along a balance beam flat on the ground (playful and expressive) and then suspended in the air (fearful and rigid). They are hosting an open conference on Saturday, appropriately entitled The Power of Play.
In science, meaningful disruptive innovation occurs at the inter-disciplinary interstices between formal academic disciplines. Perhaps the D-school will go further, to “non-disciplined studies” – stripped of systems vernacular, stricture, and the constraints of discipline.
What is so great about the “child-like” mind? Looking across the Bay to Berkeley, I highly recommend Alison Gopnik’s Scientist in the Crib to any geek about to have a child. Here is one of her key conclusions: "Babies are just plain smarter than we are, at least if being smart means being able to learn something new.... They think, draw conclusions, make predictions, look for explanations and even do experiments…. In fact, scientists are successful precisely because they emulate what children do naturally."
Much of the human brain’s power derives from its massive synaptic interconnectivity. I spoke with Geoffrey West from the Santa Fe Institute last night. He observed that across species, synapses/neuron fan-out grows as a power law with brain mass.
At the age of 2 to 3 years old, children hit their peak with 10x the synapses and 2x the energy burn of an adult brain. And it’s all downhill from there.
Cognitive Decline by Age
This UCSF Memory and Aging Center graph shows that the pace of cognitive decline is the same in the 40’s as in the 80’s. We just notice more accumulated decline as we get older, especially when we cross the threshold of forgetting most of what we try to remember.
But we can affect this progression. Prof. Merzenich at UCSF has found that neural plasticity does not disappear in adults. It just requires mental exercise. Use it or lose it. We have to get out of the mental ruts that career tracks and academic “disciplines” can foster. Blogging is a form of mental exercise. I try to let this one take a random walk of curiosities and child-like exploration.
Bottom line: Embrace lifelong learning. Do something new. Physical exercise is repetitive; mental exercise is eclectic.
Friday, October 01, 2004
Quote of the Day
"Microsoft has had clear competitors in the past.
It's good that we have museums to document them."
- Bill Gates, today at the Computer History Museum (former SGI HQ)
At the reception, Gates mingled in front of the wooden Apple 1, with a banner over his head: “The Two Steves.”
Sunday, September 26, 2004
Transcending Moore’s Law with Molecular Electronics
The future of Moore’s Law is not CMOS transistors on silicon. Within 25 years, they will be as obsolete as the vacuum tube.
While this will be a massive disruption to the semiconductor industry, a larger set of industries depends on continued exponential cost declines in computational power and storage density. Moore’s Law drives electronics, communications and computers and has become a primary driver in drug discovery and bioinformatics, medical imaging and diagnostics. Over time, the lab sciences become information sciences, and then the speed of iterative simulations accelerates the pace of progress.
There are several reasons why molecular electronics is the next paradigm for Moore’s Law:
• Size: Molecular electronics has the potential to dramatically extend the miniaturization that has driven the density and speed advantages of the integrated circuit (IC) phase of Moore’s Law. For a memorable sense of the massive difference in scale, consider a single drop of water. There are more molecules in a single drop of water than all transistors ever built. Think of the transistors in every memory chip and every processor ever built, worldwide. Sure, water molecules are small, but an important part of the comparison depends on the 3D volume of a drop. Every IC, in contrast, is a thin veneer of computation on a thick and inert substrate.
• Power: One of the reasons that transistors are not stacked into 3D volumes today is that the silicon would melt. Power per calculation will dominate clock speed as the metric of merit for the future of computation. The inefficiency of the modern transistor is staggering. The human brain is ~100 million times more power efficient than our modern microprocessors. Sure the brain is slow (under a kHz) but it is massively parallel (with 100 trillion synapses between 60 billion neurons), and interconnected in a 3D volume. Stan Williams, the director of HP’s quantum science research labs, concludes: “it should be physically possible to do the work of all the computers on Earth today using a single watt of power.”
• Manufacturing Cost: Many of the molecular electronics designs use simple spin coating or molecular self-assembly of organic compounds. The process complexity is embodied in the inexpensive synthesized molecular structures, and so they can literally be splashed on to a prepared silicon wafer. The complexity is not in the deposition or the manufacturing process or the systems engineering.
Biology does not tend to assemble complexity at 1000 degrees in a high vacuum. It tends to be room temperature or body temperature. In a manufacturing domain, this opens the possibility of cheap plastic substrates instead of expensive silicon ingots.
• Elegance: In addition to these advantages, some of the molecular electronics approaches offer elegant solutions to non-volatile and inherently digital storage. We go through unnatural acts with CMOS silicon to get an inherently analog and leaky medium to approximate a digital and non-volatile abstraction that we depend on for our design methodology. Many of the molecular electronic approaches are inherently digital and immune to soft errors, and some are inherently non-volatile.
For more details, I recently wrote a 20 page article expanding on these ideas and nanotech in general (PDF download). And if anyone is interested in the references and calculations for the water drop and brain power comparisons, I can provide the details in the Comments.
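In the meantime, here is a minimal back-of-the-envelope version of the water drop comparison in Python; the drop size and the cumulative transistor count are assumed round figures, and the brain power comparison depends too heavily on what counts as a single "operation" to pin down in a few lines:

```python
# Back-of-the-envelope water drop comparison. The drop size and the cumulative
# transistor total are assumed round figures, not reference values.
AVOGADRO = 6.022e23                 # molecules per mole
WATER_MOLAR_MASS = 18.0             # grams per mole

drop_grams = 0.05                   # one small drop (~0.05 mL of water)
molecules_per_drop = drop_grams / WATER_MOLAR_MASS * AVOGADRO

transistors_ever_built = 1e19       # assumed generous estimate circa 2004

print(f"molecules in one drop : {molecules_per_drop:.1e}")
print(f"transistors (assumed) : {transistors_ever_built:.1e}")
print(f"ratio                 : {molecules_per_drop / transistors_ever_built:.0f}x")
```

Even granting a generous running total for transistor production, one small drop wins by a couple of orders of magnitude.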
Friday, September 17, 2004
Recapitulation in Nested Evolutionary Dynamics
I noticed the following table of interval time compression midway down the home page of
“3–4 million years ago: collective rock throwing…
500,000 years ago: control of fire
50,000 years ago: bow and arrow; fine tools
5,000 years ago: wheel and axle; sail
500 years ago: printing press with movable type; rifle
50 years ago: the transistor; digital computers”
Then I burst out laughing with a maturationist epiphany: this is exactly the same sequence of development I went through as a young boy! It started with collective rock throwing (I still have a scar inside my lip)..... then FIRE IS COOL!.... then slingshots…. and the wheels of my bike…. then writing and my pellet gun.... and by 7th grade, programming the Apple ][. Spooky.
It reminded me of the catchy aphorism: “ontogeny recapitulates phylogeny” (the overgeneralization that fetal embryonic development replays ancestral evolutionary stages) and recapitulation theories in general.
I’m thinking of Dawkins’s description of memes (elements of ideas and culture) as fundamental mindless replicators, like genes, for which animals are merely vectors for replication (like a host to the virus). In The Meme Machine, Susan Blackmore explores the meme-gene parallels and derives an interesting framework for explaining the unusual size of the human brain and the origins of consciousness, language, altruism, religion, and orkut.
Discussions of the cultural and technological extensions of our biological evolution evoke notions of recapitulation – to reestablish the foundation for compounding progress across generations. But perhaps it is something more fundamental, a “basic conserved and resonant developmental homology” as John Smart would describe it. A theme of evolutionary dynamics operating across different substrates and time scales leads to inevitable parallels in developmental sequences.
For example, Gardner’s Selfish Biocosm hypothesis extends evolution across successive universes. His premise is that the anthropic qualities (life and intelligence-friendly) of our universe derive from “an enormously lengthy cosmic replication cycle in which… our cosmos duplicates itself and propagates one or more "baby universes." The hypothesis suggests that the cosmos is "selfish" in the same metaphorical sense that evolutionary theorist and ultra-Darwinist Richard Dawkins proposed that genes are "selfish." …The cosmos is "selfishly" focused upon the overarching objective of achieving its own replication.”
Gardner concludes with another nested spiral of recapitulation:
“An implication of the Selfish Biocosm hypothesis is that the emergence of life and ever more accomplished forms of intelligence is inextricably linked to the physical birth, evolution, and reproduction of the cosmos.”
Friday, September 10, 2004
Whither Windows?
From the local demos of Longhorn, it seems to me that OS X is the Longhorn preview. As far as I can tell, Microsoft is hoping to do a subset of OS X and bundle applications like iPhoto. Am I missing something?
It seems that the need to use a Microsoft operating system will decline with the improvement in open source device drivers and web services for applications.
Why worry about Microsoft operating systems as a non-user? Well, the spam viruses on Windows affect all of us. I have not had a Mac virus for at least 10 years (sure, you could joke that nobody writes apps for the Mac any more =), but my email inbox has seen the effects of the Windows worms.
And of course, I am an indirect user of Microsoft servers. And that can be another source of concern. Microsoft is a global monoculture and is therefore subject to catastrophic collapse. The resiliency of critical computer networks might suffer if they migrate to a common architecture. Like a monoculture of corn, they can be more efficient, but the vulnerability to pathogens is more polarized - especially in a globally networked world.
When will the desktop Linux swap out occur, as it did seamlessly at Apple with the XNU kernel in OS X?
Saturday, September 04, 2004
Accelerating Change and Societal Shock
The history of technology is one of disruption and exponential growth, epitomized in Moore’s law, and generalized to many basic technological capabilities that are compounding independently from the economy.
For example, for the past 40 years in the semiconductor industry, Moore’s Law has not wavered in the face of dramatic economic cycles. Ray Kurzweil’s abstraction of Moore’s Law (from transistor-centricity to computational capability and storage capacity) shows an uninterrupted exponential curve for over 100 years, again without perturbation during the Great Depression or the World Wars. Similar exponentials can be seen in Internet connectivity, medical imaging resolution, genes mapped and solved 3D protein structures. In each case, the level of analysis is not products or companies, but basic technological capabilities.
In his forthcoming book, Kurzweil summarizes the exponentiation of our technological capabilities, and our evolution, with the near-term shorthand: the next 20 years of technological progress will be equivalent to the entire 20th century.
For most of us, who do not recall what life was like one hundred years ago, the metaphor is a bit abstract. So I did a little research. In 1900, in the U.S., there were only 144 miles of paved road, and most Americans (94%+) were born at home, without a telephone, and never graduated high school. Most (86%+) did not have a bathtub at home or reliable access to electricity. Consider how much technology-driven change has compounded over the past century, and consider that an equivalent amount of progress will occur in one human generation, by 2020. It boggles the mind, until one dwells on genetics, nanotechnology, and their intersection.
Exponential progress perpetually pierces the linear presumptions of our intuition. “Future Shock” is no longer on an inter-generational time-scale. How will society absorb an accelerating pace of externalized change? What does it mean for our education systems, career paths, and forecast horizons?
Friday, September 03, 2004
Joke of the day
There are 10 kinds of people in the world: those who think in binary and those who don't.
Sunday, August 29, 2004
Can friendly AI evolve?
Humans seem to presume an "us vs. them" mentality when it comes to machine intelligence (certainly in the movies =).
But is the desire for self-preservation coupled to intelligence or to evolutionary dynamics?… or to biological evolution per se? Self-preservation may be some low-level reflex that emerges in the evolutionary environment of biological reproduction. It may be uncoupled from intelligence. But, will it emerge in any intelligence that we grow through evolutionary algorithms?
And is this path dependent? Given the iterated selection tests of any evolutionary process, is it possible to evolve an intelligence without an embedded survival instinct?
Thursday, August 26, 2004
FCC Indecency & Howard Stern
I forgot to comment on the remarkably candid interview that FCC Chairman Michael Powell gave last month on the topics of broadband policy, industry transitions, regulatory philosophy, Skype and VOIP, censorship and Howard Stern. While the streaming video has been available, the transcript proliferated in the blogosphere:
Denise Howell captured the most salient parts of the broad discussion.
Marc Canter covers Powell’s further ruminations on indecency.
At minute 25:08 (and into the Q&A), I ask about the recent FCC crackdown on indecency. I had two pages of questions from Howard Stern (who has no great love for the FCC), and in a burst of recursive irony, I self-censored the indecent ones (like PBS recently). Here are some of the questions from Howard, and I only got to the first one in the interview:
“Aside from Oprah, who else will you NOT fine?”
“What makes the FCC qualified to determine what is indecent?”
“What role should religion play in determining indecency standards?”
The FCC answer points to the number of complaints as the motivation for the crackdown. This sounds like a voting system of “majority rules”…. which seems to run counter to the spirit of the First Amendment and the protection of minority voices.
Monday, August 23, 2004
The coolest thing you learned this year?
In the spirit of lifelong learning, what is the coolest new thing you learned this year?
Last year, I think it was at a dinner with Matt Ridley talking about the inter-gene warfare going on within our bodies, especially between the X and Y sex chromosomes.
For this year, I can’t seem to pick one thing. Conversations with the eponymous Mr. Smart come to mind. Here is an example of his thinking about the limitations of biology as a substrate for developing computational complexity.
Jaron Lanier is also a wonderful thinker, and when he writes for my favorite “interesting ideas” site, it’s a potent combination. He makes an interesting counterpoint: “We're so used to thinking about computers in the same light as was available at the inception of computer science that it's hard to imagine an alternative, but an alternative is available to us all the time in our own bodies.”
Reconciling the two, perhaps biology will drive the future of intelligence and information technology – not literally, but figuratively and metaphorically and primarily through powerful abstractions.
Many of the interesting software challenges relate to growing resilient complex systems or they are inspired by other biological metaphors (e.g., artificial evolution, biomimetics, neural networks for pattern recognition, artificial immunology for virus and spam detection, genetic algorithms, A-life, emergence, IBM’s Autonomic Computing initiative, meshes and sensor nets, hives, and the subsumption architecture in robotics). Tackling the big unsolved problems in info tech will likely turn us to biology – as our muse, and for an existence proof that solutions are possible.
Friday, August 20, 2004
Quantum Computational Equivalence
An interesting comment on "Your Genome is Smaller than Microsoft Office" referenced quantum effects to explain the power of the interpreters of biological code.
I recently heard Wolfram present his notion of "computational equivalence", and I asked about quantum computers because it seemed like a worm hole through his logic… but he seemed to dismiss the possibility of QCs instead.
The abstract summary of my understanding of computational equivalence is that many activities, from thinking to evolution to cellular signaling, can be represented as a computation. A physical experiment and a computation are equivalent. For an iterative system, like a cellular automata, there is no formulaic shortcut for the interesting cases. The simulation is as complex as “running the experiment” and will consume similar computational resources.
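A tiny illustration of the point, using an elementary cellular automaton (Rule 110, a stand-in example rather than anything specific from Wolfram's talk): the only general way to know what the pattern looks like at step 20 is to compute all 20 steps.

```python
# Elementary cellular automaton (Rule 110): the only general way to know the
# pattern at step t is to run all t update steps, i.e. "run the experiment".
def step(cells, rule=110):
    """One synchronous update of a 1-D binary CA with wrap-around boundary."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new.append((rule >> neighborhood) & 1)               # look up the rule bit
    return new

cells = [0] * 40
cells[20] = 1                       # a single seed cell
for t in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```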
Quantum computers can perform accurate simulations of any physical system of comparable complexity. The type of simulation that a quantum computer does results in an exact prediction of how a system will behave in nature — something that is literally impossible for any traditional computer, no matter how powerful. Professor David Deutsch of Oxford summarizes: “Quantum computers have the potential to solve problems that would take a classical computer longer than the age of the universe.”
So I wonder what the existence of quantum computers would say about computational equivalence? How might this “shortcut through time” be employed in the simulation of molecular systems? Does it prove the existence of parallel universes (as Deutsch concludes in Fabric of Reality) that entangle to solve computationally intractable problems? Is there a “quantum computational equivalence” whereby a physical experiment could be a co-processor for a quantum simulation? Is it a New New Kind of Science?
Thursday, August 19, 2004
Morpheus beats the RIAA
A new development: Morpheus just unanimously won their 9th Circuit case. The entertainment industry lawyers were so confident that they would prevail in the case that they did not have a statement ready for this scenario.
The justices actually addressed Congress and urged them not to pass anti-P2P legislation so quickly.
iTunes Licensing Model
I just received a call from one of my favorite musicians. He told me that when Apple sells one of his songs for 99 cents, EMI gets 66 cents and he gets 5 cents.
EMI just ported the business contract of physical distribution (which presumes manufacturing costs, breakage, inventory and other real costs). So the music label unilaterally captured 100% of the upside from moving the business online and shared none of it with the artist.
Having just finished reading Free Culture, I guess I should not be surprised by this habitual behavior. But it seems so old school.
My channel and fulfillment relationship is now with Apple. EMI provides no value to me in this modern context. Yet they take more than 10x what they share with the artist.
Tuesday, August 17, 2004
Your Genome is Smaller than Microsoft Office
How inspirational are the information systems of biology?
If we took your entire genetic code -- the entire biological program that resulted in your cells, organs, body and mind -- and burned it into a CD, it would be smaller than Microsoft Office. Two digital bits can encode for the four DNA bases (A,T,C and G) resulting in a 750MB file that can be compressed for the preponderance of structural filler in the DNA chain. Even with simple Huffman encoding, we should get below the 486MB of my minimal Office 2004 install.
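The back-of-the-envelope arithmetic, as a quick sketch (the roughly three-billion-base figure is the commonly cited haploid genome length, assumed here rather than taken from the post):

```python
# Rough size of a 2-bit-per-base encoding of the haploid human genome.
base_pairs = 3.0e9                 # ~3 billion bases (assumed, commonly cited figure)
bits = base_pairs * 2              # A, T, C, G -> 2 bits per base
megabytes = bits / 8 / 1e6
print(f"raw 2-bit encoding: ~{megabytes:.0f} MB")   # ~750 MB, i.e. CD-scale
# Entropy coding of the repetitive "filler" stretches shrinks this further,
# which is the point of the Office-install comparison above.
```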
If much of the human genome consists of vestigial evolutionary and parasitic remnants that serve no useful purpose, then we could compress it to 60MB of concentrated information.
What does this tell us about Microsoft? About software development? About complex systems development in general?
Sunday, August 08, 2004
Genetic Free Speech
Following the J-Curve from the downer of the prior post, there is much to be excited about.
Earlier this year, I had the wonderful opportunity to co-teach a new interdisciplinary class at Stanford with Prof. Larry Lessig. It was called “Ideas vs. Matter: the Code in Tiny Spaces” and we discussed genetics, nanotechnology and the regulatory ecosystem.
We went in with the presumption that society will likely try to curtail “genetic free speech” as it applies to human germ line engineering, and thereby curtail the evolution of evolvability. Lessig predicts that we will recapitulate the 200-year debate about the First Amendment to the Constitution. Pressures to curtail free genetic expression will focus on the dangers of “bad speech”, and others will argue that good genetic expression will crowd out the bad. Artificial chromosomes (whereby children can decide whether to accept genetic enhancements when they become adults) can decouple the debate about parental control. And, with a touch of irony, China may lead the charge.
Many of us subconsciously cling to the selfish notion that humanity is the endpoint of evolution. In the debates about machine intelligence and genetic enhancements, there is a common and deeply rooted fear about being surpassed – in our lifetime. But, when framed as a question of parenthood (would you want your great grandchild to be smarter and healthier than you?), the emotion often shifts from a selfish sense of supremacy to a universal human search for symbolic immortality.
Tuesday, August 03, 2004
Genetically Modified Pathogen (GMP) Policy
In response to my first post requesting topics of interest, “anonymous” noted that this blog is the top result on a Google search for “IL-4 Smallpox”... a dubious and disturbing honor for what I was hoping to be a content-free blog.
Anon also asked “what do you think of DHS efforts for a realtime bio-sensor network?”
It is possible that with the mobilization of massive logistical resources around the planet, we will prevail over genetically modified and engineered pathogens (GMPs). But I would not bet on it. It would be great to have a sensor network, but with most Health and Human Services offices lacking a basic Internet connection, we have a way to go.
From what I can tell, a crash-program in antiviral development may provide a ray of hope (e.g., HDP-cidofovir and some more evolutionarily robust and broad-spectrum host-based strategies).
Most importantly, from my random walk through government labs, talks with policy planners, CDC folk and DOD Red Team members, I haven’t seen any policy bifurcation for GMPs (for detection and response). I think there should be distinct policy consideration given to GMPs vs. natural pathogens.
The threat from GMPs is much greater, and the strategic response would need special planning. For example, the vaccinations that eradicated smallpox last time around may not be effective for IL-4 modified smallpox, and in-situ quarantine may be needed. “Telecommuting” for many forms of work will need to be pre-enabled, especially remote operation of the public utilities and MAE-East &West and other critical NAP nodes of the Internet.
In evolution, pathogens do not become overly lethal to their host, for that limits their own propagation to a geographically-bound quarantine zone. Evolution may have created 100% lethal pathogens in the past, but those pathogens are now extinct because they killed all of their locally available hosts.
A custom-engineered or modified pathogen may not observe that delicate virus-host balance, nor the slow pace of evolutionary time scales, and could engender extinction level events with a rapidity never before seen on Earth. Given early truncation of the lethality branch (truncating a local maximum), evolution has not experimented with a multivariate global maximum of lethality. The pattern of evolution is small and slow incremental changes where each intermediate genetic state needs to survive for the next improvement to accumulate. Engineered and modified pathogens do not need to follow that pattern.
Sunday, June 13, 2004
Will we comprehend supra-human emergence?
Thinking about complexity, emergence and ants, I went to a lecture by Deborah Gordon, and remain fascinated by the different time scales of learning at each layer of abstraction. For example, the hive will learn lessons (e.g., don’t attack the termites) over long periods of time – longer than the life span of the ants themselves. The hive itself is a locus of learning, not just individual ants.
Can an analogy be drawn to societal memes? Human communication sets the clock rate for the human hive (and the Internet expands the fanout and clock rate). Norms, beliefs, philosophy and various societal behaviors seem to change at a glacial pace, so that we don’t notice them day-to-day (slow clock rate). But when we look back, we think and act very differently as a society than we did in the 50’s.
As I look at the progression of:
Groups : Humans
Flocks : Birds
Hive : Ants
Brain : Neurons
I notice that as the number of nodes grows (as you go down the list), the “intelligence” and hierarchical complexity of the nodes drops, and the “emergent gap” between the node and the collective grows. There’s more value to the network with more nodes (grows ~ as n^2), so it makes sense that the gap is greater. At one end, humans have some understanding of emergent group phenomena and organizational value, and on the other end, a neuron has no model for brain activity.
One question I am wrestling with: does the minimally-sufficient critical mass of nodes needed to generate emergent behavior necessitate a certain incomprehensibility of the emergent properties by the nodal members? Does it follow that the more powerful the emergent properties, the more incomprehensible they must be to their members? So, I guess I am wondering about the "emergent gap" between layers of abstraction, and whether the incomprehensibility across layers is based on complexity (numbers of nodes and connections) AND/OR time scales of operation?
All Issues
Volume 40, 2020
Volume 39, 2019
Volume 38, 2018
Volume 37, 2017
Volume 36, 2016
Volume 35, 2015
Volume 34, 2014
Volume 33, 2013
Volume 32, 2012
Volume 31, 2011
Volume 30, 2011
Volume 29, 2011
Volume 28, 2010
Volume 27, 2010
Volume 26, 2010
Volume 25, 2009
Volume 24, 2009
Volume 23, 2009
Volume 22, 2008
Volume 21, 2008
Volume 20, 2008
Volume 19, 2007
Volume 18, 2007
Volume 17, 2007
Volume 16, 2006
Volume 15, 2006
Volume 14, 2006
Volume 13, 2005
Volume 12, 2005
Volume 11, 2004
Volume 10, 2004
Volume 9, 2003
Volume 8, 2002
Volume 7, 2001
Volume 6, 2000
Volume 5, 1999
Volume 4, 1998
Volume 3, 1997
Volume 2, 1996
Volume 1, 1995
Discrete & Continuous Dynamical Systems - A
May 2014 , Volume 34 , Issue 5
Reaction-diffusion-advection models for the effects and evolution of dispersal
Chris Cosner
2014, 34(5): 1701-1745 doi: 10.3934/dcds.2014.34.1701 +[Abstract](2869) +[PDF](620.4KB)
This review describes reaction-advection-diffusion models for the ecological effects and evolution of dispersal, and mathematical methods for analyzing those models. The topics covered include models for a single species, models for ecological interactions between species, and models for the evolution of dispersal strategies. The models are all set on bounded domains. The mathematical methods include spectral theory, specifically the theory of principal eigenvalues for elliptic operators, maximum principles and comparison theorems, bifurcation theory, and persistence theory.
Elliptic problems with nonlinear terms depending on the gradient and singular on the boundary: Interaction with a Hardy-Leray potential
Boumediene Abdellaoui, Daniela Giachetti, Ireneo Peral and Magdalena Walias
2014, 34(5): 1747-1774 doi: 10.3934/dcds.2014.34.1747 +[Abstract](2113) +[PDF](556.4KB)
In this article we consider the following family of nonlinear elliptic problems,
$-\Delta (u^m) - \lambda \frac{u^m}{|x|^2} = |Du|^q + c f(x). $
We will analyze the interaction between the Hardy-Leray potential and the gradient term getting existence and nonexistence results in bounded domains $\Omega\subset\mathbb{R}^N$, $N\ge 3$, containing the pole of the potential.
Recall that $\Lambda_N = (\frac{N-2}{2})^2$ is the optimal constant in the Hardy-Leray inequality.
1.For $0 < m \le 2$ we prove the existence of a critical exponent $q_+ \le 2$ such that for $q > q_+$, the above equation has no positive distributional solution. If $q < q_+$ we find solutions by using different alternative arguments.
Moreover if $q = q_+ > 1$ we get the following alternative results.
(a) If $m < 2$ and $q=q_+$ there is no solution.
(b) If $m = 2$, then $q_+=2$ for all $\lambda$. We prove that there exists solution if and only if $2\lambda\leq\Lambda_N$ and, moreover, we find infinitely many positive solutions.
2. If $m > 2$ we obtain some partial results on existence and nonexistence.
We emphasize that if $q(\frac{1}{m}-1)<-1$ and $1 < q \le 2$, there exist positive solutions for any $f \in L^1(\Omega)$.
Bistable travelling waves for nonlocal reaction diffusion equations
Matthieu Alfaro, Jérôme Coville and Gaël Raoul
2014, 34(5): 1775-1791 doi: 10.3934/dcds.2014.34.1775 +[Abstract](1860) +[PDF](435.5KB)
We are concerned with travelling wave solutions arising in a reaction diffusion equation with bistable and nonlocal nonlinearity, for which the comparison principle does not hold. Stability of the equilibrium $u\equiv 1$ is not assumed. We construct a travelling wave solution connecting 0 to an unknown steady state, which is "above and away" from the intermediate equilibrium. For focusing kernels we prove that, as expected, the wave connects 0 to 1. Our results also apply readily to the nonlocal ignition case.
Kolmogorov-Sinai entropy via separation properties of order-generated $\sigma$-algebras
Alexandra Antoniouk, Karsten Keller and Sergiy Maksymenko
2014, 34(5): 1793-1809 doi: 10.3934/dcds.2014.34.1793 +[Abstract](1410) +[PDF](432.4KB)
In a recent paper, K. Keller has given a characterization of the Kolmogorov-Sinai entropy of a discrete-time measure-preserving dynamical system on the base of an increasing sequence of special partitions. These partitions are constructed from order relations obtained via a given real-valued random vector, which can be interpreted as a collection of observables on the system and is assumed to separate points of it. In the present paper we relax the separation condition in order to generalize the given characterization of Kolmogorov-Sinai entropy, providing a statement on equivalence of $\sigma$-algebras. On its base we show that in the case that a dynamical system is living on an $m$-dimensional smooth manifold and the underlying measure is Lebesgue absolute continuous, the set of smooth random vectors of dimension $n>m$ with given characterization of Kolmogorov-Sinai entropy is large in a certain sense.
When are the invariant submanifolds of symplectic dynamics Lagrangian?
Marie-Claude Arnaud
2014, 34(5): 1811-1827 doi: 10.3934/dcds.2014.34.1811 +[Abstract](1670) +[PDF](430.0KB)
Let $\mathcal{L}$ be a $D$-dimensional submanifold of a $2D$ dimensional exact symplectic manifold $(M, \omega)$ and let $f: M\rightarrow M$ be a symplectic diffeomorphism. In this article, we deal with the link between the dynamics $f_{|\mathcal{L}}$ restricted to $\mathcal{L}$ and the geometry of $\mathcal{L}$ (is $\mathcal{L}$ Lagrangian, is it smooth, is it a graph … ?).
We prove different kinds of results.
1. for $D=3$, we prove that if $\mathcal{L}$ is a torus that carries some characteristic loop, then either $\mathcal{L}$ is Lagrangian or $f_{|\mathcal{L}}$ cannot be minimal (i.e. all the orbits are dense) with $(f^k_{|\mathcal{L}})$ equilipschitz;
2. for a Tonelli Hamiltonian of $T^*\mathbb{T}^3$, we give an example of an invariant submanifold $\mathcal{L}$ with no conjugate points that is not Lagrangian and such that for every $f:T^*\mathbb{T}^3\rightarrow T^*\mathbb{T}^3$ symplectic, if $f(\mathcal{L})=\mathcal{L}$, then $\mathcal{L}$ is not minimal;
3. with some hypothesis for the restricted dynamics, we prove that some invariant Lipschitz $D$-dimensional submanifolds of Tonelli Hamiltonian flows are in fact Lagrangian, $C^1$ and graphs;
4.we give similar results for $C^1$ submanifolds with weaker dynamical assumptions.
On a functional satisfying a weak Palais-Smale condition
Antonio Azzollini
2014, 34(5): 1829-1840 doi: 10.3934/dcds.2014.34.1829 +[Abstract](1721) +[PDF](386.8KB)
In this paper we study a quasilinear elliptic problem whose functional satisfies a weak version of the well known Palais-Smale condition. An existence result is proved under general assumptions on the nonlinearities.
Lyapunov spectrum for geodesic flows of rank 1 surfaces
Keith Burns and Katrin Gelfert
2014, 34(5): 1841-1872 doi: 10.3934/dcds.2014.34.1841 +[Abstract](2410) +[PDF](893.4KB)
We give estimates on the Hausdorff dimension of the level sets of the Lyapunov exponent for the geodesic flow of a compact rank 1 surface. We show that the level sets of points with small (but non-zero) exponents have full Hausdorff dimension, but carry small topological entropy.
A note on integrable mechanical systems on surfaces
Leo T. Butler
2014, 34(5): 1873-1878 doi: 10.3934/dcds.2014.34.1873 +[Abstract](1586) +[PDF](394.9KB)
Let $\mathfrak{S}$ be a compact, connected surface and $H \in C^2(T^* \mathfrak{S})$ a Tonelli Hamiltonian. This note extends V. V. Kozlov's result on the Euler characteristic of $\mathfrak{S}$ when $H$ is real-analytically integrable, using a definition of topologically-tame integrability called semisimplicity. Theorem: If $H$ is $2$-semisimple, then $\mathfrak{S}$ has non-negative Euler characteristic; if $H$ is $1$-semisimple, then $\mathfrak{S}$ has positive Euler characteristic.
The properties of positive solutions to an integral system involving Wolff potential
Huan Chen and Zhongxue Lü
2014, 34(5): 1879-1904 doi: 10.3934/dcds.2014.34.1879 +[Abstract](1565) +[PDF](482.7KB)
In this paper, we consider the positive solutions of the following weighted integral system involving Wolff potential in $R^{n}$: $$ \left\{ \begin{array}{ll} u(x) = R_1(x)W_{\beta,\gamma}(\frac{v^q}{|y|^{\sigma}})(x), \\ v(x) = R_2(x)W_{\beta,\gamma}(\frac{u^p}{|y|^{\sigma}})(x) \end{array} \right. \qquad (0.1) $$ This system is helpful to understand some nonlinear PDEs and other nonlinear problems. Different from the case of $\sigma=0$, it is difficult to handle the properties of the solutions since there is a singularity at the origin. First, we overcome this difficulty by modifying and refining the new method which was introduced to explore the integrability result established by Ma, Chen and Li, and obtain an optimal integrability. Second, we use the method of moving planes to prove the radial symmetry for the positive solutions of (0.1) when $R_{1}(x)\equiv R_{2}(x)\equiv 1$. Based on these results, by intricate analytical techniques, we obtain the estimate of the decay rates of those solutions near infinity.
Dynamics of Ginzburg-Landau and Gross-Pitaevskii vortices on manifolds
Ko-Shin Chen and Peter Sternberg
2014, 34(5): 1905-1931 doi: 10.3934/dcds.2014.34.1905 +[Abstract](1666) +[PDF](531.9KB)
We consider the dissipative heat flow and conservative Gross-Pitaevskii dynamics associated with the Ginzburg-Landau energy \begin{equation*} E_\varepsilon(u) = \int_{\mathcal M} \frac{|\nabla_g u|^2}{2} + \frac{(1-|u|^2)^2}{4\varepsilon^2} dv_g \end{equation*} posed on a Riemannian $2$-manifold $\mathcal{M}$ endowed with a metric $g$. In the $\varepsilon \to 0$ limit, we show the vortices of the solutions to these two problems evolve according to the gradient flow and Hamiltonian point-vortex flow respectively, associated with the renormalized energy on $\mathcal{M}.$ For the heat flow, we then specialize to the case where $\mathcal{M}=S^2$ and study the limiting system of ODE's and establish an annihilation result. Finally, for the Ginzburg-Landau heat flow on $S^2$, we derive some weighted energy identities.
Period 3 and chaos for unimodal maps
Kaijen Cheng, Kenneth Palmer and Yuh-Jenn Wu
2014, 34(5): 1933-1949 doi: 10.3934/dcds.2014.34.1933 +[Abstract](1978) +[PDF](484.9KB)
In this paper we study unimodal maps on the closed unit interval, which have a stable period 3 orbit and an unstable period 3 orbit, and give conditions under which all points in the open unit interval are either asymptotic to the stable period 3 orbit or land after a finite time on an invariant Cantor set $\Lambda$ on which the dynamics is conjugate to a subshift of finite type and is, in fact, chaotic. For the particular value of $\mu=3.839$, Devaney [3], following ideas of Smale and Williams, shows that the logistic map $f(x)=\mu x(1-x)$ has this property. In this case the stable and unstable period 3 orbits appear when $\mu=\mu_0=1+\sqrt{8}$. We use our theorem to show that the property holds for all values of $\mu>\mu_0$ for which the stable period 3 orbit persists.
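As a quick numerical illustration of this setting (not part of the paper itself), iterating the logistic map at $\mu = 3.839$ shows a typical orbit settling onto the stable period-3 cycle:

```python
# Iterate the logistic map f(x) = mu*x*(1 - x) at mu = 3.839: a typical orbit
# settles onto the stable period-3 cycle (mu_0 = 1 + sqrt(8) ~ 3.8284 < mu).
mu = 3.839
x = 0.3
for _ in range(2000):          # discard the transient
    x = mu * x * (1 - x)
orbit = []
for _ in range(6):             # two passes through the 3-cycle
    x = mu * x * (1 - x)
    orbit.append(round(x, 6))
print(orbit)                   # the three values repeat with period 3
```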
An extended discrete Hardy-Littlewood-Sobolev inequality
Ze Cheng and Congming Li
2014, 34(5): 1951-1959 doi: 10.3934/dcds.2014.34.1951 +[Abstract](1840) +[PDF](350.8KB)
Hardy-Littlewood-Sobolev (HLS) Inequality fails in the ``critical'' case: $\mu=n$. However, for discrete HLS, we can derive a finite form of HLS inequality with logarithm correction for a critical case: $\mu=n$ and $p=q$, by limiting the inequality on a finite domain. The best constant in the inequality and its corresponding solution, the optimizer, are studied. First, we obtain a sharp estimate for the best constant. Then for the optimizer, we prove the uniqueness and a symmetry property. This is achieved by proving that the corresponding Euler-Lagrange equation has a unique nontrivial nonnegative critical point. Also, by using a discrete version of maximum principle, we prove certain monotonicity of this optimizer.
Multi-existence of multi-solitons for the supercritical nonlinear Schrödinger equation in one dimension
Vianney Combet
2014, 34(5): 1961-1993 doi: 10.3934/dcds.2014.34.1961 +[Abstract](1641) +[PDF](584.5KB)
For the $L^2$ supercritical generalized Korteweg-de Vries equation, we proved in [2] the existence and uniqueness of an $N$-parameter family of $N$-solitons. Recall that, for any $N$ given solitons, we call $N$-soliton a solution of the equation which behaves as the sum of these $N$ solitons asymptotically as $t \to +\infty$. In the present paper, we also construct an $N$-parameter family of $N$-solitons for the supercritical nonlinear Schrödinger equation in dimension $1$. Nevertheless, we do not obtain any classification result; but recall that, even in subcritical and critical cases, no general uniqueness result has been proved yet.
Convergence analysis of the vortex blob method for the $b$-equation
Yong Duan and Jian-Guo Liu
2014, 34(5): 1995-2011 doi: 10.3934/dcds.2014.34.1995 +[Abstract](1589) +[PDF](401.7KB)
In this paper, we prove the convergence of the vortex blob method for a family of nonlinear evolutionary partial differential equations (PDEs), the so-called b-equation. This kind of PDEs, including the Camassa-Holm equation and the Degasperis-Procesi equation, has many applications in diverse scientific fields. Our convergence analysis also provides a proof for the existence of the global weak solution to the b-equation when the initial data is a nonnegative Radon measure with compact support.
Analytic skew-products of quadratic polynomials over Misiurewicz-Thurston maps
Rui Gao and Weixiao Shen
2014, 34(5): 2013-2036 doi: 10.3934/dcds.2014.34.2013 +[Abstract](1715) +[PDF](511.0KB)
We consider skew-products of quadratic maps over certain Misiurewicz-Thurston maps and study their statistical properties. We prove that, when the coupling function is a polynomial of odd degree, such a system admits two positive Lyapunov exponents almost everywhere and a unique absolutely continuous invariant probability measure.
Dirichlet $(p,q)$-equations at resonance
Leszek Gasiński and Nikolaos S. Papageorgiou
2014, 34(5): 2037-2060 doi: 10.3934/dcds.2014.34.2037 +[Abstract](1910) +[PDF](498.1KB)
We consider a parametric nonlinear Dirichlet equation driven by the sum of a $p$-Laplacian and a $q$-Laplacian ($1 < q < p < +\infty$, $p \ge 2$) and with a Carathéodory reaction which at $\pm\infty$ is resonant with respect to the principal eigenvalue $\widehat{\lambda}_1(p) > 0$ of $(-\Delta_p, W^{1,p}_0(\Omega))$. Using critical point theory, truncation and comparison techniques and critical groups (Morse theory), we show that for all small values of the parameter $\lambda>0$, the problem has at least five nontrivial solutions, four of constant sign (two positive and two negative) and the fifth nodal (sign-changing).
The Fourier restriction norm method for the Zakharov-Kuznetsov equation
Axel Grünrock and Sebastian Herr
2014, 34(5): 2061-2068 doi: 10.3934/dcds.2014.34.2061 +[Abstract](2384) +[PDF](377.0KB)
The Cauchy problem for the Zakharov-Kuznetsov equation is shown to be locally well-posed in $H^s(\mathbb{R}^2)$ for all $s>\frac{1}{2}$ by using the Fourier restriction norm method and bilinear refinements of Strichartz type inequalities.
A fast blow-up solution and degenerate pinching arising in an anisotropic crystalline motion
Tetsuya Ishiwata and Shigetoshi Yazaki
2014, 34(5): 2069-2090 doi: 10.3934/dcds.2014.34.2069 +[Abstract](1625) +[PDF](464.5KB)
The asymptotic behavior of solutions to an anisotropic crystalline motion is investigated. In this motion, a solution polygon changes the shape by a power of crystalline curvature in its normal direction and develops singularity in a finite time. At the final time, two types of singularity appear: one is a single point-extinction and the other is degenerate pinching. We will discuss the latter case of singularity and show the exact blow-up rate for a fast blow-up or a type II blow-up solution which arises in an equivalent blow-up problem.
On Hamiltonian flows whose orbits are straight lines
Hans Koch and Héctor E. Lomelí
2014, 34(5): 2091-2104 doi: 10.3934/dcds.2014.34.2091 +[Abstract](1420) +[PDF](411.2KB)
We consider real analytic Hamiltonians on $\mathbb{R}^n \times \mathbb{R}^n$ whose flow depends linearly on time. Trivial examples are Hamiltonians $H(q,p)$ that do not depend on the coordinate $q\in \mathbb{R}^n$. By a theorem of Moser [11], every polynomial Hamiltonian of degree $3$ reduces to such a $q$-independent Hamiltonian via a linear symplectic change of variables. We show that such a reduction is impossible, in general, for polynomials of degree $4$ or higher. But we give a condition that implies linear-symplectic conjugacy to another simple class of Hamiltonians. The condition is shown to hold for all nondegenerate Hamiltonians that are homogeneous of degree $4$.
On approximation of an optimal boundary control problem for linear elliptic equation with unbounded coefficients
Peter I. Kogut
2014, 34(5): 2105-2133 doi: 10.3934/dcds.2014.34.2105 +[Abstract](1841) +[PDF](562.7KB)
We study an optimal boundary control problem (OCP) associated to a linear elliptic equation $-\mathrm{div}\,\left(\nabla y+A(x)\nabla y\right)=f$. The characteristic feature of this equation is the fact that the matrix $A(x)=[a_{ij}(x)]_{i,j=1,\dots,N}$ is skew-symmetric, $a_{ij}(x)=-a_{ji}(x)$, measurable, and belongs to $L^2$-space (rather than $L^\infty$). In spite of the fact that equations of this type can exhibit non-uniqueness of weak solutions (namely, they have approximable solutions as well as another type of weak solutions that cannot be obtained through an approximation of the matrix $A$), the corresponding OCP is well-posed and admits a unique solution. At the same time, an optimal solution to such a problem can inherit a singular character of the original matrix $A$. We indicate two types of optimal solutions to the above problem: the so-called variational and non-variational solutions, and show that each of these optimal solutions can be attained by solutions of special optimal boundary control problems.
On the Stokes problem in exterior domains: The maximum modulus theorem
Paolo Maremonti
2014, 34(5): 2135-2171 doi: 10.3934/dcds.2014.34.2135 +[Abstract](1724) +[PDF](665.8KB)
We study the Stokes initial boundary value problem, in $(0,T) \times \Omega$, where $\Omega \subseteq \mathbb{R}^n$, $n\geq3$, is an exterior domain, assuming that the initial data belongs to $L^\infty(\Omega)$ and has null divergence in weak sense. We prove the maximum modulus theorem for the corresponding solutions. Crucial for the proof of this result is the analogous one proved by Abe-Giga for bounded domains. Our proof is developed by duality arguments and employing the semigroup properties of the resolving operator defined on $L^1(\Omega)$. Our results are similar to the ones proved by Solonnikov by means of the potential theory.
Symbolic dynamics for the geodesic flow on two-dimensional hyperbolic good orbifolds
Anke D. Pohl
2014, 34(5): 2173-2241 doi: 10.3934/dcds.2014.34.2173 +[Abstract](2266) +[PDF](743.6KB)
We construct cross sections for the geodesic flow on the orbifolds $\Gamma\backslash\mathbb{H}$ which are tailor-made for the requirements of transfer operator approaches to Maass cusp forms and Selberg zeta functions. Here, $\mathbb{H}$ denotes the hyperbolic plane and $\Gamma$ is a nonuniform geometrically finite Fuchsian group (not necessarily a lattice, not necessarily arithmetic) which satisfies an additional condition of geometric nature. The construction of the cross sections is uniform, geometric, explicit and algorithmic.
Asymptotic behavior of Navier-Stokes-Korteweg with friction in $\mathbb{R}^{3}$
Zhong Tan, Xu Zhang and Huaqiao Wang
2014, 34(5): 2243-2259 doi: 10.3934/dcds.2014.34.2243 +[Abstract](1531) +[PDF](434.6KB)
We consider the compressible barotropic Navier-Stokes-Korteweg system with friction in this paper. The global solutions and optimal convergence rates are obtained by pure energy method provided the initial perturbation around a constant state is small enough. In particular, the decay rates of the higher-order spatial derivatives of the solution are obtained. Our proof is based on a family of scaled energy estimates and interpolations among them without linear decay analysis.
Scattering theory for the wave equation of a Hartree type in three space dimensions
Kimitoshi Tsutaya
2014, 34(5): 2261-2281 doi: 10.3934/dcds.2014.34.2261 +[Abstract](1836) +[PDF](459.2KB)
The paper concerns a scattering problem of the wave equation of a Hartree type with small initial data with fast decay. The equation is \[ \partial_t^2 u - \Delta u = V_1(x)u+ (V_2\ast |u|^{p-1})u , \qquad t\in {\bf R}, \; x \in {\bf R}^3, \] where $p\ge 3, \; V_1(x)=O(|x|^{-\gamma_1})$ with $\gamma_1>0$ as $|x|\to\infty, \; V_2(x) = \pm |x|^{-\gamma_2}$ with $\gamma_2>0$. We prove the existence of scattering operators under almost optimal conditions on the potentials and initial data in terms of decay, using pointwise estimates. Our result generalizes the one by [14, 15] for the case $p=3$.
Weighted Green functions of polynomial skew products on $\mathbb{C}^2$
Kohei Ueno
2014, 34(5): 2283-2305 doi: 10.3934/dcds.2014.34.2283 +[Abstract](1378) +[PDF](443.3KB)
We study the dynamics of polynomial skew products on $\mathbb{C}^2$. By using suitable weights, we prove the existence of several types of Green functions. Largely, continuity and plurisubharmonicity follow. Moreover, it relates to the dynamics of the rational extensions to weighted projective spaces.
Almost every interval translation map of three intervals is finite type
Denis Volk
2014, 34(5): 2307-2314 doi: 10.3934/dcds.2014.34.2307 +[Abstract](1589) +[PDF](356.9KB)
Interval translation maps (ITMs) are a non-invertible generalization of interval exchange transformations (IETs). The dynamics of finite type ITMs is similar to IETs, while infinite type ITMs are known to exhibit new interesting effects. In this paper, we prove the finiteness conjecture for the ITMs of three intervals. Namely, the subset of ITMs of finite type contains an open, dense, and full Lebesgue measure subset of the space of ITMs of three intervals. For this, we show that any ITM of three intervals can be reduced either to a rotation or to a double rotation.
Dimension estimates for arbitrary subsets of limit sets of a Markov construction and related multifractal analysis
Juan Wang, Xiaodan Zhang and Yun Zhao
2014, 34(5): 2315-2332 doi: 10.3934/dcds.2014.34.2315 +[Abstract](1757) +[PDF](438.1KB)
Without constructing any measure and using properties of the Markov partition, this paper provides a direct proof of dimension estimates for any subset of a limit set of a Markov construction. Furthermore, this paper investigates the dimensions of asymptotically conformal repellers. The dimension spectrum of the level sets of nonadditive potentials on asymptotically conformal repellers is also obtained.
Solutions with clustered bubbles and a boundary layer of an elliptic problem
Liping Wang and Chunyi Zhao
2014, 34(5): 2333-2357 doi: 10.3934/dcds.2014.34.2333 +[Abstract](1724) +[PDF](475.6KB)
We study positive solutions of the equation $\varepsilon^2 \Delta u - u + u^\frac{n+2}{n-2} = 0$, where $\varepsilon >0$ is small, with Neumann boundary condition in a unit ball $B\subset\mathbb R^3$. We prove the existence of solutions with multiple interior bubbles near the center and a boundary layer. The method may also be applied to the cases $n=4$, $5$ to get analogous results.
Vortex structures for Klein-Gordon equation with Ginzburg-Landau nonlinearity
Jun Yang
2014, 34(5): 2359-2388 doi: 10.3934/dcds.2014.34.2359 +[Abstract](1923) +[PDF](516.2KB)
By a perturbation approach, we construct traveling solitary solutions with various vortex structures (vortex pairs, vortex rings) for the Klein-Gordon equation with Ginzburg-Landau nonlinearities.
On the well-posedness of Maxwell-Chern-Simons-Higgs system in the Lorenz gauge
Jianjun Yuan
2014, 34(5): 2389-2403 doi: 10.3934/dcds.2014.34.2389 +[Abstract](1603) +[PDF](402.1KB)
In this paper, we investigate the well-posedness of the Maxwell-Chern-Simons-Higgs system in the Lorenz gauge. In particular, we prove that the system is globally wellposed in the energy space. As an application, we prove that the solution of the Maxwell-Chern-Simons-Higgs system converges to that of Maxwell-Higgs system in $H^s\times H^{s-1}$($s\geq1$) as the Chern-Simons coupling constant $\kappa\rightarrow0$.
Wave speed analysis of traveling wave fronts in delayed synaptically coupled neuronal networks
Linghai Zhang
2014, 34(5): 2405-2450 doi: 10.3934/dcds.2014.34.2405 +[Abstract](1839) +[PDF](583.2KB)
Consider the following nonlinear scalar integral differential equation arising from delayed synaptically coupled neuronal networks \begin{eqnarray*} \frac{\partial u}{\partial t}+f(u) &=&\alpha\int^{\infty}_0\xi(c)\left[\int_{\mathbb R}K(x-y)H\left(u\left(y,t-\frac1c|x-y|\right)-\theta\right){\rm d}y\right]{\rm d}c\\ &+&\beta\int^{\infty}_0\eta(\tau)\left[\int_{\mathbb R}W(x-y)H(u(y,t-\tau)-\Theta){\rm d}y\right]{\rm d}\tau. \end{eqnarray*} This model equation generalizes many important nonlinear scalar integral differential equations arising from synaptically coupled neuronal networks. The kernel functions $K$ and $W$ represent synaptic couplings between neurons in synaptically coupled neuronal networks. The synaptic couplings can be very general, including not only pure excitations (modeled with nonnegative kernel functions), but also lateral inhibitions (modeled with Mexican hat kernel functions) and lateral excitations (modeled with upside down Mexican hat kernel functions). In this nonlinear scalar integral differential equation, $u=u(x,t)$ stands for the membrane potential of a neuron at position $x$ and time $t$. The integrals represent nonlocal spatio-temporal interactions between neurons.
We have accomplished the existence and stability of three traveling wave fronts $u(x,t)=U_k(x+\mu_kt)$ of the nonlinear scalar integral differential equation in an earlier work [42], where $\mu_k$ denotes the wave speed and $z=x+\mu_kt$ denotes the moving coordinate, $k=1,2,3$. In this paper, we will investigate how the neurobiological mechanisms represented by the synaptic couplings $(K,W)$, by the probability density functions $(\xi,\eta)$, by the synaptic rate constants $(\alpha,\beta)$ and by the firing thresholds $(\theta,\Theta)$ influence the wave speeds $\mu_k$ of the traveling wave fronts. We will define several speed index functions and use rigorous mathematical analysis to investigate the influence of the neurobiological mechanisms on the wave speeds. In particular, we will compare wave speeds of the traveling wave fronts of the nonlinear scalar integral differential equation with different synaptic couplings and with different probability density functions; we will accomplish new asymptotic behaviors of the wave speeds; we will compare wave speeds of traveling wave fronts of many reduced forms of nonlinear scalar integral differential equations of the above model equation; we will establish new estimates of the wave speeds. All these will greatly improve results obtained in previous work [38], [40] and [41].
CRC 1173
Project B3 • Frequency combs
Principal investigators
Prof. Dr. Tobias Jahnke (7/2015 - )
Prof. Dr. Christian Koos (7/2015 - )
Prof. Dr. Wolfgang Reichel (7/2015 - )
Project summary
A frequency comb is an optical signal consisting of a multitude of equidistantly spaced optical frequencies. Since their discovery by Theodor Hänsch, who was awarded the Nobel Prize in 2005, their tremendous applicability has been demonstrated in, e.g., optical metrology, frequency metrology and optical communications. In this project, we aim to find a mathematical and electrotechnical framework for the generation of frequency combs with several key characteristics.
Particularly interesting comb sources are nonlinear Kerr microresonators, cf. Figure 1. These chip-scale devices allow for the generation of frequency combs in setups with a footprint comparable to a matchbox, enabling their use in small and mobile objects such as autonomous cars, servers in data-centres and handhelds. In proof-of-principle experiments, we demonstrated the application of microresonator frequency combs in optical communications at data transmission rates of more than 50 Tbits/s through a single optical fiber, [ECOC16, Nature17]. Furthermore, we performed optical distance measurements and achieved a sub-µm resolution at record sampling rates of 100 MHz [CLEO17, Science18].
Schematic of system for frequency comb generation
Figure 1. Schematic of system for frequency comb generation. A tunable continuous wave (cw) laser emits light at a frequency \(\omega\), which is amplified by an optical amplifier. Then, the light is coupled into a microresonator on a microchip. By tuning the frequency of the laser, a frequency comb emerges in the microresonator, which can then be coupled out of the chip again for further use.
To enable this technology for widespread use, key characteristics such as energy efficiency and potential for mass production need to be investigated. We address these challenges in an interdisciplinary team of mathematicians and engineers by phrasing practical demands and answering them with mathematical techniques. We focus on the optical power conversion efficiency between the driving optical field inside the resonator and the emerging frequency comb. Furthermore, we explore the potential of different material platforms. Here, silicon in particular is of interest due to its widespread use in the semiconductor industry.
From the mathematical point of view, the field inside the resonator is approximated by a complex-valued function \(a(t,x)\), which depends on time \(t\) and intracavity position \(x\), and satisfies the Lugiato–Lefever equation (LLE) \[\frac{\partial a(t,x)}{\partial t}=\left[-1-\text{i}\zeta+\text{i}d\frac{\partial^2}{\partial x^2}+\text{i}|a(t,x)|^2\right]a(t,x)+f, \quad a(t,x+2\pi)=a(t,x)\] The LLE is a damped and forced nonlinear Schrödinger equation with parameters given by forcing \(f\) (amplitude of the driving field), detuning \(\zeta\) (relative offset between the frequency of the driving field and a resonance frequency of the microresonator) and second-order dispersion \(d\). When pumping the microresonator with an incoming single-frequency field, the Kerr-nonlinear term in the LLE gives rise to new spectral components, resulting in a frequency comb. Of most interest are soliton frequency comb states. These are stationary solutions that are strongly localized in space and feature a broad spectrum in the frequency domain, cf. Figure 2.
Figure 2. Soliton solution and corresponding frequency comb for \(d=0.01,\,\zeta=80\) and \(f=23\).
In order to find soliton states analytically, several approaches for the stationary equation were used. The first approach is continuation for \(d>0\). We consider a reformulated version of the equation \[-du''+(\zeta-\epsilon\text{i})u-|u|^2u+\text{i}f=0\] on the real line. For \(\epsilon=f=0\) the soliton solution can be calculated explicitly, and then analytically continued into the regime where \(\epsilon>0\) and \(f>0\). This approach can also be used numerically, as these solutions serve as approximations on a bounded interval. Scaling them back leads to a map of parameters where we can expect soliton solutions.
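A minimal sketch of such a numerical continuation (illustrative discretization and parameter values, not the project's actual code): start from the explicit real soliton \(u(x)=\sqrt{2\zeta}\,\operatorname{sech}(\sqrt{\zeta/d}\,x)\) of the case \(\epsilon=f=0\) and continue it step by step in \((\epsilon,f)\) on a truncated interval with a standard nonlinear solver.

```python
# Numerical continuation for the reformulated stationary equation
#   -d*u'' + (zeta - i*eps)*u - |u|^2*u + i*f = 0   on a truncated interval,
# starting from the explicit soliton of the case eps = f = 0.
# Interval length, grid, parameters and continuation steps are illustrative
# choices; staying on the soliton branch may require smaller steps.
import numpy as np
from scipy.optimize import fsolve

d, zeta = 1.0, 2.0
L, n = 15.0, 301
x = np.linspace(-L, L, n)
h = x[1] - x[0]

def residual(w, eps, f):
    u = w[:n] + 1j * w[n:]
    upp = np.zeros(n, dtype=complex)
    upp[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2     # second difference
    F = -d * upp + (zeta - 1j * eps) * u - np.abs(u)**2 * u + 1j * f
    return np.concatenate([F.real, F.imag])

# explicit soliton of -d*u'' + zeta*u - |u|^2*u = 0 as starting guess
u0 = np.sqrt(2 * zeta) / np.cosh(np.sqrt(zeta / d) * x)
w = np.concatenate([u0, np.zeros(n)])

# continue step by step from (eps, f) = (0, 0) towards (1, 2)
for eps, f in zip(np.linspace(0.0, 1.0, 11), np.linspace(0.0, 2.0, 11)):
    w = fsolve(residual, w, args=(eps, f))

u = w[:n] + 1j * w[n:]
print("peak |u| of the continued solution:", np.abs(u).max())
```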
The second approach addresses bifurcation with respect to the parameter \(\zeta\), cf. [MR17], and preprint [ECOC16]. At certain values, non-trivial solutions bifurcate from the trivial curve, cf. Figure 3.
Figure 3. Bifurcation diagrams for \(d=0.1\) and \(f=2\) (left) and for \(d=-0.1\) and \(f=2\) (right).
In our investigations, we observe the following heuristic:
For \(d>0\) the most localized solitons are at the first turning point of the last bifurcating branch (Label A).
For \(d<0\) the most localized solitons are at the second turning point of the first bifurcating branch (Label B).
Using this heuristic, we can identify parameter regimes where the most localized soliton solutions are found. Defining quality measures such as combwidth and power conversion efficiency, this enables us to determine a universal conversion efficiency map, cf. Figure 4, which is essential for designing microresonators embedded in integrated photonic systems, cf. [GTMKJR19].
Figure 4. Combwidth and power conversion efficiency for \(d>0\) and \(d<0\).
Within our project we also consider numerical time integration of the LLE via a Strang splitting between its nonlinear and dispersive parts; cf. [JMS17]. One of our observations was that the time integration follows the bifurcation branches when the detuning is swept over time.
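A minimal sketch of one such splitting step (an illustrative discretization; grouping damping, detuning and forcing with the dispersive part is a choice made here so that both substeps can be solved exactly, and is not necessarily the scheme used in [JMS17]):

```python
# Strang splitting for the LLE on x in [0, 2*pi): the linear part (damping,
# detuning, dispersion, constant forcing) is propagated exactly in Fourier
# space; the Kerr part i*|a|^2*a is solved exactly pointwise since it
# conserves |a|. Grid, step size and parameters are illustrative choices.
import numpy as np

N = 512
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, 1.0 / N)            # integer wavenumbers
d, zeta, f = 0.01, 3.0, 2.0
dt, steps = 1e-3, 50_000

lam = -1.0 - 1j * zeta - 1j * d * k**2    # symbol of the linear part
Elin = np.exp(lam * dt)
f_hat = np.fft.fft(f * np.ones(N))        # constant forcing in Fourier space

def linear_step(a):
    a_hat = np.fft.fft(a)
    a_hat = Elin * a_hat + (f_hat / lam) * (Elin - 1.0)   # variation of constants
    return np.fft.ifft(a_hat)

def kerr_half_step(a):
    return a * np.exp(1j * np.abs(a)**2 * dt / 2)         # |a| is conserved here

# rough background plus a small bump as initial field
a = f / (1.0 + 1j * zeta) + 0.01 * np.exp(-((x - np.pi)**2) / 0.1)
for _ in range(steps):
    a = kerr_half_step(linear_step(kerr_half_step(a)))
print("max |a| at final time:", np.abs(a).max())
```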
Recently ([GMR18]), we started to discuss the effects of two-photon absorption and free-carrier absorption, which play an important role when considering, e.g., silicon-based microresonator devices. Kerr frequency combs are amazing objects, with great applications and wonderful mathematical features. A lot is still to be discovered.
1. , , , , , and . Bandwidth and conversion efficiency analysis of dissipative Kerr soliton frequency combs based on bifurcation theory. Phys. Rev. A, 100(3):033819, September . URL [preprint] [bibtex]
2. . Global secondary bifurcation, symmetry breaking and period-doubling. Topol. Methods Nonlinear Anal., 53(2):779–800, June . URL [preprint] [bibtex]
3. , , , , , , , , , , , , and . Ultrafast optical ranging using microresonator soliton frequency combs. Science, 359(6378):887–891, February . URL [bibtex]
4. , , and . Strang splitting for a semilinear Schrödinger equation with damping and forcing. J. Math. Anal. Appl., 455(2):1051–1071, November . URL [preprint] [bibtex]
5. , , , , , , , , , , , , , , and . Microresonator-based solitons for massively parallel coherent optical communications. Nature, 546:274–279, June . URL [bibtex]
6. , , , , , , , , , , , and . Ultrafast dual-comb distance metrology using dissipative Kerr solitons. In Conference on Lasers and Electro-Optics, STh4L.6, May . Optical Society of America. [bibtex]
7. and . A priori bounds and global bifurcation results for frequency combs modeled by the Lugiato-Lefever equation. SIAM J. Appl. Math., 77(1):315–345, February . URL [preprint] [bibtex]
8. , , , , , , , , , and . 34.6 Tbit/s WDM transmission using soliton Kerr frequency combs as optical source and local oscillator. In European Conference on Optical Communication (ECOC), pages 415–417, December . [bibtex]
9. , , , , , , , , , , , , , and . 50 Tbit/s massively parallel WDM transmission in C and L band using interleaved cavity-soliton Kerr combs. In Conference on Lasers and Electro-Optics, STu1G.1, June . [preprint] [bibtex]
1. , , and . The Lugiato–Lefever equation with nonlinear damping caused by two photon absorption. CRC 1173 Preprint 2018/44, Karlsruhe Institute of Technology, November . [bibtex]
2. and . Stochastic Galerkin-collocation splitting for PDEs with random parameters. CRC 1173 Preprint 2018/28, Karlsruhe Institute of Technology, October . [bibtex]
1. . Continuation and bifurcation of frequency combs modeled by the Lugiato–Lefever equation. PhD thesis, Karlsruhe Institute of Technology (KIT), February . [bibtex]
2. . Terabit-rate transmission using optical frequency comb sources. PhD thesis, Karlsruhe Institute of Technology (KIT), July . [bibtex]
Former staff
Name Title Function
Dipl.-Phys. Doctoral researcher
Dr. Doctoral researcher
M.Sc. Doctoral researcher
Wiener sausage
From Encyclopedia of Mathematics
Let $\beta = (\beta(t))_{t \ge 0}$ be the standard Brownian motion in $\mathbb{R}^d$, $d \ge 1$ (i.e. the Markov process with generator $\Delta/2$) starting at $0$. Let $P_0$, $E_0$ denote its probability law and expectation on path space. The Wiener sausage with radius $a > 0$ is the process defined by
$$W^a(t) = \bigcup_{0 \le s \le t} B_a(\beta(s)), \qquad t \ge 0,$$
where $B_a(x)$ is the open ball with radius $a$ around $x$.
The Wiener sausage is an important mathematical object, because it is one of the simplest examples of a non-Markovian functional of Brownian motion. It plays a key role in the study of various stochastic phenomena, such as heat conduction and trapping in random media, as well as in the analysis of spectral properties of random Schrödinger operators (cf. also Schrödinger equation).
A lot is known about the behaviour of the volume $|W^a(t)|$ of the Wiener sausage as $t \to \infty$. For instance,
$$E_0|W^a(t)| \sim \kappa_a t \qquad (d \ge 3),$$
with $\kappa_a$ the Newtonian capacity of $B_a(0)$ associated with the Green's function of $\Delta/2$ (cf. also Green function; Capacity), and
$$E_0|W^a(t)| \sim \frac{2\pi t}{\log t} \qquad (d = 2)$$
([a8], [a6]). Moreover, $|W^a(t)|$ satisfies the strong law of large numbers and the central limit theorem for $d \ge 2$; the limit law is Gaussian for $d \ge 3$ and non-Gaussian for $d = 2$ ([a7]). Note that for $d \ge 3$ the Wiener sausage is a sparse object: since the Brownian motion typically travels a distance $\sqrt{t}$ in each direction, the last two displays show that most of the space in the convex hull of $W^a(t)$ is not covered.
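A simple Monte Carlo sketch (not part of this article) makes the leading-order growth for $d=3$ visible: discretize the Brownian path, mark every cell of a cubic grid within distance $a$ of a path point, and compare the covered volume with $\kappa_a t$, which for $d=3$ equals $2\pi a t$; at moderate $t$ the lower-order corrections are still clearly visible.

```python
# Monte Carlo estimate of |W^a(t)| in d = 3 (illustrative, not from this
# article): discretize the Brownian path, mark every grid cell within
# distance a of a path point, and compare with the leading term 2*pi*a*t.
# The estimate sits above the leading term because of finite-t corrections.
import numpy as np

rng = np.random.default_rng(0)
a, t_max, dt, h = 0.5, 10.0, 0.002, 0.1     # radius, horizon, time step, grid spacing
steps = int(t_max / dt)

# grid-cell offsets within distance a of a point, precomputed once
r = int(np.ceil(a / h)) + 1
offs = np.array([(i, j, k) for i in range(-r, r + 1)
                           for j in range(-r, r + 1)
                           for k in range(-r, r + 1)
                 if (i * i + j * j + k * k) * h * h <= a * a])

pos = np.zeros(3)
covered = set()
for _ in range(steps):
    pos = pos + np.sqrt(dt) * rng.standard_normal(3)     # BM with generator Delta/2
    base = np.floor(pos / h).astype(int)
    for cell in base + offs:
        covered.add(tuple(cell))

volume = len(covered) * h**3
print(f"estimated |W^a(t)| = {volume:.1f},  2*pi*a*t = {2*np.pi*a*t_max:.1f}")
```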
The large deviation behaviour of in the downward direction has been studied in [a5], [a4] and [a9]. For the outcome, proved in successive stages of refinement, reads as follows:
for any satisfying and
where is the smallest Dirichlet eigenvalue of on the ball with unit volume. The optimal strategy for the Brownian motion to realize the large deviation is to stay inside a ball with volume until time , i.e., the Wiener sausage covers this ball entirely and nothing outside. This comes from the Faber–Krahn isoperimetric inequality (cf. also Rayleigh–Faber–Krahn inequality), and the cost of staying inside the ball is
to leading order. Note that, apparently, a large deviation below the scale of the mean "squeezes all the empty space out of the Wiener sausage" .
The above analysis of the large deviation behaviour has recently been extended to cover the moderate deviation behaviour. It is proved in [a2] that for $d \ge 3$ and $b \in (0, \kappa_a)$,
$$\lim_{t \to \infty} \frac{1}{t^{(d-2)/d}} \log P_0\big(|W^a(t)| \le b t\big) = -I^{\kappa_a}(b), \tag{a1}$$
and a variational representation is derived for the rate function $I^{\kappa_a}$. The optimal strategy for the Brownian motion to realize the moderate deviation is such that the Wiener sausage "looks like a Swiss cheese": $W^a(t)$ has random holes whose sizes are of order $1$ and whose density varies on scale $t^{1/d}$. This is markedly different from the optimal strategy behind the large deviation. Note that, apparently, a moderate deviation on the scale of the mean "does not squeeze all the empty space out of the Wiener sausage". (a1) has also been extended to $d = 2$.
It turns out that the rate function exhibits rich behaviour as a function of the dimension. In particular, for it is non-analytic at a certain critical value inside , which is associated with a collapse transition in the optimal strategy.
Finally, the moderate and large deviations of in the upward direction are a complicated issue. Here the optimal strategy is entirely different from the previous ones, because the Wiener sausage tries to expand rather than to contract. Partial results have been obtained in [a3] [a1], and [a11].
More background can be found in [a10].
[a1] M. van den Berg, E. Bolthausen, "Asymptotics of the generating function for the volume of the Wiener sausage" Probab. Th. Rel. Fields , 99 (1994) pp. 389–397
[a2] M. van den Berg, E. Bolthausen, F. den Hollander, "Moderate deviations for the volume of the Wiener sausage" Ann. of Math. (to appear in 2001)
[a3] M. van den Berg, B. Tóth, "Exponential estimates for the Wiener sausage" Probab. Th. Rel. Fields , 88 (1991) pp. 249–259
[a4] E. Bolthausen, "On the volume of the Wiener sausage" Ann. Probab. , 18 (1990) pp. 1576–1582
[a5] M.D. Donsker, S.R.S. Varadhan, "Asymptotics for the Wiener sausage" Commun. Pure Appl. Math. , 28 (1975) pp. 525–565
[a6] J.-F. Le Gall, "Sur une conjecture de M. Kac" Probab. Th. Rel. Fields , 78 (1988) pp. 389–402
[a7] J.-F. Le Gall, "Fluctuation results for the Wiener sausage" Ann. Probab. , 16 (1988) pp. 991–1018
[a8] F. Spitzer, "Electrostatic capacity, heat flow and Brownian motion" Z. Wahrsch. Verw. Gebiete , 3 (1964) pp. 110–121
[a9] A.-S. Sznitman, "Long time asymptotics for the shrinking Wiener sausage" Commun. Pure Appl. Math. , 43 (1990) pp. 809–820
[a10] A.-S. Sznitman, "Brownian motion, obstacles and random media" , Springer (1998)
[a11] Y. Hamana, H. Kesten, "A large deviation result for the range of random walk and for the Wiener sausage" preprint March (2000)
How to Cite This Entry:
Wiener sausage. F. den Hollander (originator), Encyclopedia of Mathematics. URL:
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
The Many Worlds Theory Today
• Posted 10.21.08
• NOVA
Half a century has passed since 27-year-old Hugh Everett III published a version of his Princeton Ph.D. dissertation in a leading physics journal, introducing the scientific world to his radical theory of parallel universes. In what ways did the theory break from existing theories of the day? How has it fared in those five decades, and where does it stand today in the physics community? Journalist Peter Byrne, author of the biography The Many Worlds of Hugh Everett III, answers these and other questions in this interview.
Granting permission
NOVA: What is the Many Worlds theory?
Byrne: The Many Worlds theory basically gives physicists permission to think of the entire universe as quantum mechanical. That's what Wojciech Zurek, a renowned quantum physicist at Los Alamos lab, told me. And that is a real break with how physicists had been thinking of the universe ever since quantum theory got started in 1900. For about 50 years, until Hugh Everett came along, physicists divided the universe into two different worlds. One was the indeterministic microscopic world, where elementary particles fly around. Two was the deterministic macroscopic world, which is the world of our experience, where objects are large, where cause and effect are linked; in physics, this is called the "classical" world.
The quantum world builds the classical world. Everything in the classical, macroscopic world is composed of microscopic particles acting in unison. But in quantum physics, for 50 years, the only way that physicists could interpret or think about the work they were doing was to say that anything that happens in the microscopic world only has meaning in terms of how it is looked at as a large object. We cannot even talk about what happens in the microscopic world, because it is so indeterministic that we can never lay our fingers on what is actually going on.
Everett broke with that. He said—and he wrote the mathematics to back it up—that we can look at the entire universe as quantum-mechanical. We do not have to have an arbitrary division between the classical and the quantum, an arbitrary division that exists because people had no other way to explain the results they were getting. For many years before Everett introduced his theory, people had thought about this problem, but Everett was the first to propose a logically consistent way of removing the barrier.
Everett's argument that the universe is quantum-mechanical is logically indisputable in and of itself. It's an "interpretation," which is a fundamentally different animal than what physicists call formalism. Formalisms are mathematical devices that show you how to operate experiments but that do not need to pop up any kinds of meanings beyond simply If you do A, you get B. Interpretation tries to tell you why If you do A, you get B. Everett was telling you why macroscopic large objects emerged from this microscopic quantum world.
How did he do that?
Well, the device he used to do that—and I don't want to get too technical here, but it's hard not to use this term—is a universal wave function. A wave function is basically just a mathematical list of every possible configuration of a quantum object, like a hydrogen atom. A universal wave function lists every possible configuration of every single elementary particle in the universe. And there are a lot, so you can't actually write it down! The way you symbolize a wave function is with the Greek letter psi. It's kind of like a U with a stake going down through the middle. The first time he ever saw this symbol, Everett's son Mark said, "What's that little devil's pitchfork?" It does look like a little devil's pitchfork.
So Everett came up with this universal wave function, which is just [Austrian physicist Erwin] Schrödinger's equation for describing how elementary particles move around writ large—that is, applied to the whole universe! And it makes beautiful mathematical and logical sense. Actually, it's very much in use in physics today. However, it has consequences that people were and remain uneasy with, which basically is that everything that is possible happens. This assertion, which Everett backed up mathematically, solves, according to him and his supporters, the so-called measurement problem, which is kind of the "dirty little secret" that has been afflicting quantum mechanics since it was invented and then formalized in the 1920s.
Dirty little secret
What is the measurement problem?
It has to do with the only other technical term I'll use if I can get away with it—the concept of superposition. Think of a superimposed photograph, a piece of photographic film that has been exposed several times so you have overlapping images. That's an analogy for superposition in quantum mechanics. If you think about an elementary particle like an electron, before you look at it, it could be at any number of positions in the device that you've got it trapped in.
Now, the wave function that describes all of the positions that that electron can be at does not say that any one of those positions is more or less real than any other position. They have an equal reality. That is, before you actually measure the electron, it could show up in any of the places the wave function allows it to. When you measure it, though—when you interact with it, when you observe it—it takes one position. We see only one position. In quantum mechanics, however, there's only a probability that it will have a certain position, so if you measure it again in exactly the same way, you might get another position.
Say the first time you measure it, the quantum-mechanical formulas pop up a 30 percent chance that the particle is at position X. The next time you do the measurement, exactly the same way, you might get a 70 percent chance that the particle is at position Y. That means that there's a 100 percent chance that it is at either X or Y. Not only is it a chance, but according to quantum mechanics, it actually is at both X and Y before you measure it. But when you measure it, it's at either X or Y, and if you measure it a million times, 30 percent of the time it's going to be at X and 70 percent of the time it's going to be at Y. This is the measurement problem nobody has ever been able to explain.
The dirty little secret.
Right. Many, including Everett, have claimed, however, that they could explain why you only get one result in our classical world when you interact with a superposed quantum object. But quantum systems composing even a gram's worth of carbon contain trillions of atoms that are quantum-mechanical objects all interacting with one another, so it's just mind-boggling to think of how many possible configurations a gram of carbon could be in.
"Mind-boggling" sounds like an understatement.
It is. If you consider the universe as a whole, and you use Everett's universal wave function to list every single possible position that every single particle in the universe could possibly have in all of time, you have this huge superposition that describes the entire history of the universe.
Now, according to Everett, you have superpositions in macroscopic objects as well as microscopic, like a gram of carbon or the tip of your pencil. Those classical objects are composed of microscopic systems that are all in superpositions, but we don't see 100 million positions for a gram of carbon or the tip of your pencil, we only see one.
The founders of quantum theory, people like Niels Bohr and Werner Heisenberg and Paul Dirac and John von Neumann and others, were faced with this problem back in the 1920s. It was inexplicable using any kind of formula that they could come up with, so they postulated—that is to say, they decided arbitrarily—that what happens is that the wave function "collapses."
Becoming jellyfish
This is the part of the Copenhagen Interpretation that Everett had a problem with, this so-called "collapse" of the wave function from the many to the one?
Right. The Copenhagen Interpretation basically says "Don't ask, don't tell." It says, "We don't know why, so we're going to do this: We're going to say that what happens is that the wave function loses all of those other possibilities and collapses into only one possibility." They couldn't prove it; it remains a postulate. It works to actually describe the universe that we see, and it works to consider that the wave function is collapsing when you're building quantum-mechanical devices. But it doesn't make any philosophical or logical or interpretive sense.
Everett, who was around Bohr in 1954 and talking to other people around Bohr—as well as to his own mentor, the famous physicist John Wheeler, who died this summer after a long, glorious career in physics—Everett looked at this and likely thought back to something Schrödinger had said a few years before in Dublin. Schrödinger had said physicists fear that if we don't have the collapse, "We should find our surroundings rapidly turning into a quagmire, or sort of featureless jelly or plasma, all contours becoming blurred, we ourselves probably becoming jellyfish."
What he meant by that was that, if we don't collapse it, then all these possibilities are going to start propagating, and there won't be any cause or effect anymore that we can follow from one event to another, and our selves, our physical beings, will start to become duplicated, and every possible position that a human body can be in will suddenly exist in classical reality.
Schrödinger looked at that and said "Ouch," precisely, and dropped that idea and went off into mystical explanations that didn't make any sense whatsoever.
Is this where philosophers, who you've said have given a lot of thought to Everett's theory in recent years, come in?
I'll get to them in a moment, but it's important to realize that Everett, like all good physicists, did not give theories of consciousness any magical powers in quantum mechanics. Because of the intractability of the measurement problem and several other similar paradoxes in quantum mechanics, some people, especially philosophers, have been attracted to the idea that human consciousness collapses the wave function. That human consciousness is the major actor in the universe, and that without human consciousness, the universe would not exist. Physicists like Everett who are materialist and realist thought that was bunk. They think human consciousness is a quantum-mechanical system like any other quantum-mechanical system. Personally, I agree with that.
Okay. Go on.
So Everett, coming from this point of view in which consciousness was not "privileged," in which he had a paradox that had a mathematical formula backing it up that was not solved by the prevailing notion of a collapse of the wave function, started to calculate using information theory, which had just been invented in the post-war years by Norbert Wiener, Claude Shannon, and some others. We say we live in the Information Age. Well, it was pretty much born in about 1948, when Shannon and Wiener and others put forth some remarkable theories that said that information has a physical reality independent of any kind of meaning that you might want to give it. And on the basis of that analysis—that information is physical—all modern technology has come into being.
And Everett knew about this work?
Everett was very up on cutting-edge ideas like that when he was writing his thesis in 1954, 1955. He took the basic analysis, that information is physical, and developed a mathematical argument showing how data correlates within itself. That is, what happens in a superposition is that the person looking at a gram of carbon that exists in a superposition of a billion different places at once does not collapse the wave function. The Schrödinger equation never ends, including in the classical world.
In order to demonstrate the consequence of this mathematically, Everett came up with a solution showing that the observer, the human being, correlates with every possible state that that gram of carbon, that pencil tip, could be in. So before the human being looks at the gram of carbon, the carbon is in all the millions or billions or trillions of possible states, and after the human looks at the gram of carbon, he or she is in one state. In Everett's theory, what happens in between, as it were, when the human actually looks at the carbon—or a clock or any other object—is that he or she splits like an amoeba. (The act of looking, that interaction, is just exchanging energy. A person looking at a clock, for example, is an energetic interaction, with photons of light bouncing off the clock and going into the person's eye.)
So, in Everett's view, when the human correlates herself—that is, interacts, exchanging energy with the gram of carbon or a clock or whatever—she splits like an amoeba. She splits into copies of herself, one for each element in the superposition. [See Everett's draft of his never-published amoeba analogy.]
And this split is what creates the "many worlds" of his theory?
Yes. And wild as it sounds—a person splitting into numerous copies of herself—Everett's theory has not been shown to be mathematically incorrect. God knows, people have tried. They have found some mathematical gaps, but you can't fault his basic mathematical logic, which made a powerful case that every time there is an interaction anywhere in the universe above a certain size, one of the systems splits in order to accommodate all of the elements and the superpositions that are contained in the wave function that describes the observed system. In other words, the basis for having multiple universes emerges from his solution of the measurement problem.
What about Schrödinger's "jellyfish" problem?
Good question! How did Everett get around Schrödinger's fear that we would all become jellyfish, like some kind of blobs of ourselves, walking down the street as if in a photographic superimposition? How did he get away from that? Well, Everett showed mathematically that there would be no contact between the copies, that each copy when it correlated to an element of a superposition in the object it was observing would in effect then go off on a track that was a completely separate universe from the other copies that were doing the same thing, correlating with other elements of the superposition. They would all be going off within their separate universes.
So this, in a rather big nutshell, is the Many Worlds Interpretation.
Yes, that's the Many Worlds theory. (Everett, by the way, didn't call it the Many Worlds theory. He called it the "'Relative State' Formulation of Quantum Mechanics.")
Multiple theories
I'll have to be more careful. Not everyone buys his theory, though, right? The Many Worlds Interpretation has detractors.
Well, people do not necessarily agree with the mathematical conclusions that he came to. Some people I've been talking to recently, very high-powered quantum physicists and mathematicians in Europe and the United States, think that he made some elementary mistakes concerning probability in his theory. But they don't think that those mistakes are fatal to it, and so you have a situation today in which there are several competing interpretations of quantum mechanics. Everett's is one of them; it's on the front burner. There are some others, the interpretation of David Bohm, for instance, which basically is a much more classically oriented interpretation but which also makes a lot of sense when you think about it.
Do any scholars in the field today just dismiss Everett's theory outright?
Yeah, there are lots of people who just dismiss it outright. I saw a book a couple years ago by the famous science writer Martin Gardner. The title of it was Are Universes Thicker Than Blackberries?, and it was like a diatribe against any type of multiple-universe theory, of which there are many. Everett's is not the only one. There are at least four different types of multiple universes that are relevant and considered seriously in physics.
People spotted weaknesses in Everett's argument early on, however. Bryce DeWitt certainly spotted it when he was looking at it carefully in the early 1970s. Here was the problem: If all possible physical events occur, then what happens to probability? We're making a measurement that says that there's a 30 percent chance that this electron is at position X, but if we believe in a many-worlds theory, we believe that actually there's a universe in which it's at every single possible position it could ever be at. So how do we assign a probability value to it? It's a very deep question that philosophers in particular have been struggling with a lot in the last 20 years especially.
Did Everett acknowledge this weakness?
Everett claimed to have made an argument that proved that the standard probability measure came naturally out of his theory. Most people who seriously parsed his work think he made a mistake and that it doesn't emerge that naturally. People attempting to improve his theory have been trying to develop methods of showing that you can have an explanation for why we think there's probability in a universe in which everything that is possible happens. In the little branch that you think you're in, whoever "you" are, if you do a quantum measurement and you get 30 percent probability but, in fact, you live in an Everett universe, then why did you get a 30 percent probability?
Well, it's tied to the question of, "If everything exists in superpositions, why do we not see all objects existing in superpositions?" We live in a classical world. It has an arrow of time that goes in a certain direction. It has entropy, which is related to probability and information. And in order to make sense of the Everett theory, you really do need to explain why we think probability exists, at least in "our" branch.
Is this what has drawn philosophers in?
Yes. There are a lot of arguments that have been made along those lines, especially by philosophers at Oxford University. Simon Saunders, David Wallace (working with David Deutsch), and many others have kind of an Everett school over there. Last year they sponsored a conference called "Everett at 50." It was celebrating the 50th anniversary of his thesis publication, and physicists and philosophers of great renown from all over the world went there. I was fortunate enough to go, and they even asked me to talk about Everett there, which was fun.
For days they debated this question of probability in Everett, and a related weakness, which is that if you have these branches splitting off constantly every nanosecond all over the universe, going in all different directions, how does one universe, one branch, link itself to all these different states so that a coherent, single branch in which my history, say, Peter Byrne's history and life, which I remember as being singular, I remember not being born as a thousand people but as one person, how does that emerge? These are difficult questions that pertain to philosophical topics that have been discussed for centuries.
And what about the Copenhagen Interpretation. Is that still viable?
A lot of people still hold with the Copenhagen Interpretation. If there's a wave function, there's a collapse. They don't want to think about it much further than that. Some people would say that there's no measurement problem whatsoever; they just legislate it out of existence. And there are various classical theories based on random processes that claim to explain how our universe emerges from this multiplicity of possibilities. But Everett's theory is the simplest of the lot, and it does not modify the Schrödinger Equation, which is the basic law governing quantum mechanics.
So while Everett's is not the only interpretation out there, it was the one featured on the cover of Nature in July 2007, celebrating the 50th year since the publishing of Everett's dissertation, because it has had this huge impact on modern physics. [See Everett's full dissertation, published online for the first time.] Even though nobody has ever been able to prove or disprove the existence of multiple universes, it has been useful in a number of areas of physics.
Putting to use
What areas?
Well, it's been very useful in cosmology, for instance, because the universal wave function gives you a method of calculating quantum-mechanical structures in, say, the beginning of time, at the moment of the Big Bang, without having to stand outside of the universe to do these calculations.
This was another huge flaw in the Copenhagen Interpretation and its collapse-of-the-wave-function postulate, because by saying that while we can't explain what happens to the superposition, and we know that in our classical world we have only one measurement result, we have to postulate that in order to get that one measurement result, we stand outside of the quantum object. So what Bohr and the rest of them were saying is that, when you're making a measurement, you the physicist are a classical object, the measurement is a quantum system, and when you make a measurement, the classical world trumps the quantum world. What it predicates is that whenever you make a measurement of the quantum system, you have to be external to it. You have to be outside of it.
What Everett did was he showed how you could make measurements and be inside of the quantum system; that the observer, the scientist, could consider herself to be a quantum-mechanical object, correlating with another quantum-mechanical object, that could calculate probabilities—which is what quantum mechanics does; the only thing it does is calculate probabilities—without having to stand outside of the system created by observer and object observed.
Now, in cosmology this is really important, because if you want to understand the early state of the universe, the inflation state where quantum mechanics is very orderly, you can't get external to it. You've got to be able to calculate from inside the wave function that describes the entire universe. So one of the greatest uses of Everett's theory today is in cosmology, not just in a technical sense, but in an interpretive sense, because if you're a cosmologist who wants to understand the universe from an understanding also that, as a person, you're inside the universe—because how could you be outside of it?—then this gives you a perspective from which to view that whole universe that you're trying to comprehend.
Whew, my head is spinning. What other applications?
Another application of it is in quantum computation. Quantum computation relies on what are called qubits, quantum bits, a few very elementary examples of which have been created in the laboratory. Although quantum computation itself is probably a ways off, it has been experimentally demonstrated that you can keep an information-processing device, in the form of an electron for example, in superposition, and in those superpositions you can process information and, through a complicated process, pop it out at the other end.
People who are building quantum computers don't necessarily have to believe that there are multiple universes. But they are faced with working with these quantum qubits that exist in what can easily be described as multiple universes. If they're not that, then nobody has any other way of describing how they're situated. And if you're David Deutsch, who's been one of the founders of the science of quantum computation, you will look at this situation and you'll say that it's proof that Everett's theory is correct. In fact, David Deutsch has said that quantum mechanics itself is proof that there are multiple universes, although he thinks of them in a more sophisticated way than Everett did.
Anything else? How else has his theory proved useful?
Well, I have to use a third technical term—don't be scared—called decoherence. It's called decoherence because that's the opposite of coherence, and coherence is a useful term in quantum mechanics because it can be applied to that electron that's existing in a quadzillion different positions, in a superposition of multiple states. We call that a coherent state. It evolves through time. It keeps all these positions that it could be in evolving as if they were separately evolving without having to collapse or anything like that, and it is not until it interacts with objects in the quantum environment, a human being for instance, that it collapses or decoheres.
What that means is that, using Everett's analysis as a starting point, physicists like Zurek and Dieter Zeh at the University of Heidelberg, and James Hartle and Murray Gell-Mann and others, have developed a theory of quantum mechanics basically called "decoherence theory." It's not interpretation, but rather a technique that, while not exactly solving the measurement problem, does explain how the classical world can emerge from the quantum universe. Some decoherence theorists think there are multiple universes; some think that there's only one. All of them will tell you that they were inspired to go along the path of developing this theory because of having Everett's universal wave function as a useful tool.
A difficult birth
To step back a moment, initially Everett's theory wasn't well accepted when it was first published in 1957, is that right?
No, it wasn't. First of all, when his dissertation was printed in 1957, it was highly edited from his original version. All the colorful language was taken out. But physicists looked at it and a lot of them thought, "This is crazy." [Physicist Richard] Feynman went on record as saying, in essence, "Well, this is not possible because there can't be multiple universes."
However, people didn't attack his theory publicly, because it's very hard to attack Everett's logic. They did attack it privately. For instance, in 1956, before it was published, Wheeler and Everett sent a copy of the dissertation to Bohr in Copenhagen to see if he would agree that it was true. It wasn't likely that he would, because if he did agree he'd have to admit that he'd been wrong for decades about everything else.
As it happened, Bohr was pretty polite. He didn't attack it himself, but he assigned his acolytes to attack Everett, and actually for decades they took every opportunity they could to say that Everett was stupid, that his theory didn't work, that Everett didn't understand quantum mechanics, stuff like that.
Without backing it up with any solid mathematics.
No. All that they would do to back it up is to say that Everett couldn't be right because the wave function obviously collapses—but then nobody could ever prove that the wave function collapses. Nobody has ever seen a wave function collapse. On the contrary, people actually have seen almost-macroscopic systems called mesoscopic systems exist in quantum superposition. Buckyballs, for example, have been observed in superpositions. So what we have is experimental proof that large systems can exist in superpositions, just like microscopic quantum systems, and no proof whatsoever that the wave function collapses.
It seems remarkable that such brilliant people would create the wave-function collapse out of whole cloth just because they couldn't figure out anything else.
Actually, it makes perfect sense, and whole books and doctoral dissertations have been written on this, because, yeah, to the ordinary person, that would seem like a cop-out, but put yourself in their position. First of all, these European physicists had just broken with three centuries of classical physics that had been developed since the days of Isaac Newton, who said that the world is deterministic, that if you know the initial positions or conditions of any system at one point you can calculate where it's going to be after a certain passage of time.
When quantum mechanics came along, you had this problem where you could only calculate with probabilities, where you could not say that a quantum system existed in a certain position before you look at it, because you can't. All you can say is that it exists in a distribution of possible positions, and so you have the problem that the quantum-mechanical world is indeterministic. Our classical world is largely deterministic, and collapsing quantum indeterminism into classical images is the only way we can describe the quantum world, Bohr said, because we must use what he called "ordinary language."
Nonetheless, Bohr said, we have to be honest and admit that indeterminism is a basic force in the universe. We just cannot talk about it, you see, because it's inexplicable. So we have to postulate that the world we see is the only real world. It was John von Neumann who invented the mathematics of wave function collapse in the early 1930s, and Bohr went along with it.
But not everybody agreed with that.
Einstein, Schrödinger, and others wouldn't agree with that. They were attracted to a deterministic worldview that included probability. In Everett's theory, of course, everything that is physically possible happens. You have no room for probability, because everything happens. And yet the science of quantum mechanics is based on calculating probabilities.
But when the founders of quantum mechanics, including Niels Bohr, were looking at their new, beautiful theory back in the '20s, they realized that getting one world out of many was a problem, but they couldn't explain it any other way except to say what was in front of their eyes, which is we see one world. We know that the quantum-mechanical world seems to exist in many, many possible universes, although they didn't put it exactly that way. But we have to be true to our own experience and say that somehow it gets from many events to the single event. And when they invented their postulate—which was not so much a mathematical construct as it was an interpretive, philosophical construct—it allowed them to use quantum mechanics.
In fact, there's nothing wrong with the postulate in terms of hurting the ability of the physicists to do their work. It enables them to do their work. But, as Everett said when he was being attacked privately before his thesis was printed, he said the Copenhagen Interpretation is a "monstrosity," with one reality for the quantum world and another for the classical. Many, many people agreed with him over time, and many people actually agreed with him before he said it but were afraid to say it because Bohr was a very powerful figure in the history of science.
A quantum-mechanical world
When you say it doesn't hurt their ability to do their work, you mean that you don't have to understand exactly how quantum mechanics works for it to be useful?
That's right. Quantum mechanics is the most successful physical theory in the history of humankind. Something like 70 percent of all the industrial capacity in the world is based in some way on quantum mechanics. It's used in your cell phone, your computer, GPS devices, lasers, anything that's digital and electronic, even in biology. DNA, for instance, can be dealt with on a quantum-mechanical level. And the science, the mathematics of quantum mechanics, is phenomenally successful in predicting what will happen if you shoot electrons at certain targets or try to make cell phone signals error-free.
To give you an example, in your television you've got a cathode-ray tube that shoots electrons at a screen. According to quantum mechanics, there is a chance that one out of every 137 electrons that you shoot out of that tube will go where you want it to go—which means that 136 of them are just going to be lost. Quantum mechanics shows you how to set up a device so that you can get a coherent picture using only one out of every 137 electrons streaming out of a cathode-ray tube. Billions and billions of them are just streaming out every second, so that's plenty.
It's all based on probability.
Right. Basically, if you know the probability that a particle is going to behave in a certain way, even if it's only one out of every 137 times or one out of every 10 million times, you can set up devices that will capitalize on that. And we do.
However, we cannot tell you why that happens. Not only do we not really know what probability is, we do not understand the fundamental motions in the microscopic quantum-mechanical universe. We do not understand how our world emerges from this universe.
So this, again, is the measurement problem that was bugging Hugh Everett.
This was bugging Everett, and one of the reasons that he wrote his thesis was to solve it. There were other reasons, but that was one of the primary reasons, because it intrigued him; it was a paradox. This guy loved solving paradoxes and puzzles, and this was like the ruling paradox of science, and yet nobody wanted to talk about it. In fact, Bohr's Copenhagen Interpretation basically told people Thou shall not talk about it, because you're not going to be able to solve it, and if you start thinking about it, you won't get any work done. Murray Gell-Mann said something like, "Generations of physicists were raised not to ask questions about the foundations of quantum mechanics," and it's true.
I've talked to any number of experimental physicists, and these guys are not philosophically oriented. They're very interested in getting results by manipulating elementary particles in certain ways—say, to make a quantum computer—but to do this, they almost have to take a dualistic attitude towards what's going on in the elementary particle world. Don Eigler at IBM told me recently, "When I look at an electron from a distance, it's a particle. When I look it up really close, it's a wave." Electrons and all elementary particles behave dualistically—as waves sometimes and as particles sometimes—depending on the environment that they're in, and your point of view.
These are questions that experimental theorists have been essentially taught in school do not concern them. They're issues of philosophy, they're told, and there's a certain wisdom in that, because if people were puzzling over unsolvable problems, nobody would want to go into that line of work. Look at [Nobel Prize-winning mathematician] John Forbes Nash. In her biography of him, A Beautiful Mind, Sylvia Nasar says that what tipped him into schizophrenia was trying to solve the measurement problem.
Goodbye to all that
Why did Everett utterly abandon quantum mechanics and go into weapons development? My understanding from the NOVA film is that he was rebuffed by Niels Bohr and others right from the start, and it was so depressing to him that he gave up.
Well, that's true, but there were other factors involved. The long and short of it is that Everett came from a military family. His father was a colonel with tremendous logistical talents, and Hugh went to military school in high school. And his whole generation, many of them just back from World War II, were sent to college by the newly formed National Science Foundation, which was dedicating itself to educating thousands of people to be able to work in the military-industrial complex, especially in research and development of weaponry. So Hugh was sent to Princeton via the National Science Foundation.
His mentor there was John Wheeler, who was one of the inventors of the hydrogen bomb—he'd invented it the year before he met Everett—and he was a huge shaper and player in the military-industrial complex. Princeton was a center of military research, and Everett, it turns out, had a bent for doing military work. He was never that excited about working in academia, because military work, especially if you started doing it in the private sector, which he did after working at the Pentagon for a few years, paid a lot better than academic work.
Everett also didn't trust academia. Here he had come up with this remarkable idea that, 50 years later, is one of the most powerful ideas in physics, acknowledged by physicists everywhere in publications. And in his day either people attacked it viciously because they had their own kind of dogmatic pursuits to protect, or they didn't want to talk about it. I think the idea of working in academia kind of repulsed him after that.
So when did physicists start paying attention to the theory, and why? How old was Everett at that point, and was he able, before his death, to bask in the light from these guys?
John Wheeler had made Everett cut three quarters of his thesis. Wheeler had had this dream that Bohr was somehow going to approve it, so he made Everett remove his direct attacks on the Copenhagen Interpretation as well as his provocative metaphors about splitting observers and bifurcating cannonballs (and, for some reason, a whole chapter on information and probability theory).
So a lot of the explanation of things that people considered to be weaknesses in Everett's theory were cut out of the version that people read. In 1973, DeWitt published the long version, along with the short version and some other papers, including one by himself, in a book called The Many Worlds Interpretation of Quantum Mechanics. He used the phrase "many worlds," because he thought it would be provocative and catchy, and it was, and Everett was pleased.
So did Everett get involved again in quantum mechanics at that point?
Everett didn't get involved in any debates about it, but he followed what was going on from afar. He corresponded with a few people, saying he still believed his theory was true. As the 1970s rolled on and more and more interest was taken in it, DeWitt and Wheeler, who were both at the University of Texas in Austin by then, invited Everett to come down and give a seminar on his theory. (David Deutsch was a young grad student there at the time.) So Everett packed his entire family into his car and drove from Virginia down to Austin and smoked like three packs of cigarettes during the seminar and was just really, really pleased to be there.
Even after he gave up on quantum mechanics, Everett did a lot of useful work, right?
Oh yeah. Mostly he spent his time writing groundbreaking computer programs. He invented databases that gave the theoretical foundation for relational software like Oracle and PeopleSoft, stuff like that. Other people later took those ideas and found a way to make millions—but that's another story. He invented an algorithm called the Everett Algorithm that is still in use today. It's a method of maximizing use of resources. You can also use it to design anti-ballistic missile systems.
Everett also wrote one of the classic military game theory papers of all time in his first year as a grad student at Princeton. In fact, it's such a remarkable paper that when one of the founders of game theory, Harold Kuhn, put out a book 10 years ago on the greatest of all game theory papers, he included Everett's paper. Everett's game theory work, his work in logic with the algorithm that he invented, his work in quantum mechanics, his work in developing software—all these things are still impacting science and computation today.
Computers became his obsession.
Everett loved computers. I think that's one of the reasons he went to the Pentagon—they had the best computers. And he was put in charge of computers at the Pentagon when he was 27 years old. He was doing research and making policy recommendations that seriously shaped United States nuclear war strategy. For instance, he worked for the Weapons System Evaluation Group for eight years, from 1956 to 1964, and one of the things he was in charge of was writing a famous memo called WSEG #50. This memo advised President Kennedy as he was coming in on issues of strategic nuclear warfare, and basically it was the foundation of the next 15 or 20 years of nuclear weapons development and strategy. Everett was involved in designing software that would play war games, that would simulate nuclear wars and political crises, and he was deeply involved when the Cuban Missile Crisis, for instance, came up.
People turned to technocrats like Hugh Everett to design programs that would give them options. So interestingly enough, in his work as a military operations researcher, Everett's specialty was looking at alternatives in different situations. Given that his quantum-mechanical theory said that everything that is physically possible happens, and he believed in it, he also had to live with the fact that there were millions and billions of universes in which the nuclear wars that he was designing took place. No wonder he drank.
Byrne's take
You seem to have a pretty good grasp of what many of us would consider rather difficult material. Do you have a physics background?
No. In fact, I flunked out of algebra in high school. I heard about Hugh Everett from a friend of mine who is a physicist. I ended up writing a profile for Scientific American and getting a book contract with Oxford University Press to do the biography. [Editor's note: The book, The Many Worlds of Hugh Everett III: Multiple Universes, Mutually Assured Destruction, and the Meltdown of a Nuclear Family, appeared in 2010.] One of the great things that happened was that I hooked up with Everett's son Mark, and we uncovered Hugh's papers in Mark's basement, which we're still going through. It's a treasure trove.
I would imagine. What have you found down there? It must be a biographer's holy grail.
Oh, it is. It couldn't be better. I would have liked to have found evidence that Everett was doing more quantum mechanics after he left Princeton, but I think the evidence is to the contrary. He doesn't seem to have ever done any more quantum mechanics, which is a loss to physics.
Clearly you've learned a lot about quantum mechanics yourself during your research.
Yes. As far as I'm concerned, quantum mechanics should be for the masses. Basic elements should be taught in elementary school. I find it appalling that basically it's been co-opted by the military-industrial complex and the university complex, as it were, for 50 years or more. It has its own inaccessible jargon, and the experts, wonderful as they are, meet and discuss things that only they can understand.
Why do children need to know about quantum mechanics?
Well, they might want to participate in building the next round of computers, for one thing.
[laughs] My question wasn't entirely flip. I meant: Why are you for people learning about quantum mechanics so early in life?
Well, whenever it's age appropriate. I'm not an educator, and it's true my six-year-old son is in kindergarten, and I'm not teaching him any quantum mechanics or telling him about multiple universes, because I don't want him to be confused.
[laughs] Right.
What I'm saying is that there comes a time in education when quantum mechanics should be taught to kids before they graduate from high school, in my opinion, and it generally isn't. It's not made accessible in the curriculum to students in a broad way. I'm not mathematical, but I read these equations and I can follow them to a certain degree. I can follow the arguments. And I'm engaged in e-mail conferences with physicists and philosophers all over the world about this issue. They've been very helpful to me. They know I don't understand the math, but they're determined that I will understand the concepts. Between that and reading 150 books and scores of papers, I've maybe learned too much jargon, though.
You've generously kept it out of our discussion. One final question: What's your personal take on Everett's theory? Do you believe it?
Before I started looking into this, I would have thought it was crazy. Now I wouldn't be surprised if it's true, and if I had to bet on it, I'd probably do a 50-50. I mean, I'm not in a position to read quantum mechanics in its formalistic presentation in such a way that it could convince me. Some physicists and mathematicians who do quantum mechanics are convinced, and others aren't.
I think the arguments against Everett hold some water, but they're inconclusive as well, and I see that Everett's theory has had a material and positive effect on the development of science. It would be kind of crazy to say that the universal wave function is true but the rest of it isn't, because you really can't have one without the other. So I just have to say I wouldn't be surprised to find that Everett's theory is true, and I'm not going to say that it's not.
This feature originally appeared on the site for the NOVA program Parallel Worlds, Parallel Lives.
Interview of Peter Byrne conducted on August 29, 2008 and edited by Peter Tyson, editor in chief of NOVA Online |
2662b2904b932893 |
I was searching for the eigensolutions of the two-dimensional Schrödinger equation
$$\mathrm{i}\hbar \partial_t \mid \psi \rangle = \frac{\mathbf{p}^2}{2m_e}\mid \psi \rangle + V \mid \psi \rangle$$
where the potential is given by $$V(\rho, \varphi)=\begin{cases} V_1 & \rho < R \\ -V_2 & \rho \geq R \end{cases}$$
using a space representation and cylindrical coordinates, $V_i \geq 0$.
I would be happy if someone could point me to a reference or even give the solution here.
Thank you in advance
Request to close the question
As I can see in the comments, questions of this kind seem to be inappropriate.
The eigensolutions are given by something like $$\psi_m(\mathbf{r},t)=e^{\mathrm{i}(m\varphi-\omega_m t)}\begin{cases} a_m J_m (k_{m,1} \rho) & \rho < R \\ b_m K_m (k_{m,2} \rho) & \rho \geq R \end{cases}$$ where the $a_m$ and $b_m$ can be calculated from the continuity of $\psi$ and its radial derivative at $\rho = R$. Furthermore, $k_{m,1/2} = \frac1\hbar \sqrt{\pm\,2m_e(\hbar\omega_m - V_{1/2})}$.
I am sorry for any inconvenience.
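Below is a minimal numerical sketch of the matching procedure described above, for the closely related textbook case of an attractive 2D circular well ($-V_0$ inside, $0$ outside), in units $\hbar = m_e = 1$. The values of `V0`, `R`, and the angular index `m` are illustrative assumptions, not taken from the question, and the sign conventions differ from the poster's; SciPy's Bessel routines are assumed to be available.

```python
# Hypothetical sketch: bound states of an attractive 2D circular well
# V = -V0 for rho < R, 0 for rho >= R, in units hbar = m_e = 1.
# Bound-state energies (-V0 < E < 0) are roots of the 2x2 matching
# determinant built from continuity of psi and psi' at rho = R.
import numpy as np
from scipy import special, optimize

V0, R, m = 10.0, 1.0, 0          # well depth, radius, angular index (illustrative)

def matching_determinant(E):
    k = np.sqrt(2.0 * (E + V0))          # interior:  a_m J_m(k rho)
    kappa = np.sqrt(-2.0 * E)            # exterior:  b_m K_m(kappa rho)
    return (k * special.jvp(m, k * R) * special.kv(m, kappa * R)
            - kappa * special.kvp(m, kappa * R) * special.jv(m, k * R))

# Bracket sign changes of the determinant, then refine each root with Brent's method.
E_grid = np.linspace(-V0 + 1e-6, -1e-6, 2000)
D = np.array([matching_determinant(E) for E in E_grid])
roots = [optimize.brentq(matching_determinant, E_grid[i], E_grid[i + 1])
         for i in range(len(E_grid) - 1) if D[i] * D[i + 1] < 0]
print("bound-state energies for m =", m, ":", roots)
```

Using the determinant rather than a log-derivative mismatch avoids spurious sign changes at zeros of $J_m(kR)$.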
closed as off topic by David Z Jan 19 '11 at 0:18
Write $\mathbf{p}^2 = -\hbar^2\nabla^2$ in cylindrical co-ordinates. Assume $|\Psi\rangle$ can be written as a product $ \Phi(\rho)\chi(\phi)Z(z)$ and the equation splits into three. Standard separation of variables. – user346 Jan 11 '11 at 15:06
What do you mean by "solution". Do you actually need stationary states? – Kostya Jan 11 '11 at 15:07
@space_cadet: thank you for the insight. I think I am capable of solving the system myself and it is clear that solutions will have the form $J_n(k\rho)e^{\mathrm{i} (n \varphi - \omega_n t)}$ e.g. for $\rho < R$. But I think it is safe to assume the solution is already known, so a nice reference is what I am looking for :) – Robert Filter Jan 11 '11 at 15:11
At this point it seems like just a math question, really - you're basically just looking for the solution of a known differential equation. All the physics is done. – David Z Jan 11 '11 at 15:46
This is a standard homework problem for an undergrad QM course. Do we really want questions like this and answers to them on MO? – pho Jan 11 '11 at 16:37
|
a8b84bb18877b841 | 1 Digital Solution of the Mind-Body Problem Ralph Abraham, Sisir Roy <abraham@vismath.org> (Department of Mathematics, Santa Cruz, U.S.A.)
Using the concepts of the mathematical theory of self-organizing systems in understanding the emergence of space-time at the Planck scale, we propose a digital solution of the mind-body problem. This will shed new light on the interconnection of consciousness and the physical world. PL
2 The role of quantum cooperativity in neural signaling Gustav Bernroider, Johann Summhammer <gustav.bernroider@sbg.ac.at> (Neurobiology, University of Salzburg, Salzburg, Salzburg, Austria)
According to the neural doctrine (1), propagating membrane potentials establish the basis for coding and communication in the nervous system. The physical representation of information is assumed to be contained in the spatio-temporal characteristic of propagating membrane potentials as originally described by Hodgkin and Huxley (HH, 2). Despite an uncountable number of correlation studies employing HH-type signals (action potentials, APs) and brain function, the underlying equations of motion contain coupled dynamics of channel proteins and membrane voltage that still lack a consistent theoretical background. Generally, there is no fine-grained level of precision in the correlation of action potentials with higher-level brain functions, and there are several inconsistencies between experimental observations and HH-type predictions. Action potentials are composed from the concerted flow of ions through aqueous membrane pores provided by a family of voltage-sensitive membrane proteins. In a circular type of argumentation, selective permeability determines membrane voltage and membrane voltage determines permeability. There is no ‘window’ in the chain of events that could account for two indispensable features that are observed in ‘real’ neuronal ensembles and considered to be decisive in the exploration of cognitive processes: (i) large ongoing variability to repeated sensory representations as observed in the visual cortex more than ten years ago (3) and (ii) signal onset-rapidness in cortical neurons as shown previously (4). Neither phenomenon can be explained by classical HH-type models. Further, in view of recent advances in atomic-level reconstructions and molecular dynamics (MD) simulations, the originally proposed independence of within-channel states (the ‘gating particles’ in the HH model) and independent gating states between channels seems to be untenable. In the present work we introduce quantum mechanical (QM) correlations (entanglement) into the dynamics of single channels and into the temporal evolution of multiple channel states. This is justified by at least two good reasons: (i) the gating transitions within channel proteins are established at the atomic scale, involving QM action orders at least over a certain number of vibrational periods of the engaged atoms, and (ii) the states of the channel are not mutually independent as assumed in the classical model. Dropping the assumption of independent gating transitions, we introduce a model where sub-domains of the protein responsible for selectivity and permeation are in a short-lived entangled state. The entanglement of gating domains implies that their probabilistic switching behaviour will be governed by some coordination, while each gating domain itself still appears fully random. The underlying model parameters can be tuned from independence, attaining the classical HH behaviour, to a two, three or more particle quantum mechanical entangled version. Our results show that, even with a very moderate assumption on the strength of entanglement that could resist the breaking power of the thermal bath to which the protein is exposed, the signal onset can be several times faster than predicted by the HH model and is in accord with the observed in-vivo response of cortical neurons (4). This is a particularly important result in view of the persistent debate about the survival time of coherent states in the brain.
Further, we show that quantum correlations of channel states allow for ongoing signal variations that are observed in evoked cortical responses. (1) Barlow, H. (1972) Perception, 1, 371-394. (2) Hodgkin, A.L. and Huxley, A.F. (1952) J Physiol (London), 117, 500-544. (3) Arieli, A., Sterkin, A., Grinvald, A., Aertsen, A. (1996) Science, 273, 1868-1871. (4) Naundorf, B., Wolf, F., Volgushev, M. (2006) Nature, 440, 1060-1063. PL
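For reference, here is a minimal sketch of the classical Hodgkin-Huxley point-neuron model that the abstract above treats as its classical baseline. The parameters are the standard textbook squid-axon values, and the injected step current is an illustrative assumption; nothing here reproduces the authors' quantum-correlated extension.

```python
# Minimal sketch of the classical Hodgkin-Huxley (HH) point neuron.
# Textbook squid-axon parameters; the step current is illustrative.
import numpy as np
from scipy.integrate import solve_ivp

C = 1.0                                   # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3            # maximal conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387        # reversal potentials, mV

# Voltage-dependent opening/closing rates of the m, h, n gating variables.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def I_ext(t):                              # 10 uA/cm^2 step between 5 and 40 ms
    return 10.0 if 5.0 <= t <= 40.0 else 0.0

def hh(t, y):
    V, m, h, n = y
    dV = (I_ext(t) - gNa * m**3 * h * (V - ENa)
                   - gK * n**4 * (V - EK)
                   - gL * (V - EL)) / C
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    return [dV, dm, dh, dn]

y0 = [-65.0, 0.05, 0.6, 0.32]              # approximate resting state
sol = solve_ivp(hh, (0.0, 50.0), y0, max_step=0.01)
print("peak membrane voltage (mV):", sol.y[0].max())   # spikes peak near +40 mV
```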
3 Schrodinger's Cat: Empirical research into the radical subjective solution of the measurement problem. Dick Bierman, Stephen Whitmarsh <d.j.bierman@uva.nl> (PN, University of Amsterdam, Amsterdam, Netherlands)
The most controversial of all solutions of the measurement problem holds that a measurement is not completed until a conscious observation is made. In other words, quantum physics is a science of potentialities and the measurement, i.e. the conscious observation, brings about the reality by reducing the state vector to one of the eigenstates. In a series of experiments modeled after the famous experiment by the Shimony group we have explored the brain responses of observers of a quantum event. In about 50% of the exposures this quantum event had already been observed about one second earlier by another person. This random manipulation was unknown to the final observer. The first experiment along these lines gave suggestive evidence for a difference in brain responses dependent on the manipulation. In subsequent experiments quantum events were mixed with classical events, and the results of these experiments, which have been reported elsewhere, were ambiguous. In a final experiment we are trying to resolve the paradoxical results obtained so far. In this experiment the final observer receives detailed information about the type of event that (s)he observes. Also, the experimental protocol is such that pre-observed events cannot be distinguished from non-pre-observed events either on the basis of their physical characteristics or on the basis of inter-event time distributions. Results will be presented at the conference. PL
4 EEG Gamma Coherence Changes and Spiritual Experiences During Ayahuasca Frank Echenhofer <fechenhofer@ciis.edu> (Clinical Psychology, California Institute of Integral Studies, Richmond, CA)
Ayahuasca is a psychedelic sacramental brew used possibly for more than a thousand years by many indigenous communities of the Brazilian and Peruvian Amazon and by several syncretic religions that originated in 20th century Brazil and that combine ayahuasca shamanism and Christianity. In the last decade, a growing number of North Americans and Europeans have combined ayahuasca shamanism with other religious cosmologies and practices. Some ayahuasca reports are similar to archetypal spiritual experiences at the core of many religions. Studies have shown that authentic non-drug-induced spiritual experiences cannot be distinguished from psychedelic spiritual experiences. Religious studies have suggested that psychedelics may have inspired the formative revelations of many shamanic cosmologies, some Greek mystery religions, the Hindu Vedas, and several ancient South and Central American religious traditions. Archetypal spiritual experiences, such as experiencing mandalas, journeying to other worlds, and encountering entities, are documented in monotheistic religions, ayahuasca shamanism, and in ayahuasca reports of North Americans and Europeans. Most spiritual traditions agree that waking consciousness can be transformed to reveal a more comprehensive reality. Studying ayahuasca may provide a reliable laboratory approach to use neuroscience and systematic phenomenological methods to reveal the neural correlates of archetypal spiritual experiences. Our findings, using a multi-disciplinary approach integrating the methods of comparative religion, anthropology, and qEEG, will be presented. Recently, psilocybin was reported to facilitate profoundly meaningful experiences in healthy individuals. A psilocybin clinical trial designed to facilitate spiritual experiences in terminal patients has shown initial positive results. Research with a Brazilian ayahuasca religion found that long-term users of ayahuasca had overcome alcohol addiction, and neuropsychological testing revealed no detrimental effects. Previous psychedelic EEG research found theta and alpha power decreased during mescaline, psilocybin, and LSD, while some individuals showed increased modal alpha frequency. It has been theorized that EEG gamma coherence “binds” different modalities of cortical information processing. Because ayahuasca reports emphasize that the sensory, affective, cognitive, and spiritual modalities of experiencing are more integrated, we hypothesized that ayahuasca would enhance gamma coherence. Our research found that after 45 minutes of ingesting ayahuasca, participants reported the most intense consciousness alterations, or “peaking”. Some reported very brilliant and unusual fast morphing visions comprised of dazzling colors, multiple layers, and exquisitely beautiful architectural structures. Some participants reported that music modulated the physiognomic aspects of the experiential display. Others experienced fear, being overwhelmed, and nausea and vomiting, all of which are viewed in shamanism as bodily cleansing and healing. A few reported classical archetypal journey experiences, gaining entry to and exploring other realms of reality and communicating with intelligent entities. In eyes-closed ayahuasca vs. baseline conditions, ayahuasca decreased alpha and theta power, suggesting enhanced activation and information processing, and enhanced gamma coherence, suggesting increased “binding” of sensory, affective, and cognitive processes.
Some participants showed significant coherence changes in other EEG frequencies suggesting the importance of examining individual differences in future research. Our findings suggest ayahuasca may enhance both binding and cognitive complexity exemplified in feelings of interconnectedness and meaningfulness during archetypal spiritual experiences. PL
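As a rough illustration of what "gamma coherence" means operationally, the sketch below estimates Welch magnitude-squared coherence between two synthetic channels sharing a noisy 40 Hz component and averages it over an assumed 30-80 Hz gamma band. The sampling rate, band limits, and signal construction are illustrative assumptions, not details of the study above.

```python
# Minimal sketch of band-limited coherence between two synthetic "channels"
# sharing a noisy 40 Hz component; all numbers are illustrative.
import numpy as np
from scipy import signal

fs, T = 250.0, 60.0                         # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1.0 / fs)
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 40.0 * t)       # common 40 Hz "gamma" oscillation
ch1 = shared + rng.standard_normal(t.size)
ch2 = 0.8 * shared + rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence between the two channels.
f, Cxy = signal.coherence(ch1, ch2, fs=fs, nperseg=512)

gamma = (f >= 30.0) & (f <= 80.0)           # assumed gamma band
print("mean gamma-band coherence:", Cxy[gamma].mean())
```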
5 Why Quantum Mind to begin with? A Proof for the Incompleteness of the Physical Account of Behavior Avshalom Elitzur <Avshalom.Elitzur@weizmann.ac.il> (Univ, Rehovot, Israel)
Should quantum mechanics be applied to the study of consciousness? For this workshop’s participants the answer is obvious, but mainstream science maintains that the burden of proof is on them. Penrose (1995) has put forward an ingenious argument that mathematical invention is non-algorithmic, but this argument failed to convince the mathematical community. This presentation presents a simpler argument of this kind. On the grounds of classical physics alone it is possible to prove that any physical description of behavior is, in principle, incomplete. Every simple analysis of a particular conscious experience, like that of a certain color or tone (a “quale”), reveals an ingredient that is not reducible to physical laws. While this is disturbing enough, worse consequences await any theory that allows these qualia to play any causal role in behavior. Chalmers (1996) has intensively studied the “zombie,” a hypothetical human being that acts only by physical laws without having qualia. He then purported to prove that such a being must manifest all the actions manifested by a conscious human, including the assertion that consciousness is not explained by physical law. This way Chalmers hoped to maintain the closure of the physical world without denying that consciousness is a genuine phenomenon. I present a logical proof that Chalmers’ argument is flatly wrong. Some form of dualism of the worst kind, namely interactive dualism, may be inescapable. I begin by showing that a zombie can never perceive a genuine contradiction between the physical mechanism underlying her perception and her immediate conscious experience. Zombies cannot – but humans do. From this difference it rigorously follows that consciousness, as something distinct by nature from any physical force, interferes with the brain’s operation. The ways out of this conclusion are very few: 1. Dismiss consciousness as illusory, due to some kind of misperception afflicting numerous thinkers and scientists. In this case, “misperception” being a physical phenomenon by the very tenets of physicalism, the burden of proof is now back on mainstream physics: Future neurophysiology must be able to point out the particular failure in the human brain’s operation which is responsible for many people’s belief that consciousness and brain mechanisms are not identical. 2. Concede that energy and/or momentum conservation laws do not always hold. This option ensures mainstream physics’ antagonism. 3. Concede that the second law of thermodynamics does not always hold. This option too is bound to be vehemently opposed by the physical community. Since option (1) is an empirical question, the entire issue is no longer confined to philosophy. The answer is bound to come from scientific research. Returning to quantum mechanics, it is striking that, despite its abandonment of many basic notions of classical physics, it has never seriously considered options (2) and (3). I propose no solution to this problem. My aim is only to show that the riddle of consciousness is much more acute than usually believed, yet it can be resolved scientifically. PL
6 Realistic Superstring Mechanisms for Quantum Neuronal Behavior John Hagelin <hagelinj@aol.com> (Physics, Maharishi International University, Fairfield, IA)
The abundance of "hidden sector" matter in the world today is a nearly inescapable conclusion of realistic superstring theories. Hidden sector matter provides a natural mechanism for macroscopic quantum coherent phenomena in biological systems, where characteristically high temperatures normally preclude such quantum behavior. String theory thus provides a plausible solution to the central challenge in quantum-mind research, namely, "how can the quantum-mechanical mechanisms one would naturally associate with consciousness possibly be supported by the human brain?" Elaboration: Many have speculated that aspects of conscious experience have their physical origin in quantum-mechanical mechanisms. The most challenging associated question has been, "How does the brain--a predominantly macroscopic organ immersed in a high-temperature, high-entropy environment--support quantum-mechanical mechanisms?" Whereas intracellular quantum mechanisms have been proposed, it is likely that a complete quantum-mechanical understanding of consciousness will require quantum correlations that are inter-cellular--i.e., collective correlations among multiple neurons separated by macroscopic distances. Until now, fully viable quantum mechanisms have been elusive. We propose a plausible explanation for stable, large-scale quantum-mechanical coherence based on new physical mechanisms predicted by the superstring. All realistic string models contain "hidden sector" particles and forces, typically including a massless spin-1 "quasi-photon" and at least one light charged scalar meson. Whereas it had been previously assumed that these hidden sector particles interact only gravitationally with normal ("observable sector") fields, it now appears more likely that there is a weak electromagnetic coupling between the two worlds of matter. The hidden sector world is spatially and temporally coincident with ours, but due to its weak coupling, is only dimly observable through dedicated EM detectors currently under development. Also due to its weak coupling, hidden sector matter does not equilibrate thermally with ordinary matter, and thus the hidden sector ambient temperature is calculated to be a few degrees Kelvin--similar to the cosmic neutrino background. This has two important physical ramifications: 1) Hidden sector matter, despite its weak coupling, clings electrostatically to normal matter--especially to carbon-based biological matter. Its concentration in the cellular interior is predicted to be high. 2) Due to its low ambient temperature, hidden sector particles are expected to exhibit macroscopic quantum coherent effects, and provide a viable mechanism for short-circuiting synaptic communication and for sustaining large-scale quantum correlation among distant neurons. In this talk, we present what is currently known about hidden sector matter and its potential relevance to quantum-mechanical biological functioning, and suggest avenues of future empirical and theoretical research. We also present published experimental evidence for long-range "field effects" of consciousness that provide empirical support for the aforementioned quantum effects, and that help to discriminate among competing quantum-mechanical models of consciousness. PL
7 Schrödinger’s proteins: How quantum biology can explain consciousness Stuart Hameroff <hameroff@u.arizona.edu> (Center for Consciousness Studies, University of Arizona, Tucson, Arizona)
Classical approaches to consciousness view brain neurons, axonal spikes/firings and chemical synaptic transmissions as fundamental information bits and switches in feed-forward and feedback networks of “integrate-and-fire” neurons. However this popular view 1) fails to account for unconscious-to-conscious transitions, binding, and the ‘hard problem’ of subjective experience, 2) forces the stark conclusion that consciousness is an epiphenomenal illusion and 3) conflicts with the two best correlates of consciousness: gamma synchrony EEG and anesthesia, both of which indicate that consciousness occurs primarily in dendrites (i.e. during the collective integration, rather than firing, phases of integrate-and-fire). Gamma synchrony EEG requires dendro-dendritic gap junctions (lateral connections in hidden input layers of the feed-forward network) and may require non-local quantum correlations to account for precise brain-wide coherence. Anesthetic gases selectively erase consciousness and gamma synchrony EEG, sparing evoked potentials, sub-gamma EEG, autonomic drives and axonal spike/firing capabilities. The anesthetic gases act solely by quantum London forces in non-polar pockets of electron resonance clouds within a subset of dendritic proteins. In the absence of anesthetic (i.e. in consciousness), quantum superposition, coherence and non-local entanglement in these electron clouds are amplified to govern protein conformation and function. Thus anesthetic-sensitive proteins may act like quantum bits (“qubits”), engaging in quantum computation (“Schrödinger’s proteins”). Scientists since Schrödinger have suggested an intrinsic role for biomolecular quantum effects in life and consciousness. The Penrose-Hameroff Orch OR model proposes consciousness to be a sequence of gamma-synchronized discrete events, corresponding with quantum computations among entangled, superpositioned microtubule subunits in gap junction-connected dendrites (“dendritic webs”). Microtubule quantum computations self-collapse by Penrose objective reduction (OR), a proposed threshold tied to instability in spacetime geometry separations/superpositions. Thus Orch OR connects brain processes to fundamental spacetime geometry in which (according to Penrose) Platonic values are encoded. Classical microtubule states chosen with each Orch OR event can trigger axonal spikes and convey the content of conscious experience. Orch OR appears vulnerable to decoherence in the “warm, wet” brain. However evidence suggests 1) heat can pump (rather than destroy) biomolecular quantum processes, 2) quantum coherence involving proteins occurs biologically in photosynthesis, 3) quantum correlations may govern ion channel cooperativity, 4) psychoactive molecules interact with receptors by quantum correlations, 5) quantum computing occurs at increasingly warm temperatures, 6) microtubules appear to have intrinsic quantum error correction topology, and 7) “quantum protectorates” occur in regions of non-polar electron resonance clouds in proteins, membranes and nucleic acids. Further, atemporal quantum effects can account for the famous “backward time” found in the brain by Libet, and allow real-time control of our conscious actions, rescuing consciousness from epiphenomenal illusion. So what is consciousness? According to Orch OR, consciousness is a sequence of events in fundamental spacetime geometry, “ripples on the edge” between quantum and classical worlds. 
The spacetime events are amplified through quantum processes in non-polar electron resonance regions to causally influence biomolecular functions, perhaps connecting us to quantum-gravity instantiations of Penrose’s Platonic values, Bohm’s “implicate order”, or in some cases mystical, spiritual and/or altered state experiences. www.quantumconsciousness.org PL
8 Do quantum phenomena provide objective evidence for consciousness? Richard Healey <rhealey@email.Arizona.edu> (Philosophy, University of Arizona, Tucson, Arizona)
Kuttner and Rosenblum (2006a,b) argue that a theory-neutral version of the quantum two-slit experiment provides objective evidence for consciousness–indeed the only objective evidence. However, their description of the experiment is not theory neutral. Kuttner and Rosenblum’s argument that a particular experiment provides objective evidence for consciousness fails: their argument rests on dubious assumptions about the physical effects of consciousness for which we lack objective evidence. Reflecting on our current understanding of quantum theory is one nice way to illustrate this objection. Each of a variety of different interpretations of quantum theory rejects at least one key assumption of Kuttner and Rosenblum’s allegedly theory-neutral description. Moreover, these include interpretations within which consciousness plays no role. Perhaps none of those interpretations will prove acceptable. Quantum theory itself may one day be superseded by a superior theory. Neither eventuality would undermine my objection, which does not depend on quantum theory, under any interpretation. I suggest that if there is objective evidence for consciousness it will be manifested in a very different class of phenomena. PL
9 Quantum Mechanical Implications for Mind-Body Issues Menas Kafatos, S. Roy, K. H. Yang, R. Ceballos <mkafatos@crete.gmu.edu> (College of Science, George Mason University, Fairfax, VA)
Many authors have speculated on the importance of quantum theory to brain dynamics and even its relevance to consciousness. In particular, mind-body issues, by their very nature, imply non-classical physics approaches. Quantum mechanics, through the role of the observer, measurement theory and recent laboratory evidence at the ion channel level, may have serious implications for these issues. In the present paper, we explore the relevance of quantum mechanics and some possible ontological as well as laboratory issues. PL
10 Principles of Quantum Buddhism Francois Lepine <info@quantumbuddhism.org> (Quantum Buddhism Association, St-Raymond, Quebec, Canada)
Science and religion have been opposed regarding consciousness since Descartes separated matter and mind: Cartesian dualism. Non-dualist approaches include scientific materialism in which matter produces mind, and idealism in which mind produces matter. On the other hand Buddhists (and neutral monists in western philosophy) believe mind and matter both derive from a deeper-lying common entity. In recent decades it has become evident that quantum physics and quantum gravity can provide a scientifically plausible accommodation of the Buddhist (and neutral monist) approach. In Buddhism the deeper-lying monistic entity is a pure Platonic wisdom of the Supreme Unified Consciousness which can give rise to matter and/or mind. In scientific terms it is the quantum geometry at the tiniest level (Planck scale) of the universe (quantum gravity), or the unified quantum field. Sir Roger Penrose proposed that Platonic forms including mathematical truth, ethical and aesthetic values (which Plato assumed to be abstract) exist as actual configurations of the Planck scale. Cosmic wisdom in Buddhist Supreme Unified Consciousness pervades the universe, involving, informing and interconnecting living and non-living beings. Planck scale quantum information encoding Platonic values – cosmic wisdom - is non-local and holographic, hence repeating everywhere, atemporally (“everywhen”) and at various scales. Buddhist Supreme Unified Consciousness manifests matter and/or mind. Quantum geometry gives rise to either matter or matter and mind, depending on whether quantum state reduction to classical states occurs via decoherence or measurement (in which case matter), or a type of threshold-based self reduction (e.g. Penrose objective reduction) giving matter and conscious mind. In Buddhism, conscious awareness in an individual – self consciousness - is a series of ripples on the universal pond of Supreme Unified Consciousness. In science, self-consciousness is a series of Penrose objective reductions, ripples in quantum geometry on the edge between the quantum world of multiple coexisting possibilities, and the classical world of definite states. In science, conscious ripples, or moments are coherently synchronized with gamma EEG brain waves, 40 or more conscious moments per second. In western philosophy these are Whitehead’s “occasions of experience”. Buddhism meditators report underlying flickering in their perception of reality, momentary collections of mental phenomena. Sarvaastivaadins described 6,480,000 "moments" in 24 hours (75 conscious moments per second), and other Buddhists as 50 per second. Meditating Tibetan Buddhist monks show highly coherent, high amplitude gamma synchrony EEG in the range of 80 per second, twice normal and more highly coherent. Samadhi is a Sanskrit word describing awareness in which sensory inputs, memory and self dissolve, a person’s consciousness becoming totally one with Supreme Unified Consciousness. Samadhi occurs during deep meditation. Scientifically, in altered states quantum brain activities may become more directly connected with the universal quantum geometry and its collective information. The Quantum Buddhism Association was founded in early 2007, and aims at providing a set of tools to develop a scientific-spiritual approach to the world, unburdened by traditional cultural ritualistic and dogmatic weight, where development of the self prevails to become a conscious scientific instrument. PL
11 A new quantum gravitational model for consciousness based in geometric algebra Javier Martin-Torres <fn.f.martin-torres@larc.nasa.gov> (Virtual Planetary Laboratory, AS&M, NASA, Hampton, VA)
A new mathematical model for quantum consciousness based on geometric algebra is presented, together with its results. Two of the basic pillars of the model are the use of: i) gravity as an Orch OR mechanism (Hameroff and Penrose, 1996) and ii) the collective electrodynamics approach developed by Carver Mead (Mead, 2000), in which electromagnetic effects, including quantized energy transfer, derive from the interactions of the wavefunctions of electrons behaving collectively. Among other processes, a new mechanism for acousto-conformational transformation (ACT) by which microtubules (MTs) communicate with each other, and a decoherence upper limit, are proposed. The model presented establishes a theoretical basis for one of the important (and not yet explained) points in Hameroff and Penrose’s work on quantum consciousness: why the global quantum superposition is the default state. An isomorphism between mono-dimensional binary cellular automata and the Clifford algebra Cl(8) and its applications to the modeling of consciousness, together with the main implications of the proposed model, will be discussed. References Hameroff, S. and Penrose, R., Orchestrated Reduction Of Quantum Coherence In Brain Microtubules: A Model For Consciousness?, In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, eds. Hameroff, S.R., Kaszniak, A.W. and Scott, A.C., Cambridge, MA: MIT Press, pp. 507-540 (1996) Mead, C., Collective Electrodynamics: Quantum Foundations of Electromagnetism, The MIT Press; 1st edition (August 28, 2000). PL
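To make the abstract's reference to mono-dimensional binary cellular automata concrete, here is a minimal sketch of such an automaton. The choice of Wolfram's Rule 110, the lattice size, and the NumPy implementation are illustrative assumptions only; they are not part of the proposed Cl(8) isomorphism.

```python
import numpy as np

def step(cells: np.ndarray, rule: int = 110) -> np.ndarray:
    """One synchronous update of a 1-D binary cellular automaton.

    Each cell's next state is looked up from the 8-bit `rule` number
    using its (left, centre, right) neighbourhood, with periodic bounds.
    """
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighbourhood = 4 * left + 2 * cells + right            # value 0..7
    table = np.array([(rule >> i) & 1 for i in range(8)])   # Wolfram rule table
    return table[neighbourhood]

if __name__ == "__main__":
    state = np.zeros(31, dtype=int)
    state[15] = 1                                            # single seed cell
    for _ in range(15):
        print("".join("#" if c else "." for c in state))
        state = step(state)
```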
12 The Neuron: no longer the atom of neural computation James Olds <jolds@gmu.edu> (Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA)
Subsequent to the 1906 shared Nobel Prize of Cajal and Golgi, the neuron doctrine has been accepted as dogma to the nascent field that became neuroscience. The approximate number of 10^10 neurons in the human brain is often used to reference the immense complexity of the central nervous system, and entire sub-fields are based on the notion of the neuron as computational machine, integrating massive inputs across the dendritic tree to reach a “decision” regarding whether or not to fire an action potential. Here we put forward the notion that neuroscience has now moved substantially beyond the neuron doctrine. Neurons themselves contain multiple hierarchical levels of internal computational machinery (e.g. the Trans Golgi Network, spines, glutamate receptors, potassium channels) all of which can be said to contribute to the overall emergence of intelligent behavior and cognition. We propose that the true complexity of the human brain is far greater than has previously been accepted, and conclude that this requires a modification of the current reductionist approaches to neuroscience. Integrative neuroscience combined with approaches that have been successful with regards to other complex adaptive systems may provide a fruitful scientific direction for the field. PL
13 Minding Quanta and Cosmology Karl Pribram <pribramk@gmail.com> (George Mason University, Fairfax , VA)
The revolution in science inaugurated by quantum physics made us aware of the role of observation in the construction of data. Eugene Wigner remarked that in quantum physics we no longer have observables (invariants) but only observations. Tongue in cheek I asked whether that meant that quantum physics is really psychology, expecting a gruff reply to my sassiness. Instead, Wigner beamed a happy smile of understanding and replied “yes, yes, that’s exactly correct.” David Bohm pointed out that, were we to look at the cosmos without the lenses of our telescopes, we would see a hologram. I have extended Bohm’s insight to the lens in the optics of the eye. The receptor processes of the ear and skin work in a similar fashion. Without these lenses and lens-like operations all of our perceptions would be entangled as in a hologram. Furthermore, the retina absorbs quanta of radiation so that quantum physics uses the very perceptions that become formed by it. In turn, the higher order systems send signals to the sensory receptors so that what we perceive is often as much a result of earlier rather than just immediate experience. This influence from “inside-out” becomes especially relevant to our interpretation of how we experience the contents and bounds of cosmology that come to us by way of radiation. PL
14 Quantum jumps and explanatory gaps Paavo Pylkkänen <paavo.pylkkanen@his.se> (Consciousness Studies Programme, University of Skövde, Skövde, Sweden)
One reason why researchers ignore quantum theory in the explanation of consciousness is the mysterious nature of the theory. If we cannot make sense of the paradoxical features of quantum theory (e.g. wave-particle duality, discontinuity of motion, non-locality, collapse of the wave-function), how could we possibly hope that this theory will be of any help when trying to understand another mysterious phenomenon, namely consciousness? We thus first need a coherent interpretation of quantum theory which resolves the various paradoxes and provides us with an intelligible view of quantum phenomena. Equipped with such a view, we can then explore whether the place of mind in nature could be understood in a new, better way. If you like, we first need to close the explanatory gap in quantum theory, before we can use this theory to tackle the better known explanatory gap between matter and consciousness. In this talk I will discuss some philosophical problems of mind and consciousness in the light of Bohm’s interpretation of quantum theory which includes new notions such as implicate order and active information. This interpretation is arguably one of the best candidates for a coherent interpretation of quantum theory, although debate about these issues is ongoing. Of course, the crucial question for any attempt to make use of quantum theoretical ideas in this context is whether there are aspects of mind and consciousness that cannot be adequately explained and understood in terms of “classical” explanatory frameworks – i.e. neural and/or computational frameworks which do not make any significant appeal to quantum theory or to the New Physics more generally. There are, in fact, many aspects of mind/consciousness which pose a mystery to “classical” frameworks, but might be better understood in “quantum” frameworks. There is the problem of mental causation: if mental states are non-physical, how could they possibly affect physical processes without violating the laws of physics? If we assume that mental states are physical it becomes easier to understand their causal effect upon physical processes. But there are serious problems of conceiving of mental states (especially conscious states) as physical states, if “physical” is understood in the spirit of classical physics. There are also paradoxical aspects to the phenomenal structure of conscious experience, for example “time consciousness”, at least when one understands time in the spirit of classical physics. My proposal is that quantum theory, especially under its Bohmian interpretation, changes our key concepts (such as “physical”, “causation”, “time”, “space”, “process”, “movement”, “information”, “order”) in such a way as to open up a new and better way of understanding features such as mental causation and time consciousness. Such changes in our fundamental concepts also make it possible to tackle the hard problem of consciousness in a fresh way. References Bohm, D. & Hiley B.J. (1993) The Undivided Universe. An Ontological Interpretation of Quantum Theory. London: Routledge. Hiley, B.J. & Pylkkänen, P. (2005) “Can Mind Affect Matter via Active Information”, Mind & Matter 3(2): 7-27. Pylkkänen, P. (2007) Mind, Matter and the Implicate Order. Heidelberg: Springer. PL
15 Objective evidence for consciousness and free will in the quantum experiment Bruce Rosenblum, Fred Kuttner <brucero@ucsc.edu> (Physics, University of California, Santa Cruz, Santa Cruz, CA)
In the absence of objective, third-person evidence of conscious experience, i.e. “qualia,” one can logically deny the very existence of consciousness beyond these correlates. Consciousness has, in fact, been claimed to be no more than the behavior of a vast assembly of nerve cells and their associated molecules. However, since the origins of quantum physics in the 1920s, consciousness has been seen by some to intrude into the physical world in a manner other than by its physiological and neural correlates. In this view, objective evidence for a physically efficacious consciousness actually exists. The experimental facts, at least, are undisputed. We will illustrate what can be considered a physical manifestation of consciousness with a theory-neutral description of a quantum mechanical thought experiment that can be realized in practice. We will argue that the only escape from our conclusion must be to deny one's ability to freely (or randomly) choose behavior. Moreover, such denial of "free will" must also involve a strange and unexplained connectivity between physical phenomena. Therefore the conclusion that consciousness itself, though yet unexplained, is physically efficacious is at least as modest a hypothesis as any other. This thesis is developed in our recent book, "Quantum Enigma: Physics Encounters Consciousness," Oxford University Press, 2006. PL
16 Aspects of Cosmic Consciousness in the Non-material and Non-empirical Forms of Physical Reality. Lothar Schäfer <schafer@uark.edu> (Department of Chemistry and Biochemistry, University of Arkansas, Arkansas, AR)
The quantum phenomena have shown that reality appears to us in two domains: one is open and empirical and forms the world of seemingly separated, material things. The other is hidden and non-empirical and consists of interconnected, non-material forms. The former is the realm of actuality; the latter, the realm of potentiality in physical reality. Discovering the realm of forms places contemporary physics into the center of powerful historic traditions of spirituality, in which non-material forms were considered as primary reality and connected with a Cosmic Consciousness out of which everything is emanating. The lecture will describe some of the parallels and explore to what extent the quantum phenomena support the view that the primary reality has aspects of mind. In the quantum structure of empirical systems, the non-material forms exist as empty states, called virtual by quantum chemists. The entire universe can be considered a quantum system. Its occupied states form the visible part of reality; its empty states, the non-empirical part. Everything that is visible is the actualization of some quantum states. Everything that is possible is deposited in virtual states. Thus, the complex order in the biosphere does not emerge out of nothing and is not created by chance, as Darwinians claim, but it emerges by the actualization of virtual states whose logical order already exists in the non-empirical part of reality before it is expressed in the empirical realm. PL
17 Experiments in Retrocausation Daniel Sheehan <dsheehan@sandiego.edu> (Physics, University of San Diego, San Diego, California)
The fundamental laws of physics are time symmetric, equally admitting time-forward and time-reversed solutions. That the former are readily observed while the latter are not presents perhaps the starkest asymmetry in nature: the unidirectionality (one-way arrow) of time. Common notions of causation are tightly bound with this asymmetry, as are also the phenomena of consciousness. While causation has long been taken for granted, retrocausation (the future influencing the past) has not. Over the last few decades, however, this situation has changed as theory has begun to admit more freely this possibility and experiments -- e.g., from orthodox quantum mechanics, physiology, and parapsychology -- have begun to provide quantitative evidence for retrocausal effects [1]. In this talk, seminal experiments purporting retrocausation will be reviewed and an attempt will be made to put them into a general theoretical framework. From this more decisive experiments should emerge. [1] "Frontiers of Time: Retrocausation -- Experiment and Theory," AIP Conference Proceedings, Vol. 863, D.P. Sheehan, editor (American Institute of Physics, Melville, NY, 2006). PL
18 Whiteheadian Quantum Ontology: The emergence of participating conscious observers from an unconscious physical quantum universe. Henry Stapp <hpstapp@lbl.gov> (Theoretical Physics, Lawrence Berkeley National Laboratory, Berkeley, CA)
The inability of classical physical concepts to accommodate consciousness is noted, and is contrasted with the way that orthodox von Neumann-Heisenberg quantum theory beautifully does so. Close parallels between the detailed structure of ontologically construed relativistic quantum field theory and the ontology proposed by Alfred North Whitehead are noted, and the way that Whiteheadian philosophy accounts for the natural emergence of local pockets of participatory consciousness from a physical world initially devoid of consciousness is explained. PL
19 Quantum Ideas and Biological Reality: the Warm Quantum Computer? Marshall Stoneham <ucapams@ucl.ac.uk> (London Centre for Nanotechnology and Physics and Astronomy, University College London , London, United Kingdom)
Quantum ideas take many forms. The recognition that matter is quantised as atoms underpins the chemical industry. The recognition that charge is quantised as electrons lies at the core of microelectronics. But the several phenomena we identify as “quantum” are subtle, encompassing exclusion, tunnelling, limits to measurement, and entanglement. These ideas are less intuitive and less tangible at the macroscopic (human) scale. Yet, when our science approaches the nanoscale, there is no way to avoid quantum phenomena. Moreover, as ideas spread from the purely physical sciences to the biosciences, it appears that nature already exploits quantum behaviour even at ambient temperatures in unexpected ways, e.g., in vision and in olfaction. There are also credible ideas for condensed matter processing of quantum information even at room temperature, and some are based on soft matter. These proposals and some experiments, exploiting entanglement, rightly contradict the widely-held physicist views that quantum information processing is possible only at cryogenic temperatures. Yet it is far less clear that the brain exploits quantum entanglement. Any suggestion that similar entanglement-based mechanisms might operate in the brain still has to meet plenty of challenges, first as to the actual atomic-scale processes exploited, and secondly as to how a quantum computer might handle problems more like a brain than like an enhanced classical computer. PL
20 Why is consciousness soluble in chloroform? Luca Turin <lucaturin@mac.com> (Physics, University College London, London, England, UK)
It is now quite clear that the target of general anaesthetic gases is protein, and there is good evidence that neurotransmitter receptors are involved. Exactly which protein(s) anaesthetic gases act on, and by what mechanism, remains to be determined. I shall describe empirical and computational evidence in support of the idea that general anaesthetics act not allosterically, but by altering protein electron chemical potential. I shall discuss the relevance of this notion to both protein electronics and redox regulatory mechanisms. PL
21 Electrodynamic signaling by the dendritic cytoskeleton: towards an intracellular information processing model. Jack Tuszynski, Avner Priel; Horacio F. Cantiello <jtus@phys.ualberta.ca> (Physics, University of Alberta, Edmonton, Alberta, Canada)
A novel model for information processing in dendrites is proposed based on electrodynamic signaling mediated by the cytoskeleton. Our working hypothesis is that the dendritic cytoskeleton, including both microtubules (MTs) and actin filaments, plays an active role in computations affecting neuronal function. These cytoskeletal elements are affected by, and in turn regulate, a key element of neuronal information processing, namely, dendritic ion channel activity. We present a molecular dynamics description of the C-termini protruding from the surface of a MT that reveals the existence of several conformational states, which lead to collective dynamical properties of the neuronal cytoskeleton. Furthermore, these collective states of the C-termini on MTs have a significant effect on ionic condensation and ion cloud propagation, with physical similarities to those recently found in actin filaments and microtubules. We report recent experimental findings concerning both intrinsic and ionic conductivities of microfilaments and microtubules which strongly support our hypothesis about internal processing capabilities in neurons. Our ultimate objective is to provide an integrated view of these phenomena in a bottom-up scheme, demonstrating that ionic wave interactions and propagation along cytoskeletal structures impact channel functions, and thus neuronal computational capabilities. Acknowledgements: This research was supported by NSERC, MITACS, PIMS, US Department of Defense, Technology Innovations, LLC and Oncovista, LLC. PL
22 Dissipative many-body dynamics of the brain Giuseppe Vitiello, Walter J. Freeman (Department of Molecular and Cell Biology, University of California, Berkeley, CA 94720-3206, USA) <vitiello@sa.infn.it> (Department of Physics “E.R. Caianiello”, University of Salerno, Baronissi, Salerno, Italy)
Imaging of scalp potentials and cortical surface potentials of animals and humans from high-density electrode arrays has demonstrated the dynamical formation of patterns of synchronized oscillations in neocortex in the beta and gamma ranges (12-80 Hz). They re-synchronize in frames at frame rates in the theta and alpha ranges (3-12 Hz) and extend over spatial domains covering much of the hemisphere in rabbits and cats, and over domains of linear size of about 19 cm in human cortex with near zero phase dispersion [1]. The agency of the collective neuronal activity is neither the electric field of the extracellular dendritic current nor the magnetic fields inside the dendritic shafts, which are much too weak, nor is it chemical diffusion, which is much too slow. By resorting to the dissipative quantum model of the brain [2], we describe [3] the field of activity of an immense number of synaptically interactive cortical neurons as the phenomenological manifestation of an underlying dissipative many-body dynamics, such as the one responsible for the formation of ordered patterns and phase transitions in condensed matter physics, as described in quantum field theory. We stress that neurons and other brain cells are by no means considered quantum objects in our analysis. The dissipative model explains two main features of the electroencephalogram data: the textured patterns correlated with categories of conditioned stimuli, i.e. coexistence of physically distinct synchronized patterns, and their remarkably rapid onset into irreversible sequences resembling cinematographic frames. Each spatial pattern is described as consequent upon spontaneous breakdown of symmetry triggered by an external stimulus and is associated with one of the unitarily inequivalent ground states. Their sequencing is associated with the non-unitary time evolution in the dissipative model. The dissipative model also explains the change of scale from the microscopic quantum dynamics to the macroscopic order parameter field, and the classicality of trajectories in the brain state space. The dissipative quantum model enables an orderly description that includes all levels of the microscopic, mesoscopic, and macroscopic organization of the cerebral patterns. By repeated trial-and-error each brain constructs within itself an understanding of its surround, the knowledge of its own world that we describe as its Double [4]. The relations that the self and its surround construct by their interactions constitute the meanings of the flows of information exchanged during the interactions. [1] W. J. Freeman, Origin, structure, and role of background EEG activity. Part 1 & 2, Clin. Neurophysiol. Vol. 115, 2077 & 2089 (2004); Part 3, Vol. 116, 1118 (2005); Part 4, Vol. 117, 572 (2006). [2] G. Vitiello, Dissipation and memory capacity in the quantum brain model, Int. J. Mod. Phys. B 9, 973 (1995). quant-ph/9502006. [3] W. J. Freeman and G. Vitiello, Nonlinear brain dynamics as macroscopic manifestation of underlying many-body dynamics, Phys. of Life Reviews 3, 93 (2006), q-bio.OT/0511037. Brain dynamics, dissipation and spontaneous breakdown of symmetry, q-bio.NC/0701053v1. [4] G. Vitiello, My Double Unveiled. Amsterdam: John Benjamins, 2001. PL
23 Subcellular processing related to memory and consciousness by microtubules and MAP2 Nancy Woolf <nwoolf@ucla.edu> (Psychology, University of California, Los Angeles, CA)
Among the various parts of the neuron, dendrites are arguably the best candidates for being key to higher cognitive function because they alone integrate large numbers of inputs. The neuronal membrane is the initial site of response to inputs from other neurons, but what lies beneath the neuronal membrane controls the level of synaptic response by computing new inputs relative to information stored in memory. Dendrites are enriched with microtubules and microtubule-associated proteins (MAPs); yet we do not fully know the purpose of these proteins. Accumulating evidence suggests that microtubules and MAPs play critical roles in memory and consciousness, as well as in neuronal transport. Microtubule-associated protein-2 (MAP2) is a dendrite-specific cytoskeletal protein that also acts as a signal transduction molecule, mediating internal chemical responses following synaptic release of neurotransmitters glutamate and acetylcholine. MAP2 and microtubules bind together to form a matrix that stores memory: as new memories form, MAP2 and tubulin proteolysis or breakdown occurs followed by a new subcellular architecture, structured as a modified microtubule matrix (Woolf, NJ, Progress in Neurobiology, 55:59-77,1998). Information stored in the microtubule matrix is then accessed upon the release of certain neurotransmitters, such as acetylcholine and glutamate. Acetylcholine controls the level of consciousness mainly through its muscarinic receptor resulting in downstream activation of kinases PKC and CaMKII, both of which phosphorylate MAP2 and participate in memory. Phosphorylation of MAP2 affects its interaction with microtubules, leading to possible alterations in the protein conformation of tubulin subunits and subsequently to the ability of microtubules to transport receptors, cytoskeletal proteins, and mRNA to synapses. Because of their downstream activation by neurotransmitters, microtubules are in a position to compute current synaptic inputs in the context of previous synaptic activity, and then to increase transport of certain learning-related molecules to synapses. No synapse acting in isolation can bring about a mental state of consciousness: it is instead necessary to have co-activation of a large number of synapses for conscious activity to arise. En masse transport of essential synaptic proteins by microtubules is needed to sustain enhanced synaptic activity, and it is possible that quantum level computations play a role in directing coherent transport both locally and non-locally. We have previously proposed that acetylcholine facilitates quantum computations in microtubules by phosphorylating MAP2 (Woolf NJ & Hameroff SR, Trends in Cognitive Science, 5:472-8, 2001). In this presentation, I propose that the pattern of MAP2 binding to the microtubule forms a gel-based contour which represents information stored by the learning mechanism and provides a physical basis for realizing that stored information (Woolf, NJ, Journal of Molecular Neuroscience, 30:219-22, 2006). When MAP2 is phosphorylated, this gel-based contour expands along a given microtubule and affects the propagation of information longitudinally down the microtubule, and tangentially, the contour affects the state of neighboring microtubules. In these two ways, physically activated microtubules transmit a particular pattern related to a barrage of current inputs in the context of information stored in memory resulting in a coherent response spanning multiple synapses. PL
24 The Truth-Observable: A link between logic and the unconscious Paola Zizzi <zizzi@math.unipd.it> (Mathematics, University of Padova, Padova, Italy)
In Quantum Mechanics, an external measurement of the physical state of a closed quantum system is described mathematically in terms of quantum operators, by which one defines physical observables satisfying the completeness relation: summing up the observables yields the identity. The logical meaning of the completeness relation is that the logical truth splits into partial truths, each of them corresponding to an act of measurement from outside. This is due to the physical fact that any external measurement is an irreversible process, which destroys quantum superposition. Then, an external observer can grasp only fragments of an inner, global truth. Only an internal observer would be able to achieve the global truth at once, as a whole, by making an internal measurement [1], as inside the closed quantum system, he can perform only reversible transformations, described by unitary operators U. The uniqueness and unitarity of such measurement operators allow defining a unique quantum observable that is just the identity: the truth-observable [2]. Notice that in quantum computing [3], U is a quantum logic gate. Then, in this case, an internal measurement corresponds to a quantum computational process. In the theory of a quantum-computing mind [4], we believe that there exists a deepest unconscious state that cannot be known directly from outside. We argue that it is the deep unconscious, which can achieve the "truth" as a whole; the conscious mind can grasp only partial "truths". Quantum information is processed by the unconscious and then is made available to our conscious mind as classical information. As a quantum computer is (due to quantum parallelism) much faster than its classical counterpart, the task done by the unconscious is fundamental to prepare our classical reasoning. The unconscious, endowed with global knowledge (the truth-observable), is rich enough to originate creativity. Global knowledge and creativity together is what enables us to use metalanguage, which makes us so different from (classical) computers, imprisoned in their object language. But also, the truth-observable might be placed at the heart of the logical study of the most severe mental diseases (like schizophrenia) which are very hard to be cured psychoanalytically. On the other hand, less deep unconscious states (pre-conscious) are psychoanalytically interpretable from outside. For example, subjective experiences, which cannot be directly communicated (but only interpreted) should be included in the pre-conscious, not in consciousness. In fact, a shared knowledge (in Latin: cum-scio from which derives the English consciousness) is impossible without communication. References [1] P. Zizzi, “Qubits and Quantum Spaces”, International Journal of Quantum Information Vol. 3, No.1 (2005): 287-291. [2] P. Zizzi, “Theoretical setting of inner reversible quantum measurements”, Mod. Phys. Lett. A, Vol. 21, No.36 (2006): 2717-2727. [3] M. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (2000). [4] S. Hameroff, R. Penrose, “Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness”. In: Toward a Science of Consciousness. The First Tucson discussions and Debates. Eds. S. Hameroff, A. kaszniak, and A. Scott. MIT Press, Cambridge, MA (1996). PL
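For reference, the completeness relation alluded to at the start of the abstract is the standard condition on quantum measurement operators (see Nielsen & Chuang [3]), stated in the usual notation as

\[
\sum_m M_m^{\dagger} M_m = I, \qquad p(m) = \langle \psi | M_m^{\dagger} M_m | \psi \rangle ,
\]

so that the outcome probabilities sum to one for any state \(|\psi\rangle\). For the internal, reversible measurement the abstract describes, the single operator is a unitary \(U\) with \(U^{\dagger}U = I\), which is the sense in which the proposed “truth-observable” reduces to the identity.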
25 Moiré wave patterns as the brain's own language Alexey Alyushin <aturo@mail.ru> (Philosophical Faculty, Moscow Lomonosov State University, Moscow, Russia)
My hypothesis is that the brain's own language is the dynamical geometry of bioelectrical wave patterns of moiré origin. The moiré effect is produced by superposing two or more periodic structures, such as rigid or graphical lattices or oscillatory wave sets, setting them in motion relative to each other, and obtaining an emergent (so-called alias) structure out of this superposition in motion. There are a number of regular wave oscillations in the brain, comprising the whole set of wave bands. Brain oscillations correspond to sequences of frames – transient constellations of neurons that are synchronized in firing although spatially dispersed (F. Varela). Given the existence of several oscillatory wave structures and the corresponding flows of frames in the brain, it is natural to suggest that multiple overlays of rhythmical oscillations or frame flows should produce moiré patterns within their entire manifold. The question is what the function of these patterns might be. I suggest that moiré patterns are far from being the distortive noise within a system that they are commonly treated as in TV and photographic imaging; nor are they just empty by-products of some master process within the brain. They themselves are driving gears of the brain's working, the meaning-containing and meaning-processing units. The function of the lower-order brain oscillations is to bring about and to keep active the higher-order moiré patterns. The most important thing about moiré patterns is that they are emergent structures with respect to the oscillatory patterns that underlie them. They are emergent in the sense that their structure is not contained in either of the underlying patterns; they are entities in themselves, although with the change or fading of the underlying oscillatory patterns the emergent pattern also changes or vanishes. I go further and suggest that the emergent moiré pattern might steer the underlying oscillations for the sake of its own self-sustenance. It may well be that at the early stages of brain evolution only the lower-order oscillations were present in primitive brains, providing for basic perceptive data processing. But as the brain developed into a more complex unit and proceeded to generate and serve the higher mental functions, the formerly derivative and rudimentary moiré phenomena unveiled their abilities and acquired the master control. Enduring and self-sustained wave formations of moiré origin in the brain are good candidates for being considered the neural correlates of cognitive and mental structures, including consciousness. If we compare the moiré model with the holographic model of the brain (K. Pribram and others), the former has the advantage of introducing dynamics. The holographic model is mostly static, dealing with the distribution of wave interferences in space, whereas the moiré model stresses the temporal aspect of the interaction of wave structures. As a matter of fact, it also deals with interferences, but in their temporal dynamics. Therefore, the holographic model and the moiré model could productively accompany each other. (Some visual moiré patterns will be generated and demonstrated during the presentation by means of computer simulation.) C
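As a hedged illustration of the emergence the abstract describes, the standard superposition identity shows how two periodic structures of nearby spatial frequencies produce a slowly varying "alias" envelope belonging to neither component alone:

\[
\cos(k_1 x) + \cos(k_2 x) \;=\; 2\,\cos\!\Big(\frac{k_1 - k_2}{2}\,x\Big)\cos\!\Big(\frac{k_1 + k_2}{2}\,x\Big),
\]

so when \(k_1 \approx k_2\) the envelope varies on the much coarser scale \(4\pi/|k_1 - k_2|\). The same algebra applies to temporal oscillations, which is the case the abstract emphasizes.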
26 What could possibly count as a physical explanation of consciousness? The view from the inside and the Bekenstein bound. Uzi Awret <uawret@cox.net> (Falls Church, Va.)
In 1992 in the “Times Literary Supplement” Jerry Fodor laments: “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.” 20 years later, in an article destined for the ‘Encyclopedia of Cognitive Science’, Ned Block claims that: “There are two reasons for thinking that the Hard Problem has no solution. 1. Actual Failure. In fact, no one has been able to think of even a highly speculative answer. 2. Principled Failure. The materials we have available seem ill suited to providing an answer. As Nagel says, an answer to this question would seem to require an objective account that necessarily leaves out the subjectivity of what it is trying to explain. We don’t even know what would count as such an explanation.” The purpose of this paper is to respond to Fodor and Block’s challenge by producing a highly speculative physical theory that can count as a possible physical explanation of consciousness. The biggest problem in attempting to conceive of a physical explanation of consciousness is not the irreducible need to sweep certain difficult issues under the carpet. That is true to some degree for any physical explanation. The problem is to conceive of the carpet. The approach taken by this paper will be to: 1) Establish the possible existence of physical singularities in the brain, assumed to be created by informational self-interaction and informational self-collapse, by taking advantage of the shifting and vague line of demarcation separating physical interaction and information-theoretic communication. 2) Adopt John Wheeler and Bryce DeWitt’s ‘black hole bounce’, which allows for the possibility of a whole new universe in the singularity at the center of certain black holes. This will provide us with a ‘view from the inside’ that is completely inaccessible from an ‘outside’ that has no room for it. 3) Subject questions about the nature of that space, especially the possibility of a phenomenal nature, to a radical suspension. A radical suspension is not a temporary suspension employed for tactical reasons but a more permanent suspension of the type that physicists or mathematicians adopt in the exploration of singularities. 4) Use our knowledge of neural architecture and the physics of brains to establish the conditions that would enable the emergence of such singularities based on 1). For example, if some brain region with a volume of one cubic centimeter were made to contain more than 10^60 bits of information it would have to be a singularity because of the Bekenstein bound. 5) Conceive of an experiment that is capable of verifying 4) in real brains and establish the existence of such singularities as a minimal NCC (neural correlate of consciousness). This paper claims that if 1) through 5) are satisfied then it is possible to furnish at least one possible physical explanation of consciousness despite the radical suspension imposed by 3), precisely because singularities can be explored from the outside in the same way that physics can determine the Chandrasekhar limit and the Schwarzschild radius of black holes from the outside. This approach is compatible with Kant’s Transcendental Epistemology, which seeks to determine the scope and limits of knowledge from the inside. (See Janik and Toulmin’s Wittgenstein’s Vienna.) A mature science is one which explores its own limitations. 
Instead of attempting to establish the general conditions of possibility that would have to be satisfied in order to produce a scientific explanation of consciousness the paper will end with a putative token singularity based physical theory of consciousness that is capable of satisfying 1) to 5). C
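For readers unfamiliar with the bound invoked in step 4, the Bekenstein bound in its usual form limits the information content of a region of radius \(R\) enclosing energy \(E\). The expression below is the textbook statement and is given only as a reminder of the bound's form, not as a check of the author's 10^60 figure:

\[
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2} \ \text{bits},
\]

so pushing the information content of a fixed small region past the bound requires packing in so much energy that, on the author's reading, the region must behave like a gravitational singularity.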
27 Identifying the Interaction between the Quantum and Classical World as the Blueprint for Conscious Activity in Cognitive Vision Systems. Wolfgang Baer <baer@nps.edu> (Information Sciences, Naval Postgraduate School, Monterey, California)
I present a physically viable mind/body model based upon Whitehead's assumption that events called “actual occasions” are conscious and fundamental building blocks of the universe. This building block is a process connecting first-person experience with its explanation and is independent of any belief system defining reality for an individual. I will select quantum theory as a physically viable reality belief and will show that in this case consciousness is identified within its measurement and state preparation cycle. I generalize this result by identifying the architecture of the interaction between the quantum and classical worlds as the blueprint for conscious activity. According to this theory, consciousness itself can be modeled by a cycle of activity required to transform a description of experience into a description of the physical reality causing the experience, in any model of reality we choose to believe. It is not the specific model of physical reality but rather the activity of reading from and writing into the model that captures the essence of consciousness phenomena, and such activities can be found in all systems, from microscopic to cosmological scales. As a practical application I will then identify the conscious process in cognitive vision systems being developed to support Unmanned Aerial Vehicle operations at the Naval Postgraduate School in Monterey, CA. By recognizing the conscious process executed by man-in-the-loop systems and identifying the cognitive algorithms being executed, we can automate the process by systematically transferring human to machine operations. I will conclude by presenting the results of target mensuration and vision understanding experiments utilizing sensor-report-to-database explanation transforms that implement Whitehead's actual occasions. C
28 Characteristics of Consciousness in Collapse-Type Quantum Mind Theories Imants Baruss <baruss@uwo.ca> (Psychology, King's University College, London, Ontario, Canada)
Whereas there has been considerable effort expended to develop the technical aspects of quantum mind theories, little attention has been paid to what must be the nature of consciousness for such theories to be true. The purpose of this paper is to rectify that imbalance by looking at some of the apparent characteristics of consciousness in some of the theories in which consciousness is said to collapse the state vector (Baruss, in press, for a review of such theories), on the understanding that decoherence can not entirely solve the measurement problem (Adler, 2003). Three characteristics become immediately apparent. The first is a volitional aspect of the mind that needs to be distinguished from awareness or observation (Baruss, 1986; Walker, 2000). Some insights about this notion of will can be gleaned also from evidence outside the quantum mind context that intention can affect physical systems (e.g., Jahn & Dunne, 2005). The second characteristic is the stratification of consciousness so that the experiential stream that goes on privately for a given person needs to be distinguished from a universal deep consciousness, somewhat akin to David Bohm’s implicate order (Bohm & Hiley, 1993), that might underlie ordinary consciousness. Thus, the question arises regarding quantum mind theories of the relative contributions of deliberately intentional acts that occur within one’s experiential stream (cf. Stapp, 2004; 2005) and nonconscious coordinated intentions implicit in deep consciousness (cf. Goswami, 1993, 2003; Walker, 1970, 2000). Support for introducing such stratification also comes from modelling anomalous human-machine interactions such as the M5 theory of Robert Jahn and Brenda Dunne (2001) as well as from reports of apparently direct participation in such deep consciousness (e.g., Baruss, 2003, Merrell-Wolff, 1994, 1995). Third, in transferring the notion of the collapse of the state vector from the context of observation in experimental physics to manifestation of everyday life, the temporally discrete nature of such collapse is usually retained so that ordinary waking state consciousness would actually be discontinuous. This suggests the possibility of a flickering universe (cf. Matthews, 2000) whereby physical reality, including its spatial features, arises from a pre-physical substrate, perhaps at the rate of once per Planck time. This idea is consistent with efforts to liberate quantum theory from classical restrictions (e.g., Durr, 2005; Aerts & Aerts, 2005; Mukhopadhyay, 2006) and with speculations about Planck-scale physics (cf. Ng, 2003; Ng & van Dam, 2005). Although these particularly need to be judged critically, there are also some reports of the direct apperception of the discontinuous arising of physical reality from a pre-physical substrate in altered states of consciousness (e.g., Wren-Lewis, 1988; 1994). A volitional aspect of mind, the stratification of consciousness, and discontinuity of the ordinary waking state are some of the characteristics of consciousness implicit in some collapse-type quantum mind theories. C
29 A four-dimensional hologram called consciousness James Beichler <jebco1st@aol.com> (Physics, Division of Natural Science and Mathematics, West Virginia University at Parkersburg, Belpre, Ohio)
The reality of a fourth spatial dimension is now being established in science. The fourth dimension of space is magnetic in nature and thus offers a suitable medium for the storage of memories in mind and consciousness. Consciousness also emerges as a holographic magnetic potential pattern in the fourth dimension. When the passage of time is added to the picture, consciousness becomes a holomovement in five-dimensional space-time. The magnetic potential pattern is induced in the higher dimension by the electrical activity of microtubules (MTs). Each MT is an individual quantum magnetic inductor. When successive MTs inside an axon ‘fire’ in sequence they induce a unique and complex magnetic potential pattern in the higher-dimensional extension of the three-dimensional material brain. This pattern of magnetic potential in the higher-dimensional field constitutes holographically stored memories that can be retrieved by the brain through a reverse process. The vast complexity of the different stored memory patterns constitutes the consciousness of an individual. On the other hand, MTs within different neurons, neuron bundles and neural nets also act coherently to form individual thoughts and streams of thought within the brain. Coherence is established as the inductor-MTs in individual neurons act in concert with axon wall-capacitors to form a complex of microscopic LRC (tuning) circuits. Each MT-axon wall circuit resonates with similar MTs in a complex pattern of neurons, thus establishing and maintaining coherence within the brain. C
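The "tuning circuit" language can be anchored to the textbook resonance condition; the formula below is standard circuit theory, while identifying microtubules as inductors and axon walls as capacitors is the author's hypothesis, not an established result:

\[
f_0 = \frac{1}{2\pi\sqrt{LC}},
\]

where \(L\) would be the hypothesised microtubule inductance and \(C\) the axon-wall capacitance. On this picture, coherence across neurons would amount to separate MT-axon wall circuits sharing the same \(f_0\).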
30 Disambiguation in conscious cavities James Beran <jimberan@earthlink.net> (Richmond, Virginia)
Using information-based causal principles to work back from our conscious experience, we can develop models of how consciousness might be produced. This paper discusses one such model that can be tied to features found in cerebral cortex and possibly also in other parts of the brain. In this model, neural signals with ambiguous sensory information are received at an input level of a multi-level structure, and, in response, output neural signals, which can be thought of as disambiguated results, are provided at an output level of the structure; between or around the input and output levels is a region in which neural signals interact with conscious information to disambiguate the sensory information and obtain the results. This combination of features can be modeled as a cavity, by rough analogy to certain optical cavities. Disambiguation has mathematical similarities to separation or collapse of an entangled system (referred to herein as "disentanglement") [1], and these similarities suggest that the disambiguating interactions could include disentanglement events that affect disambiguated results. This paper compares disentanglement effects with other mechanisms that could plausibly affect disambiguation in such a cavity, such as action potentials traveling along lateral axons or electromagnetic effects resulting from action potentials. One point of comparison is whether each type of interaction is consistent with known features of cerebral cortex and other parts of the brain. Another is whether evolution could and did produce neural structures in which conscious information could have each type of interaction; this paper therefore examines mutations that might have enabled DNA to produce such neural structures. Even though we may not find a sharp evolutionary divide between our non-conscious and conscious ancestors, the emergence of such neural structures would suggest when earlier forms of consciousness emerged. [1] Bohm, D. and Hiley, B.J., The Undivided Universe, 1993. C
31 A General Quantum-Gravitational Scaling Strategy Connecting Different-Dimensional Fluxes Bernd Binder <binder@quanics.com> (Quanics.com, Salem, Germany)
The paper will present a unique view of the scaling of different-dimensional quantum fluxes and wave functions, which allows one to understand and predict the geometric structure and dynamics of (neuronal) networks able to interact via local and non-local quantum-gravitational processes. It is nowadays commonly agreed that the weakness of gravity can in general be assigned to extra dimensions (holographic principle). Further, it can be argued that an extra-dimensional interface can provide for the necessary coherence and stability (cooling) for lower-dimensional topologies and structures in a thermodynamic sense. To connect, adjust, or transform different-dimensional flux topologies, it will be shown that it is the intrinsic unit scale (and not the semi-classical Planck scale) that can build the reference bridge between the scaling laws of different fields. Therefore, defining the quantum-gravitational fields carrying this intrinsic unit-scale dynamics ensures that any power-law scaling with or without extra dimensions will intersect at this scale (since any power of 1 is 1). In this manner it can be shown that different-dimensional interaction fluxes follow a general spatio-temporal scaling scheme on all scales, which can be found on the cosmic scale as Kepler's 3rd law and on the quantum scale as Compton's law. The necessary transformations of the general spatio-temporal scaling scheme can be quantified on purely geometric grounds, where the relevant physical properties are the signal dynamics given by the spatio-temporal metric adjusted to the proper number and mass scaling encoding a closed holographic system. Finally, it will be shown that living things, brains, cells, and molecular clusters in the mid-scale are well designed to focus, transform, and project weak extra-dimensional and non-local gravitational fluxes onto strong low-dimensional currents in (neuronal) network channels, pumping, driving, and triggering local electromagnetic processes. C
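The two scaling laws named as end-points of the scheme are usually written as follows; reading "Compton's law" as the Compton wavelength relation is an assumption made here only to anchor the comparison:

\[
T^2 \propto a^3 \quad \text{(Kepler's third law)}, \qquad \lambda_C = \frac{h}{m c} \quad \text{(Compton wavelength)},
\]

each tying a characteristic length to a characteristic time or mass through a simple power law. How such laws are made to intersect at a common intrinsic unit scale is the claim the abstract develops.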
32 Combining prototypes: quantal macrostates and entanglement Reinhard Blutner <blutner@uva.nl> (ILLC, University of Amsterdam, Amsterdam, The Netherlands)
Classical truth-functional semantics and almost all of its modifications have a serious problem in treating prototypes and their combination. Though some modelling variants can fit many of the puzzling empirical observations, their explanatory value is seldom noteworthy. I will argue that the explanatory inadequacy is due to the Boolean character of the underlying semantics, which only allows mixing possible worlds but excludes the idea of superposition crucial for geometrical models of meaning. In the main part, I will present a quantal model of combining prototypes. The model elaborates a recent proposal by Aerts & Gabora (2005) and systematically explores an orthoalgebraic approach to propositions as subspaces of an underlying Hilbert space. The quantum model is a minimalist variant of a classical possible-world approach and rests on four general assumptions: (1) concepts are superpositions of linearly independent base states that conform to possible worlds; (2) typicality is represented by quantum probabilities; (3) combinations of concepts are calculated as tensor products; (4) there is a diagonalization operation involved, which leads to states that entangle the prototypical properties of the involved concepts. I demonstrate that the model can predict the basic findings on combined prototypes without further stipulations. Firstly, this concerns the existence of the “conjunction effect of typicality” (goldfish is a poorish example of a fish, and a poorish example of a pet, but it's quite a good example of a pet fish) and secondly the strength of this effect (in case of "incompatible conjunctions" such as pet fish or brown apple the conjunction effect is greater than in "compatible conjunctions" such as red apple). In the final part, I will reflect on the philosophical background and look for possible generalizations. In agreement with Aerts & Gabora (e.g. 2005), Chalmers (1995), and beim Graben & Atmanspacher (2006), I suppose that the emergence of quantal macrostates does not necessarily require reference to corresponding quantal microstates. Instead, complementary observables (traditionally restricted to quantum systems) can arise in classical systems as well. Crucial is the concept of generating partitions in the theory of nonlinear dynamical systems: a partition is generating if it divides the state space into regions prescribed by the dynamics of the system, thus permitting the definition of states that are stable under the dynamics. Complementary observables can arise in classical systems whenever the partitioning of the corresponding state space is not generating (Graben & Atmanspacher, 2006). The composition of classical systems with generating partitions can lead to a complex system with quantal characteristics. That is true for conjoined prototypes, and it's perhaps also true for semantic systems that combine the effects of contexts and possible worlds (see Kaplan's (1979) two-dimensional semantics of demonstratives). Interestingly, diagonalization is admitted in this case too, whereas certain other operations (“monsters”) are forbidden. Quantum theory can explain the admission of constraints due to the unitary character of quantal evolution. C
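A minimal numerical sketch of assumptions (1)-(3) – concepts as superpositions over basis "worlds", typicality as Born probabilities, combination as a tensor product – might look as follows. The basis labels and amplitudes are invented for illustration, and the abstract's diagonalization step (assumption 4), which produces the entangled state responsible for the conjunction effect, is not reproduced here.

```python
import numpy as np

# A shared basis of exemplar "worlds" (labels are illustrative only).
basis = ["dog", "goldfish", "trout"]

def normalize(v):
    return v / np.linalg.norm(v)

# Assumption (1): concepts as superpositions over the basis states.
pet  = normalize(np.array([0.80, 0.35, 0.10]))   # goldfish: atypical pet
fish = normalize(np.array([0.05, 0.60, 0.75]))   # goldfish: middling fish

# Assumption (2): typicality of an exemplar = Born probability |amplitude|^2.
g = basis.index("goldfish")
print("typicality(goldfish | PET)  =", pet[g] ** 2)
print("typicality(goldfish | FISH) =", fish[g] ** 2)

# Assumption (3): the combined concept lives in the tensor-product space.
pet_fish = np.kron(pet, fish)                # product (unentangled) state
p_joint = pet_fish[g * len(basis) + g] ** 2  # amplitude on |goldfish>|goldfish>
print("typicality(goldfish | PET FISH, product state) =", p_joint)
```

In the plain product state the joint typicality is just the product of the marginals, so it is lower than either one; the conjunction effect reported in the abstract is attributed to the additional entangling diagonalization step, which this sketch deliberately omits.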
33 Toward a new subquantum integration approach to sentient reality Robert Boyd, Dr. Adrian Klein, MDD <rnboyd@iqonline.net> (Princeton Biotechnologies, Inc., Knoxville, TN)
Recent experimental results have proved intractable to explanation by resorting to existing physics paradigms. This fact, along with certain fallacies inherent in mainstream physical-cognitive theories of mind, has encouraged the authors of this paper to transcend the currently operative limits of investigation, and thus to explore the abyssal depth of the still uncharted, but highly rewarding, SubQuantum regimes. The subquantum is herein assumed to co-existentially accommodate proto-units for matter, energy and Information, which are thereby brought onto an equal ontological footing in the subquantum domains. Devolving its argumentation and orientation from the Nobel Prize winning Fractional Quantum Hall Effect, which opened the perspective toward a further divisibility of the Quantum domain, hitherto considered an irreducibly fundamental description of nature, the hereby proposed inter-theoretic model claims to satisfy advanced scientific and philosophical requirements, as reformulated for a conceptually new working hypothesis. Subquantum potentials evolving in the Prime Radiation Matrix result in organizing functions able to interfere with classical local determinacy chains, operating at the Quantum levels of randomness inherent in space-time-like matter configurations, leading to highly complex representational patterns, linked to their phenomenal correlates in macroscopically detectable systems. Our model is strongly rooted in overwhelming experimental evidence derived from multidisciplinary contexts. Our basic understanding identifies the Quantum Potential as a superluminal SubQuantum Information-carrying aether able to interact with matter and physical forces at well-defined space-time positions, injecting their Information content into our world of observables by modulating the event potential. This interaction is possible as soon as matter is defined by an n-degree entanglement state of SQ complexity. Absolute void refers to a lack of matter, which equates to a space-time sequence containing Information in its nascent, non-aggregative form (the SubQuantum plenum) as observed from our Space-Time perspective. It contains implicated layers of increasingly subtle pre-quantum domains, where each manifestation range may be organized into complete worlds, such as our own, each of them extending to its own "absolute void", the transition state to the next implication level of reality. Pre-quantum tenets rely upon experimentally testable assessments. Our proposal has a strong outreach into unprecedented explanatory options for anomalous output data distribution in non-conventional exploration fields, whose statistically significant results become logically integrated into epistemologically sustainable blueprints. Our views are perfectly consistent both with conventional empirical treatment of space-time-defying representational variables, and their causal primacy upon Quantum implementation systems of their content, in the integral range of their polyvalent manifestation. Detailed descriptions of mind/matter entanglement patterns are supplied, as running in the holistic superimplicative sentient reality domains, under the overarching regulation of Cosmic Harmony, underpinning a continuous creation cosmogenetic process. As our analysis addresses a pre-temporal range, the thus defined endless time vector allows ab-initio existing inherent resonance links in any SQ subtlety domain to turn into fluxes and organization effects leading to sequential entelechial self-contained worlds.
These primeval harmonic SQ resonances are the very pattern of our overarching cosmic harmony just mentioned, the source of all conceivable manifestation and interconnectedness. C
34 The Big Condensation-Not the Big Bang R.W. Boyer <rw.boyer@yahoo.com> (Fairfield, IA)
R. W. Boyer, Girne American University, Girne, Northern Cyprus. According to the consensus cosmological theory of the inflationary ‘Big Bang,’ the universe originated, presumably instantaneously from nothing, as an inherently dynamic, randomly fluctuating, quantum particle-force field that eventually congealed into stars, planets, and organisms such as humans complex enough to generate consciousness. This fragmented, reductive materialistic view is associated with a bottom-up matter-mind-consciousness ontology, in which the whole is created from combining the parts. In this view, consciousness is an emergent property of random bits of energy/matter that somehow bind into unitary biological organisms mysteriously developing control over their parts. On the other hand, the holistic perspective in Vedic science is a top-down consciousness-mind-matter ontology, in which the parts manifest from the whole. In that perspective, the origin of the universe is better characterized as the ‘Big Condensation’ rather than the ‘Big Bang.’ Phenomenal existence remains within the unified field and manifests, limits itself, or condenses into subjective mind and objective matter. The holistic perspective of ultimate unity and its sequential unfoldment is contained in the structure of Rik Veda.1 Vedanta is from the experiential perspective of unity, and the sequential unfoldment of phenomenal levels of nature within unity is articulated, for example, in Sankhya and Ayurveda. The holistic perspective is more consistent with the developing understanding of unified field theories, spontaneous symmetry breaking, quantum decoherence, the ‘arrow of time,’ and the 2nd law of thermodynamics, which imply the universe originated from a lowest-entropy, super-symmetric, even perfectly orderly, super-unified state. The holistic perspective in Vedic science provides means for resolving fundamental paradoxes in the reductive, materialistic, bottom-up ontology, including the ‘hard problem’ of consciousness, order emerging from fundamental random disorder, life emerging from non-life, free will, and everything emerging from nothing.2 C
36 Examining the Effect of Physiological Temperature on the Dynamics of Microtubules Travis Craddock, Jack A. Tuszynski <tcraddoc@phys.ualberta.ca> (Physics, University of Alberta, Edmonton, Alberta, Canada)
The leading objection against theories implicating quantum processes taking place within neuronal microtubules states that the interactions of a microtubule system with an environment at physiological temperature would cause any quantum states within the system to decohere, thus destroying quantum effects. Counter-arguments state that physiologically relevant temperatures may enhance quantum processes, and that isolation of microtubules by biological mechanisms, such as actin gel states or layers of ordered water, could protect fragile quantum states, but to date no conclusive studies have been performed. As such, working quantum-based models of microtubules are required. Two quantum-based models are suggested and used to investigate the effect of temperature on microtubule dynamics. First, to investigate the possibility of quantum processes in relation to information processing in microtubules, a computer microtubule model inspired by the cellular automata models of Smith, Hameroff and Watt, and Hameroff, Rasmussen and Mansson is used. The model uses a typical microtubule configuration of 13 protofilaments with its constituent tubulin proteins packed into a seven-member neighbourhood in a tilted hexagon configuration known as an A-lattice. The interior of the tubulin protein is taken to contain a region of two areas of positive charge separated by a barrier of negative charge and is based on electrostatic maps of the protein interior. The interior arrangement constitutes a double-well potential structure within which a mobile electron is used to determine the states of an individual tubulin dimer. Dynamics of the system are determined by the minimization of the overall energy associated with electrostatic interactions between neighbouring electrons as well as thermal effects. Classically, the model allows transitions for electrons with sufficient energy to overcome the potential barrier in which the new configuration lowers the system’s energy, or, if the configuration raises the system’s energy, with a finite probability. Quantum mechanically, the model allows the electron to tunnel through the potential barrier, allowing transitions for which the system’s energy is lowered even if the electron does not possess the necessary energy to overcome the potential barrier, or, for configurations that raise the system’s energy, with the same finite probability as in the classical scenario. The emergence of self-organizing patterns that are static, oscillating, or propagating in time is taken as the determining factor of the system’s capability to process information. Second, to further the investigation of quantum processes taking place in microtubules, an exciton model of the microtubule is used. Tubulin monomers are taken as quantum well structures containing an electron that exists in its ground state or 1st excited state. Following previous work that models the mechanisms of exciton energy transfer in Scheibe aggregates, the issues of determining the strength of exciton and phonon interactions, and the effect on the formation and dynamics of coherent exciton domains within microtubules, are discussed. Estimates of energy and time scales for excitons, phonons, their interactions and thermal effects are also presented. C
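As a rough illustration of the first (cellular-automaton) model described above, the following is a speculative toy sketch rather than the authors' code: a small lattice of two-state "double-well" sites is updated by Metropolis-style moves, with a classical rule that additionally requires enough thermal energy to cross the barrier and a "tunnelling" rule that waives the barrier for energy-lowering flips. The lattice geometry, coupling, barrier height and temperature are illustrative stand-ins for the electrostatic model in the abstract.

```python
# Minimal sketch (not the authors' code) of a tubulin-lattice cellular automaton.
# Assumptions: a square lattice stands in for the 13-protofilament A-lattice, a
# two-state variable stands in for double-well electron occupancy, and J, E_b, kT
# are illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)
N, J, E_b, kT = 13, 1.0, 2.0, 0.5        # lattice size, coupling, barrier, thermal energy
state = rng.choice([-1, 1], size=(N, N)) # -1/+1 = electron in left/right well

def local_energy(s, i, j):
    """Electrostatic-style interaction with the four nearest neighbours (toy stand-in)."""
    nb = s[(i-1) % N, j] + s[(i+1) % N, j] + s[i, (j-1) % N] + s[i, (j+1) % N]
    return J * s[i, j] * nb

def step(s, quantum=False):
    """One asynchronous update sweep: classical hopping vs. tunnelling-assisted hopping."""
    for _ in range(N * N):
        i, j = rng.integers(N, size=2)
        dE = -2 * local_energy(s, i, j)           # energy change if the electron flips wells
        thermal = rng.random() < np.exp(-max(dE, 0) / kT)
        if quantum:
            # tunnelling: energy-lowering flips allowed even without barrier energy
            allowed = (dE < 0) or thermal
        else:
            # classical: the electron must also have enough thermal energy to cross E_b
            allowed = (rng.random() < np.exp(-E_b / kT)) and ((dE < 0) or thermal)
        if allowed:
            s[i, j] *= -1
    return s

for _ in range(50):
    step(state, quantum=True)
print("final mean well occupancy:", state.mean())
```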
36 Consciousness As Access To Active Information: Progression, Rather Than Collapse, Of The Quantum Subject Jonathan Edwards <jo.edwards@ucl.ac.uk> (Medicine, University College London, London, England)
The link between consciousness and quantum theory often draws on the views of von Neumann on wave function collapse. From a biological standpoint several arguments favour a different approach. Any quantum mechanical process involved needs to link in to classical biophysics and the most plausible route is through the correspondence principle (as Feynman’s QED life history of a photon scales up to classical diffraction by Young’s slits). In this scaling up, wave function collapse loses significance, the dynamics being dictated by the laws of linear progression (von Neumann type 2, rather than type 1). Moreover, wave function collapse is not required by all interpretations of QM, a widespread view being that it is neither useful nor meaningful to divide the quantum system into arbitrarily defined ‘sub-processes’. There are also severe difficulties in defining the boundaries of the ‘quantum system’ with wave function collapse or decoherence approaches. Linear progression through a physical environment (Young’s slits, brain) involves an interaction with the environment which entails access by the quantum system (e.g. photon) to what Bohm and Hiley usefully call ‘active information’ about its environment. Access to information is both an indivisible and a bounded phenomenon. Since consciousness appears to be a state of access to a rich, indivisible, yet bounded, pattern of information this makes access to active information at the quantum level an attractive explanation. In macroscopic structures the life histories of quantum systems represented by particles with rest mass, such as electrons, with wavelengths close to the size of atoms, are both too 'fine-grained' and too biologically irrelevant to be plausible as ‘quantum-dynamic subjects’ accessing the active information that would be our experience of the world. However, massless bosons such as photons and acoustic phonons, with much longer wavelengths, might be candidates. Fields or modes of large numbers of such bosons can mediate classical mechanical effects and lose nothing of their indivisibility of acquisition of information in doing so. No form of phase coherence is required for this aspect of QM to apply on a large scale. The implied identity of the ‘quantum-dynamic subject’ might upset philosophers, but that can happen with biology. Phononic modes in cell membranes may be attractive candidates for quantum-dynamic subjects because their functional wavelengths could match the micron scale at which electrical information is held in neuronal dendrites and the known piezoelectric properties of the membrane would allow coupling of electrical information (and not irrelevant ‘cell housekeeping’ processes) to the phononic mode. Recent thermodynamic reassessment of the action potential suggests that electromechanical coupling may be integral to membrane excitability. Electromechanically coupled modes are documented in neurons in the inner ear. Whether such modes can, or should, involve groups of cells is uncertain. Relevant phononic modes in cortical neurons would be at or beyond the limit of current direct detection methods but might be probed indirectly with e.g. anaesthetics or calcium levels. Standing wave modes based on local longitudinal ‘dendritic telescoping’, possibly linked to cytoskeletal microtubules, might be the most plausible. C
37 Existence and consciousness Peter Ells <peterells@hotmail.co.uk> (Oxford, UK)
Stephen Hawking (1988) wrote, “What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to the bother of existing?” This paper cannot answer these “What” or “Why” questions. Instead it asks, “What do we mean when we say that our universe actually exists, and how does this concept of actual existence take us beyond mere mathematical existence?” The paper considers various types of existence: experiential existence of experiential beings possessing subjective, qualitative, perceptual states (that do not necessarily amount to thinking states); physical existence of external objects that can be inferred by collating the percepts of experiential beings; material existence of entities obeying physical laws without reference to experiential beings; and finally mathematical existence, which is merely formal description that is logically consistent. There might not be any life elsewhere in our universe, and it is quite conceivable that, had the history of our planet been slightly different, life might never have emerged here. In these circumstances, our universe would have completed its history lifeless, and thus (according to the dominant viewpoint) only ever have contained entities with material existence. In such circumstances the problem arises that material existence (as will be shown) collapses into mere mathematical existence. We can be very confident that we and our universe have more than mere mathematical existence, and so something must be wrong. The solution I argue for here is that all material existence must in fact be experiential existence, and so all matter is subjective and experiential in its essence. From a study of what it means for a universe actually to exist, I thus arrive at panpsychism. A dodecahedral universe is used as an example to show how conceptually simple experiential beings might be. Finally, I sketch in very general terms how the well-known, problematic characteristics of quantum theory are in harmony with panpsychism. Hawking, S. (1988), A brief history of time (London: Bantam Press). C
38 Does microbial information processing by interconnected adaptive events reflect a pre-mental cognitive capacity? Gernot Falkner, Kristjan Plaetzer; Renate Falkner <Gernot.Falkner@sbg.ac.at> (Organismic Biology, University of Salzburg, Salzburg, Austria)
We discuss possible cognitive capacities of bacteria, using a model of microbial information processing that is based on a generalized conception of experience, from which all traits characteristic of higher animals (such as consciousness and thought) have been removed. This conception allows the experience of an organism to be related to the phenomenon of physiological adaptation, defined as a process in which energy-converting subsystems of a cell are conformed – in an interconnected sequence of adaptive events – to an environmental alteration, aimed at the attainment of a state of least energy dissipation. In adaptive events the subsystems pass, via an adaptive operation mode, from one adapted state to the next. An adaptive operation mode occurs when a subsystem is disturbed by an environmental alteration. In this mode the environmental change is interpreted with respect to a reconstruction that appears to be useful in the light of previous experiences. Connectivity exists between adaptive events in that the adapted state resulting from an adaptive operation mode stimulates adaptive operation modes in other subsystems. When in these systems adapted states have been attained, the originally attained adapted states are no longer conformed and have to re-adapt, and so on. In this way adaptive events become elements of a communicating network, in which, along a historic succession of alternating adapted states and adaptive operation modes, information pertaining to the self-preservation of the organism is transferred from one adaptive event to the next: the latter “interprets” environmental changes by means of distinct adaptive operation modes, aimed at preservation of the organism. The result of this interpretation again leads to a coherent state that is passed on to subsequent adaptive events. A generalization of this idea to the adaptive interplay of other energy-converting subsystems of the cell leads to the dynamic view of cellular information processing in which an organism constantly observes its environment and re-creates itself in every new experience. This model of cellular information processing is exemplified in the adaptive response of cyanobacteria to external phosphate fluctuations. It is shown that adaptive processes have a temporal vector character in that they connect former with future events. On the one hand they are influenced by antecedent adaptations, so that in this respect a cellular memory is revealed in adaptive processes. On the other hand they bear an anticipatory aspect, since adaptation to a new environmental situation occurs in a way that meets the future requirements of the cell. A computer model of the intracellular communication about experienced environmental influences made it possible to simulate the experimentally observed adaptive dynamics when, during the simulation, the program altered the parameters of the model in response to the outcome of its own simulation. Falkner R., Priewasser M., & Falkner G. (2006): Information processing by Cyanobacteria during adaptation to environmental phosphate fluctuations. Plant Signaling and Behaviour, 1, 212-220. Plaetzer K., Thomas S. R., Falkner R., & Falkner G. (2005): The microbial experience of environmental phosphate fluctuations. An essay on the possibility of putting intentions into cell biochemistry. J. Theor. Biol. 235, 540-554. C
39 Mind backward paths: from axons to dendrites passing through quantum memories Alberto Faro, Giordano Daniela <albfaro@gmail.com> (Ingegneria Informatica e Telecomunicazioni, Universita' di Catania, Catania, Italy)
Neural networks in the brain convey forward signals from dendrites to axons, whereas backward paths have not been identified yet. This makes it difficult to explain how the mind, an open system mutually dependent on the environment, reaches equilibrium states with the surrounding context. In a previous work the authors proposed five hypotheses envisaging a model (i.e., the Frame Model of the Quantum Brain) in which the adaptation between self and environment is regulated by a high-order cybernetics loop without entailing any “entity” in mind. This paper refines the five hypotheses, proposing that quantum memories have a role in implementing the backward paths from axons to dendrites as follows: • Human activity is sustained by two quantum fields, i.e., the cortical and ordering fields produced by the vibrations of the myriad of dipoles existing at the neuronal and cytoplasm level, allowing the subjects to enact each action (coded by an Umezawa corticon) of a scene (coded by a Faro & Giordano orderon) depending on the performed actions and the planned ones. Awareness of the scene is only achieved a posteriori, when the scene has been concluded without contradicting the initial hypothesis. This extends the notion of “backward time referral”. • The orderons are classified according to their regularities by a Clustering Quantum Field (CQF) produced by the vibrations of dipoles at the dendrite level. This generates an ontological space whose axes are coded by CQF particles (i.e., Faro & Giordano’s clusterons). • The problem at hand and some external representation selectively activate the mRNAs on the dendrites, which in turn activate the axons of the related neuronal groups. The excitation of the postsynaptic potentials generates a global EEG profile together with the emission of photons specific to the given input. These photons activate a set of orderons (coded by vacuum states). This explains why the received inputs direct attention towards areas of the ontological space containing scenes having some analogy with the situation hypothesized by the subject. The collapse of the activated vacuum states towards the state representing the prevailing scene produces the emission of photons that inhibit or reinforce the synthesis of the proteins on the dendrites. This loop evolves until the stimuli received and the codification of the information perceived by the self in correspondence to these stimuli are each the mirror of the other in the DQBM (Dissipative Quantum Brain Model) sense. • If the subjects recognize that they are not experienced in dealing with the current situation, a new scene and related orderon are created consciously by cross-over and mutation of relevant existing scenes. The inputs of the new scene will reactivate, in future similar situations, the zones of the ontological space containing the scenes that originated the new one. • The external representations mediate the communication of the scenes among people in order to create conventions and rituals that are at the basis of social life. Empirical evidence at the basis of the model and hypotheses to be tested will be pointed out, thus identifying the lines of future work. C
40 Differentials of Deep Consciousness: Deleuze, Bohm and Virtual Ontology Shannon Foskett <foskett@uchicago.edu> (University of Chicago, London, Canada)
This paper will explore the relevance for the study of consciousness of the surprising relationship between David Bohm’s Implicate Order and the ontological thought of late French philosopher Gilles Deleuze. The uncanny connection between Bohm’s thought and the oft-misrepresented work of various “postmodern” philosophers such as Derrida or Lacan has been addressed most notably by mathematician and cultural theorist Arkady Plotnitsky. Plotnitsky’s work, however, stops short of looking at Deleuze and does not consider the relationship to consciousness. I would like to suggest the mutual relevance of Deleuze and Bohm for scholars of their work, but also, and more importantly, the new flexibility that their combined vision might offer for theorizing consciousness in wider disciplinary contexts and in conjunction with existing notions of consciousness in the humanities. This ability to address more prevalent conceptions of consciousness in the academic community will be in increasing demand as empirical research on consciousness matures. Fortunately, there already exists an intuitive understanding on the part of some humanities scholars of an implicit relationship between quantum theory and ideas within what can be loosely considered as “postmodern” thought. Bohm’s “holomovement” and “implicate order” express much the same ideas as the notion of intensive depth in Deleuze. Both sets of terminology describe being as a process of (en)folding and unfolding. Deleuze even uses the same descriptor, referring to intensive depth as “an implicated order of constitutive differences.” This depth corresponds to the infinite nature of the wave form of each potential particle. In a quantum field theory context, the situation is described in terms of an infinite overlapping of fields, where the field replaces the sub-atomic particle as the “ultimate, fundamental concept in physics, because quantum physics tells us that particles (material objects) are themselves manifestations of fields.” This set of all matter waves is nothing but Deleuze’s pure spatium, from which “emerge at once the extensio and the extensum, the qualitas and the quale.” Being, in its intensive depths, is drawn out, or explicated, through a motion of different/ciation that produces it as extensity. This causes intensity to appear “outside itself and hidden by quality.” For Bohm, the explicate order is also a merely limited case of the implicate order. I will argue that Deleuze’s unique concept of the Idea as a particular point of intensity within the Implicate may be a theoretical placeholder for phenomena in quantum-based models of consciousness. Finally I will discuss how Deleuze’s model contributes to Bohm’s with an understanding of what role of chance processes might play within various levels of consciousness. C
41 Intensity of awareness and duration of nowness Georg Franck, Harald Atmannspacher <franck@iemar.tuwien.ac.at> (Digital Methods in Architecture and Planning, Vienna University of Technology, Vienna, Austria)
It has been proposed to translate the mind-matter distinction into terms of mental and physical time. In the spirit of this idea, we hypothesize a relation between the intensity of awareness in mental presence and a crucial time scale (some ten milliseconds) relevant for information updates in mental systems. This time scale can be quantitatively related to another time scale (some seconds) often referred to as a measure for the duration of nowness. This duration is experimentally accessible and offers, thus, a suitable way to characterize the intensity of mental awareness. Interesting consequences with respect to the idea of a generalized notion of mental awareness, of which human consciousness is a special case, will be outlined. C
42 Overcoming Discontinuity and Dualism in Modern Cosmology Mary Fries <mfries@ciis.edu> (Philosophy, Cosmology, and Consciousness, California Institute of Integral Studies, Oakland, California)
Begun as an explanation for the stepwise emittance and absorption of energy observed in physical systems, quantum mechanics, by its very name, asserts the discontinuity of matter, a modern atomism that influences the development of current attempts to unite quantum mechanics and general relativity. The ensuing schemata of superstring theory and loop quantum gravity reinforce our tendency to objectify the foundations of an evolving reality, and while, via these ideas, we have transcended the billiard-ball notion of point-like particles, we have in no way evaded reductive abstraction. The spatiotemporal-limitations of human form justify this natural tendency toward generalization, yet this predisposition still recurrently hinders scientific progress. While formulaic abstractions do no harm in so far as we recognize them as limitations of our assumptions, in order to truly integrate quantum mechanics and relativity, we will need to overcome our expectation of subatomic happenings to mirror the behavior of macroscopic bodies. According to modern theory, spin nets or strings (depending on the model used), the supposed 'fundamental particles' of reality, form the very fabric of the universe. They do not embed themselves within space-time; they define space-time. Hence, a supposition of their discreteness implies discreteness of both time and space. Planck's contribution of a 'smallest size' and a 'smallest time', Planck length and Planck time respectively, fortifies the discretization of reality, as does Heisenberg's uncertainty principle by placing a lower limit on our capability to conduct measurement. But do a handful of constants and a threshold to our investigations justify delimiting our work by potentially premature quantification of the natural universe? History abounds with cases of simplifications of mind being finally overturned by less intuitive explanations. The redefinition of Bohr's atomic model, the discovery of cosmic inflation, and perhaps the most popularized realization of the earth as a round satellite of the sun all required significant mental reorientation to the cosmos. Quantum mechanics continues to baffle those seeking to assimilate its implications into minds predisposed to entirely different logic and causal relationships. As every abstraction is by definition a limitation, it may well be the case that, in much the same way, our attachment to quanta holds us back from an integration of the four forces. But would such a re-envisagement of the 'fundamental particles' necessarily imply a continuous universe instead? Perhaps, but while certain problems are more easily formulated from within the framework of such a dualism, it may well be the case that the much-anticipated union will occur to those who refuse to be bound, to those who come to view reality as organism, perhaps with a mixture of continuity and breaks such as black holes and the seeming origin of the universe, as a universe that favors its own direction over constructions of the human mind. Within a more accommodating model, the flexibility of the wave and the stability of the particle may be formulated in a higher-order abstraction with broader limitations and wider reconciliations wherein mind can be finally integrated as a fundamental component of reality. C
43 Modeling Consciousness in Complex Spacetime Using Methodology of Quantum and Classical Physics. Anatoly Goldstein <a_goldshteyn@yahoo.com> (Voice Center, Massachusetts General Hospital, Boston, MA)
It is argued that even if quantum mechanical formalism does not directly apply to consciousness mechanisms, the methodology used for solution of Schrödinger equation and its interpretation may be very useful for modeling of consciousness. According to I. Thompson (2002) Hamiltonian and wave function of Schrödinger equation resulting in probabilities of observation outcomes correspond to conscious activities such as intentions and thoughts resulting in actions. R. Penrose & W. Rindler (1984) indicated that "space-time geometry, as well as quantum theory, may be governed by an underlying complex rather than real structure". A geometric model of consciousness (E. Rauscher & R. Targ, 2001) shows importance of imaginary space and time coordinates in interpretation of non-local consciousness phenomena such as remote viewing and precognition. The current author is suggesting to model information dynamics of consciousness with a complex function in complex spacetime. This automatically accounts for the ability of consciousness/awareness to access imaginary coordinates of complex spacetime. Max Born formula shows how one can extract real-valued observable data from the complex-valued function that might be applicable to modeling of consciousness. Consciousness is commonly considered to be directly related to vibration processes such as brainwaves, electrical activity in neural membranes. It is suggested to model these processes with a linear combination of complex exponents (CE), similar to complex form of Fourier expansion, see K. Pribram (2003). A single CE represents a solution of classical harmonic oscillator problem in complex spacetime. If we assume that human intention focus can be in zero approximation modeled by a virtual particle that we call intenton and describe the behavior of intenton in human brain/body with a known quantum mechanical model of a particle in 3D box, we are also arriving at a solution containing CE. Group theoretic aspects of modeling consciousness-related vibrations with CE are considered. If we assume that human consciousness is supported in part by tachyons rotating around human body, then precognition may be possible due to the ability of the superluminal tachyon to cross its own past light cone (move backwards in time). This hypothesis is consistent with results of M. Davidson's (2001) numerical simulation of tachyon circular (in space) & helical (in spacetime) movement based on Feynman-Wheeler electrodynamics seemingly confirmed in its J. Cramer's (1986) version by S. Afshar (2004) experiment. Role of entropy, information, and symmetry in modeling of moral aspects of consciousness is considered. The author is suggesting a mechanism of reverse psychology (reactance) based on Faraday's law of electromagnetic induction applied to interaction of two or more minds. Following A.& A. Fingelkurts (2001), the minds in the suggested mechanism are represented by human brain biopotential fields. Based on K. Pribram's (1987) holonomic brain theory the current author suggests that neural oscillations interference may be responsible not only for the memory mechanisms of image storage/retrieval, but also potentially for the very essence of active operational function of consciousness. 
Specifically, if we attempt to establish a correspondence between waves (characterized by frequency, amplitude and phase) and elementary ideas (e.g., the idea of a number), then we can conclude that interference of coherent waves in the brain may be responsible for, or at least closely related to, the ability of consciousness to add numbers, while interference of pi-phase-shifted brainwaves might support the conscious operation of subtraction. It remains to be seen whether the author's natural hypothesis can be confirmed, namely that brain math, logic and information processing/thinking in general are based on interference of neural oscillations and on K. Pribram's storage in/retrieval from memory of the resulting interference patterns. C
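The addition/subtraction claim can be illustrated with an elementary superposition of sinusoids. The snippet below is my own sketch (amplitudes and frequency are arbitrary), not part of the author's model: two in-phase waves superpose to a peak amplitude near the sum of the individual amplitudes, while a pi phase shift yields a peak near their difference.

```python
# Illustration of "in-phase interference ~ addition, pi-shifted interference ~ subtraction".
# All numerical values are illustrative.
import numpy as np

t = np.linspace(0, 1, 1000)
f = 10.0                      # 10 Hz "alpha-like" oscillation
a, b = 3.0, 2.0               # amplitudes standing in for two "numbers"

wave_a = a * np.cos(2 * np.pi * f * t)
wave_b_inphase = b * np.cos(2 * np.pi * f * t)            # coherent, same phase
wave_b_shifted = b * np.cos(2 * np.pi * f * t + np.pi)    # pi-phase-shifted

sum_wave  = wave_a + wave_b_inphase   # peak amplitude ~ a + b  ("addition")
diff_wave = wave_a + wave_b_shifted   # peak amplitude ~ a - b  ("subtraction")

print(round(sum_wave.max(), 2), round(diff_wave.max(), 2))   # ~5.0 and ~1.0
```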
44 Quantum Mechanics, Cosmology, Biology and the seat of Consciousness Maurice Goodman <maurice.goodman@dit.ie> (School of Physics, Dublin Institute of Technology, Dublin 8, Ireland)
All fundamental particles and structures obey the uncertainty principle. If we ignore particles and structures traveling at close to the speed of light (c) (i.e. >0.9c), the maximum uncertainty in momentum is of order mc, where m is the mass of the structure/particle. This implies there is a minimum region of space such particles and structures can be confined to without violation of the uncertainty principle. Furthermore, the mass of key structures found in nature generally varies in proportion to R^2, where R is size, and not R^3 as might be expected. By assuming all fundamental particles also obey this relation, a sequence of “minimum” masses (M) can be calculated, one from another, using M(n+1) = h/(c*Rn) (n = 0, +/-1, +/-2, ...), where h is Planck’s constant. These coincide with the fundamental particle/structure masses found in nature over 80 orders of magnitude of mass. This allowed a prediction for the neutrino mass, 20 years ago, that recent experimental results agree with. The above mass sequence insists on a direct link between Biology and the cell on the one hand and the neutrino and the weak force on the other. No one can seriously buy into the notion that the millions of millions of complex molecules within a cell exchange information and organize themselves by nearest-neighbour interactions only. The “hand in glove” sine qua non of all molecular transfers of information in biology is simply not sufficient to explain overall co-ordination within and between cells. There must also be almost instantaneous, long-range communication to prevent chaos. Quantum coherence is an attractive candidate here. The range (r) at which quantum coherence ceases is given by r = h/(3mkT)^0.5, where m is the mass of the particles involved, T is the absolute temperature and k is Boltzmann’s constant. The lightest particle associated with chemical processes is the electron, and this limits r to less than 10^-8 m for all electromagnetic processes at room temperature. This is too short for cellular and intercellular communication and information transfer. The equivalent range (r) for neutrinos at room temperature is less than 10^-4 m, which is the scale on which neurological processes occur. Therefore, if quantum effects are at the root of consciousness, in the mind, then they are more likely to relate to the neutrino and weak force rather than the electron and the electromagnetic force. Neutrinos would also provide the two necessary characteristics of the substrate for quantum computation, i.e. insulation from the cell sap (electromagnetic processes) to allow for quantum entanglement and the possibility of intercellular continuity to allow for multicellular quantum coherent states. While the input/output signals to/from the mind are clearly electromagnetic processes, the “processing” of these signals could conceivably be based on the half-spin “quantum bit” neutrino. The linchpin between the electromagnetic inputs/outputs and the processing in the mind would be spin. In short, the mind may exhibit consciousness as a result of the weak force and neutrino and not the electromagnetic force and the electron. C
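The two order-of-magnitude estimates quoted above follow directly from r = h/(3mkT)^0.5. The snippet below reproduces them; note that the neutrino mass used (0.05 eV/c^2) is an assumed, illustrative value, since the abstract does not state which mass its estimate is based on.

```python
# Coherence-range estimates from r = h / (3 m k T)^0.5 at room temperature.
# The neutrino mass below is an assumption for illustration, not a measured value.
import math

h  = 6.626e-34      # Planck's constant, J*s
k  = 1.381e-23      # Boltzmann's constant, J/K
T  = 300.0          # room temperature, K
eV = 1.602e-19      # J per eV
c  = 2.998e8        # speed of light, m/s

def coherence_range(m_kg, T=T):
    return h / math.sqrt(3 * m_kg * k * T)

m_e  = 9.109e-31                 # electron mass, kg
m_nu = 0.05 * eV / c**2          # assumed neutrino mass ~0.05 eV/c^2

print(f"electron:  r ~ {coherence_range(m_e):.1e} m")   # ~6e-9 m  (< 10^-8 m)
print(f"neutrino:  r ~ {coherence_range(m_nu):.1e} m")  # ~2e-5 m  (< 10^-4 m)
```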
45 Time Reversal Effects in Visual Word Recognition Anastasia A. Gorbunova; Samuel Levin <gorbunov@email.arizona.edu> (Psychology, University of Arizona, Tucson, AZ)
The present study investigated time-reversal effects in visual word recognition using a traditional technique called lexical decision with masked priming. In this paradigm the subject is presented with strings of letters of various durations on a computer screen. The first string is a forward mask (usually a sequence of non-linguistic symbols such as hash-marks), which is followed by the target letter sequence. The subject's task is to decide whether the target letter sequence is a word or not. A prime, usually related (e.g. one letter different from the target) or unrelated (e.g. all letters different from the target), is presented briefly after the forward mask and before the target. The subject is usually unaware of the prime. In this type of experiments, it has been shown that presentation of a related prime facilitates the processing of the target thereby producing faster reaction times when compared to trials where the target is preceded by an unrelated prime. The current study attempted to move beyond conventional applications of this paradigm by introducing a post-prime that followed the target in addition to the common pre-prime that precedes the target. The latter addition was aimed at exploring some of the current ideas of time and retro-causation by comparing the amount of priming obtained in the following conditions: (i) a 50 ms either identical or unrelated pre-prime with a dummy post-prime (presented as a row of x's), (ii) a 30 ms identical pre-prime with either a 30 ms identical or a 30 ms unrelated post-prime, (iii) a 30 ms unrelated pre-prime with either a 30 ms identical or a 30 ms unrelated post-prime, and (iv) a 50 ms either identical or unrelated post-prime with a dummy pre-prime. Additionally, half of the words in this experiment were emotional (e.g. murder) and the other half were neutral (e.g. garden). This was done to test whether emotional words would produce more priming either in the pre-prime, the post-prime, or both conditions, than neutral ones. The results of this study are intended to shed light on the influences of emotional states on visual word recognition, as well as provide evidence for small-scale temporal reversal effects in conscious and unconscious processes. C
46 Integral Aspects Of The Action Principle In Biology And Psychology: The Ultimate Physical Roots Of Consciousness Beyond The Quantum Level Attila Grandpierre <grandp@iif.hu> (Konkoly Observatory of the Hungarian Academy of Sciences, Budapest, Zebegeny, Hungary)
During the last centuries it has become more and more clear that the highest achievement of modern physics is its most fundamental law, the action principle. The action principle itself is not understood, its physical content is obscure, and its integral character is ignored. Here we consider the nature of action and find that it has a biological nature. We point out that the action principle usually takes a minimum value in physical systems, while in biological organisms it usually takes its maximal value. Therefore, we could recognize in the already established action principle’s most general form the first principle of biology. We show that biological organisms first employ its maximum version to determine the biological endpoint, and when the endpoint is determined on a biological basis, the realization of the physical trajectory occurs on the basis of the minimum version. We demonstrate that it is the hitherto ignored integral character of the action principle which serves as the ontological basis of the unity of living organisms, offering a wide variety of physical processes not considered yet because of their biological and teleological nature. We found a new interpretation of the classic two-slit experiment of quantum mechanics, offering a new, causal interpretation of quantum physics that connects it in a fundamental way with biological processes. We show that the biological form of the action principle acts in the realm beyond quantum physics and represents a new frontier of science. It offers integral principles and quantitative methods to determine biological equations of motion of living organisms, therefore making it possible to extend the range of modern science and develop a real theoretical biology. We present fundamental equations of biology, numerical methods and examples, propose new experiments, and present experimental predictions. We derive from the biological principle such fundamental life phenomena as self-initiated spontaneous macroscopic activity, regeneration, regulation, homeostasis, and metabolism. We present detailed evidence on the concrete physical aspects of elementary consciousness of quanta, such as instantaneous quantum orientation of quanta in their environment, behaving “as if” they “know” about the whole situation, having collective memory, and showing an ability to learn. Clarifying the concrete physical aspects of consciousness, science becomes able to approach consciousness and self-consciousness on a mathematical, physical and biological basis. In this way, it seems we can enter a new era of quantitative biology and psychology above the molecular level, based on biology meeting physics below the quantum level. C
47 Neuro-quantum associative memory for letter-strings and faces Tarik Hadzibeganovic, Chu Kiong Loo (Faculty of Engineering and Technology, Multimedia University, Melaka, Malaysia) <ta.hadzibeganovic@uni-graz.at> (Language Development & Cognitive Science, University of Graz, Graz, Austria)
We present an integrative, two-stage complex-valued neuro-quantum hybrid model of face-specific and letter-string-specific neural activations, consistent with the recent report of Tarkiainen, Cornelissen, and Salmelin (2002). In the first stage, at about 100 ms following the stimulus onset, the low-level visual feature analysis in the occipital cortex (V1) is represented by the natural production of Gabor-like receptive fields. This processing stage was, as shown by Tarkiainen et al. (2002), common to both the analysis of letter-strings (words) and faces. In the second stage, about 150 ms after the stimulus presentation, we show that the object-level analysis in the inferior occipito-temporal cortex is representable by the Hebbian-like multiple self-interference of the resulting, quantum-implemented Gabor wavelets (Perus, Bischof, & Loo, 2005). With some differences in hemispheric distribution, both letter-strings and faces activate largely overlapping areas in the inferior occipito-temporal cortex, with practically identical onset and peak latencies (Tarkiainen, 2003). We account for these equalities in activation and the corresponding processing similarities of words and faces with our quantum associative network model by obtaining similar face and letter-string reconstruction (recognition) quality functions. Our modeling results argue in favor of a quantum-like nature of conscious visual information processing in the human brain. C
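The "Hebbian-like multiple self-interference" recall step can be caricatured with a few lines of linear algebra. The sketch below is a generic associative-memory toy (random binary patterns, illustrative sizes and noise level), not the authors' quantum-implemented Gabor-wavelet model: stored patterns act through a sum of outer products, and the reconstruction quality is the overlap between the recalled and the original pattern.

```python
# Generic Hebbian associative recall with a reconstruction-quality measure.
# Patterns, sizes and noise level are illustrative, not the authors' stimuli.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_patterns = 256, 5                          # e.g. flattened 16x16 "images"
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_pix))

# Hebbian memory: sum of outer products of the stored (face / letter-string) patterns
G = patterns.T @ patterns / n_pix

# Present a noisy version of pattern 0 and let it "self-interfere" with the memory
noisy = np.where(rng.random(n_pix) < 0.2, -patterns[0], patterns[0])
recalled = np.sign(G @ noisy)

quality = float(recalled @ patterns[0]) / n_pix     # reconstruction quality in [-1, 1]
print(f"reconstruction quality: {quality:.2f}")
```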
48 A steady state EEG phase synchrony model of consciousness: insights from transcendental meditation practice Russell Hebert, Rachel Goodman; Fred Travis; Alarik Arenander; Gabriel Tan <tmeeg@aol.com> (Neuroscience, Maharishi University of Management, Houston, Tx)
This presentation adopts these perspectives: that a fully developed consciousness theory is compatible with quantum field theory, that the theory of consciousness must be holistic (non-reductionistic); it must include a concept of the “self”; it must address the origin of consciousness and it must resolve the “binding” problem. In the presented research (Hebert et al., 2005) two approaches have been taken: subjective and objective. The subjective, theoretical approach is derived from Maharishi Vedic Science, an ancient model of consciousness with modern applications. The objective approach involves research utilizing EEG alpha phase synchrony analysis. Maharishi Vedic Science describes consciousness as inner and outer. The inner (transcendental) value explains consciousness as an unbounded field underlying and informing human experience. When the individual accesses this state, it is called self-referral consciousness, or below as “unified wholeness”. When the individual experiences the perception of thoughts and objects, this type of conscious awareness is termed object-referral consciousness (or below as “unified diversity”). Both the “ground state” of the universe in quantum physics and the properties of the self-referral state of consciousness are described as: unmanifest, de-excited, holistic, unified and field-like (see Hagelin, this volume). Hagelin states that the ground state of the universe is also comprised of resonant vibrational modes which can also be referred to as standing waves. Both from the research conducted, and the theoretical background we conclude that alpha standing waves may connect individual consciousness to the quantum level of Nature’s functioning. In line with this idea, Chris King (Tuszynski, ed., 2006) suggests a plausible link “between EEG phase coherence in global brain states and anticipatory boundary conditions in quantum systems…” (p.407). New research has shown that the phase behavior of alpha controls global cortical excitability ((Klimesch et al., 2007). Our study agrees with this hypothesis. We suggest further however that global and instantaneous shifts of excitability can only occur in stationary environments. Alpha standing waves found in our study are the epitome of the globally de-excited cortex, a “ground” state of consciousness corresponding to John’s (2001) field theory postulations. This, in relation to quantum physics, is a possible description of the origin of consciousness. Recent developments agree with our proposal that alpha phase synchrony may also provide the solution of the binding problem. Palva and Palva (2007) suggest that alpha-gamma cross-frequency phase synchrony (“unified diversity”) orchestrates the creation of each “snapshot” of discrete perception. The emerging picture is that changing modes of alpha regulate perceptual frames within the boundaries of time and space (the binding problem) and that alpha, as well, frames the timeless infinity of self-referral consciousness described as “unified wholeness”. Palva and Palva (2007) “New Vistas for alpha band oscillations” Trends in Cognitive Neuroscience 34(4), 150-8. Hebert et al., Enhanced EEG alpha phase synchrony during Transcendental Meditation. Signal Processing Journal(2005)85, 2213-2232 Klimesch et al (2007) “EEG oscillations: the inhibition-timing hypothesis” Brain Research Reviews 53(1) 63-88 E.R.John, 2001 “A field theory of consciousness” Cons. and Cogn 10, 184-213 King, In “The Emerging Physics of Consciousness” (Tuszynski, ed., 2006 Springer, Berlin) C
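For readers unfamiliar with the EEG alpha phase synchrony analysis mentioned above, the snippet below shows a standard phase-locking-value (PLV) computation on synthetic signals. It is a generic sketch, not the Hebert et al. pipeline; real analyses would band-pass filter each channel in the alpha band before extracting the phase, and the sampling rate, noise level and channel construction here are illustrative.

```python
# Phase-locking value (PLV) between two synthetic "EEG channels" sharing an alpha rhythm.
import numpy as np
from scipy.signal import hilbert

fs, dur, f_alpha = 250.0, 4.0, 10.0          # sampling rate (Hz), duration (s), alpha freq
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

# Two channels: a shared 10 Hz oscillation plus independent noise
alpha = np.sin(2 * np.pi * f_alpha * t)
ch1 = alpha + 0.5 * rng.standard_normal(t.size)
ch2 = alpha + 0.5 * rng.standard_normal(t.size)

phase1 = np.angle(hilbert(ch1))              # instantaneous phase via the analytic signal
phase2 = np.angle(hilbert(ch2))
plv = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))

print(f"alpha PLV between channels: {plv:.2f}")   # close to 1 for synchronized channels
```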
49 The Role of Consciousness as Universal (Classical) and Contextual (Quantum) Meaning-Maker Patrick Heelan <heelanp@georgetown.edu> (Philosophy, Georgetown University, Washington, DC)
Thesis: Human consciousness is the Governor of Mental Life {1} through its function of constituting the world of human experience by meaning-making or – to use Husserl’s term – intentional constitution. The forms of meaning-making are syntheses of experience through the formal modeling of individual perceptual objects under a categorial description. These formal models are extensional (space-like) symmetries based on a group-theoretic similarity of common qualitative (meaningful intensional) features that fulfill the same kind of cognitive model as characterizes quantum physics, namely, Hilbert space. Individual perceptual objects are recognized interpretatively on the basis of common meaningful qualitative features organized in a group-theoretic synthesis of a manifold of profiles, which are then accepted by the perceiver as having a common categorial description named in language. Having a common categorial description is for something to be recognized as belonging to a symmetry group of particular exemplars. Both individual and categorial descriptions involve group-theoretic ways of organizing the interpretation of the flowing inputs from the sensory field in a constructed synthesis that functions in sustaining and developing the quality of human life. As such, both individual and categorial syntheses serve human life, and do so through the organization of human decision-making and activity, some under universal (classical) group-theoretic symmetries and others under contextual (quantum-like) group-theoretic symmetries. As in quantum theory, part of this process is unconscious and part is dialogical, social, deliberate, and linguistic (in the sense known as systemic functional linguistics, Tomasello, Halliday, Thibault, et al.). Karl Pribram’s notion of a Windowed Fourier transformation within the dendritic fibers could well be the quantum neurological aspect of this process (2). Notes: (1) This term is used by Donald, Merlin, A Mind So Rare, Chap. 3 (New York: Norton, 2001); Pribram calls it ‘central processing complement,’ in Pribram, K., Brain and Perception (Hillsdale, NJ: Erlbaum, 1991), p. 96. (2) Pribram, K. (1991) Brain and Perception: Holonomy and Structure in Figural Processing (Hillsdale, NJ: Erlbaum), pp. 26-27. C
50 Experimental Approach to Quantum Brain: Evidence of Nonlocal Neural, Chemical, Thermal and Gravitational Effects Huping Hu, Maoxin Wu <hupinghu@quantumbrain.org> (Biophysics Consulting Group, Stony Brook, New York)
Many if not most scientists do not believe that quantum effects play any role in consciousness. Thus, to gain credibility and make real progress, any serious attempt at a quantum brain should also stress experimental work besides theoretical considerations. Therefore, we have recently carried out experiments from the perspective of our spin-mediated consciousness theory to test the possibility of quantum-entangling the quantum entities inside the brain with those of an external chemical substance. We found that applying magnetic pulses to the brain when an anesthetic was placed in between caused the brain to feel the effect of said anesthetic as if the test subject had actually inhaled the same. Through additional experiments, we verified that the said brain effect was indeed the consequence of quantum entanglement. These results defy the common belief that quantum entanglement alone cannot be used to transmit information and support the possibility of a quantum brain. More recently, we have carried out experiments on simple physical systems and we have found that: (1) the pH value of water in a detecting reservoir quantum-entangled with water in a remote reservoir changes in the same direction as that in the remote water when the latter is manipulated, under the condition that the water in the detecting reservoir is able to exchange energy with its local environment; (2) the temperature of water in a detecting reservoir quantum-entangled with water in a remote reservoir can change against the temperature of its local environment when the latter is manipulated, under the condition that the water in the detecting reservoir is able to exchange energy with its local environment; and (3) the gravity of water in a detecting reservoir quantum-entangled with water in a remote reservoir can change against the gravity of its local environment when the latter is remotely manipulated such that, it is hereby predicted, the gravitational energy/potential is globally conserved. These non-local effects are all reproducible, surprisingly robust and support a quantum brain theory such as our spin-mediated consciousness theory. Perhaps the most shocking is our experimental demonstration of Newton's instantaneous gravity and Mach's instantaneous connection conjecture and the relationship between gravity and quantum entanglement. Our findings also imply that, first, the properties of all matter can be affected non-locally through quantum entanglement mediated processes. Second, the second law of thermodynamics may not hold when two quantum-entangled systems together with their respective local environments are considered as two isolated systems and one of them is manipulated. Third, gravity has a non-local aspect associated with quantum entanglement and thus can be non-locally manipulated through quantum entanglement mediated processes. Fourth, in quantum-entangled systems such as biological systems, quantum information may drive such systems to a more ordered state against the disorderly effect of environmental heat. We urge all interested scientists and the like to do their own experiments to verify and extend our findings. C
51 Consciousness, Coherence and Quantum Entanglement James Hurtak, AFFS, Basel, Switzerland; Prof. Desiree Hurtak, SUNY-Purchase College, New York <affs@affs.org> (AFFS, Wasserburg , GERMANY)
Coherence, as a universal organizing principle that opposes the increase of entropy, is present throughout the basic field properties of our natural system. Coherence can be applied not only to local but also to nonlocal, atemporal interactions. Understanding a coherent system would help to examine the number of quantum entanglement measures that quantify the total state, as has been demonstrated by studies on photons, atoms and electrons (Chou, 2005; Bao, 2003). An explanation of the basic coherent properties can also be applied to the behavior of living systems and not only to the physics of matter. Here both the biological and the psychological experience are affected. For the biological experience we see how there exists a high degree of coherence of a quantum state in the order of living systems, because otherwise any mass movement within the environment would create, instead, “increasing” random effects. Regarding the psychological experience, which includes cognition, memory, intention, intuition, perception and reasoning, we see coherence working as a “stream” of consciousness flow which manages and focuses life through linear adaptability and the organization of thoughts, events, and actions. However, to apply quantum entanglement to living coherent systems, we need to address both the “mind-body” problem and that of “bioentanglement”. The latter claims that quantum entanglement only becomes applicable to particles that have previously interacted, that is, for neurons to be entangled, there must be some prior physical interaction in the brain. No doubt, the structural world comprises various field and wave structures. The brain process, as it is, with neurons, dendrites and molecules (Hameroff, 2006), merely plays an overlapping role, alongside quantum entanglement, which exists throughout nature. The brain exists in its own coherent-entangled field within the larger space-time. Because there is an interaction of structures by forces, in essence there is an exchange of virtual particles that works with the stream of consciousness playing out in our physical existence. This paper will examine recent research and models of entanglement as they apply to coherence (and decoherence) in the nature of biological and psychological systems. Chou, CW, et al. (2005) “Measurement-induced entanglement for excitation stored in remote atomic ensembles” in Nature. 2005; 438(7069):828-32. Jiming Bao, et al. (2003) “Optically induced multispin entanglement in a semiconductor quantum well.” in Nature Materials 2, 175–179. Hameroff, Stuart (2006) “Consciousness, Neurobiology and Quantum Mechanics: The Case for a Connection” in The Emerging Physics of Consciousness, edited by Jack Tuszynski, Springer-Verlag, pp. 206-215. C
52 Quantum stochasticity and neuronal computations Peter Jedlicka <jedlicka@em.uni-frankfurt.de> (Institute of Clinical Neuroanatomy, J.W. Goethe-University, Frankfurt, Germany)
The nervous system probably cannot display macroscopic quantum (i.e. classically impossible) behaviours such as quantum entanglement, superposition or tunnelling (Koch and Hepp, Nature 440:611, 2006). However, in contrast to this quantum ‘mysticism’, there is an alternative way in which quantum events might influence brain activity. The nervous system is a nonlinear system with many feedback loops at every level of its structural hierarchy. The conventional wisdom is that in macroscopic objects quantum fluctuations are self-averaging and thus not important. Nevertheless, this intuition might be misleading in the case of nonlinear complex systems. Because of a high sensitivity to initial conditions, chaotic systems may amplify microscopic fluctuations upward, thereby affecting the system's output. In this way stochastic quantum dynamics might sometimes alter the outcome of neuronal computations, not by generating classically impossible solutions, but by influencing the selection of many possible solutions (Satinover, Quantum Brain, Wiley & Sons, 2001). I am going to discuss recent theoretical proposals and experimental findings in quantum mechanics, complexity theory and computational neuroscience suggesting that biological evolution is able to take advantage of quantum-computational speed-up. I predict that the future research on quantum complex systems will provide us with novel interesting insights that might also be relevant for neurobiology and neurophilosophy. C
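The amplification mechanism invoked above can be seen in its simplest classical form with a toy example (my own illustration, not the author's model): in a chaotic map, a perturbation far below any macroscopic scale grows exponentially until the two trajectories give qualitatively different outputs.

```python
# Toy illustration: a 1e-12 "fluctuation" is amplified to order 1 by a chaotic map.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.2, 0.2 + 1e-12     # identical except for a tiny perturbation
for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |difference| = {abs(x_a - x_b):.3e}")
# After a few dozen iterations the difference is of order 1: the microscopic
# perturbation has reached the macroscopic scale of the variable itself.
```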
53 Consciousness as a quantum-like representation of classical unconsciousness Andrei Khrennikov <Andrei.Khrennikov@vxu.se> (International Center for Mathematical Modeling in Physics, Economy and Cognitive Science, Vaxjo University, Vaxjo, Sweden)
We present a quantum-like (QL) model in which contexts (complexes of, e.g., mental, social, biological, economic or even political conditions) are represented by complex probability amplitudes. This approach makes it possible to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model quantum randomness appears not as irreducible randomness (as it is commonly accepted in conventional quantum mechanics, e.g., by von Neumann and Dirac), but as a consequence of obtaining incomplete information about a system. We pay particular attention to the QL description of processing of incomplete information. Our QL model can be useful in cognitive, social and political sciences as well as economics and artificial intelligence. In this paper we consider in more detail one special application -- QL modeling of the brain's functioning. The brain is modeled as a QL-computer. Our model combines classical neural dynamics in the unconscious domain with QL dynamics in consciousness. The presence of an observer collecting information about systems is always assumed in our QL model. Such an observer can be of any kind: cognitive or not, biological or mechanical. Such an observer is able to obtain some information about a system under observation. In general this information is not complete. An observer may collect incomplete information not only because it is genuinely impossible to obtain complete information. (We note that, according to Freud's psychoanalysis, the human brain can even repress some ideas, so-called hidden forbidden wishes and desires, and send them into the unconscious.) It may occur that it would be convenient for an observer or a class of observers to ignore a part of information, e.g., about social or political processes. In the present QL model of the brain's functioning, the brain plays the role of such a (self-)observer. [1] A.Yu. Khrennikov, Quantum-like brain: Interference of minds. BioSystems 84, 225-241 (2006). C
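A minimal numerical sketch of the "contexts as complex amplitudes" idea is shown below, assuming the standard quantum-like interference-of-probabilities construction (the classical law of total probability plus a cosine interference term); the numbers and the phase are purely hypothetical and chosen for illustration.

```python
# Sketch: classical vs quantum-like (amplitude-based) total probability.
import numpy as np

p_a1, p_a2 = 0.4, 0.6          # hypothetical probabilities of two contexts
p_b_a1, p_b_a2 = 0.7, 0.2      # hypothetical conditional probabilities of outcome b
theta = np.pi / 3              # hypothetical relative phase between contexts

amp = np.sqrt(p_a1 * p_b_a1) + np.exp(1j * theta) * np.sqrt(p_a2 * p_b_a2)
p_quantum_like = abs(amp) ** 2

p_classical = p_a1 * p_b_a1 + p_a2 * p_b_a2
interference = 2 * np.sqrt(p_a1 * p_b_a1 * p_a2 * p_b_a2) * np.cos(theta)

print(p_classical)                  # classical law of total probability
print(p_quantum_like)               # probability built from complex amplitudes
print(p_classical + interference)   # same value: classical term plus interference
```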
54 Process-Philosophy and Mental Quantum Events Spyridon Koutroufinis <koutmsbg@mailbox.tu-berlin.de> (Philosophy, Technical University of Berlin (TU-Berlin), Berlin, Germany)
The paper investigates the usefulness of the ideas of Alfred North Whitehead for a natural philosophy of organismic processes in general and for the dynamics of the nervous system in particular. Taking the physics of nonlinear dynamical systems and basic considerations from the philosophy of consciousness as a starting point, we expound fundamental principles and concepts of Whitehead’s process philosophy. Using these principles, the possibility of integrating modern system-theoretical methods and findings into a new theory of mental and neural events is elaborated in a way that avoids vitalism and reductionism. C
55 Memory and Time: Spatial-Temporal Organization of Episodic Memory Analyzed from Molecular Level Perspective Michael Lipkind <lipkind@macam.ac.il> (Unit of Molecular Virology, Kimron Veterinary Institute, Bet Dagan, Israel)
The human episodic (biographical) memory including remembrance, storage and retrieval can be represented as a spatial-temporal arrangement of neural correlates of a current stream of perceived and memorized events accumulated in the brain during an individual’s lifetime and constituting the bulk of an individual’s “I”. While the spatial part of the arrangement is in principle conceivable, any hypothetical mechanism of the temporal part is unimaginable, yet during recollection we know what occurred earlier and what occurred later. The existing theories of neural correlates of memorization are based on two analytical levels: the level of circuits of inter-neuronal connections and the level of intracellular molecular substrate of the brain cortex neuronal massifs. The former looks incompatible with the idea of temporal arrangement of memorized events: any current temporal “assortment” of such events in principle cannot correlate with combinations of rigid anatomical inter-neuronal connections. As to the molecular level, the idea of both the spatial and temporal organizations of the episodic memory does not seem inconceivable. Hence, the temporal chain of currently memorized events, each one interconnecting with the previously memorized events to be further connected with those to be memorized in future, must relate to an integral continuum of the brain intracellular molecular substrate. However, the mechanism of such temporal arrangement remains obscure: What (“Where”) on the intracellular level is that “magic” time axis, according to which the multiple currently memorized events are “strung” (threaded, saved, stored)? Within the existing physical-chemical concepts, the problem seems to be unsolvable. The situation could lead to the assumption that the apprehended temporal succession of memorized events results merely from their mental confrontation and systematization, suggesting that any existence of a genuine temporal arrangement of the currently memorized events is an illusion. The suggested way out of the deadlock is based on the idea of an integral field as a carrier of the memorization. Since the concept of field is compatible with the time parameter, it can be employed as a competent dynamic correlate of the current temporal memorization. Accordingly, memorization of any particular event is correlated with respective change of the field “configuration” expressed as a dynamic state determined by the field parameters’ values. However, if the postulated field is grounded on any known physical fields, e.g. electromagnetic, it must originate from the physical-chemical properties of the brain molecular substrate as its source. Since such “circular”, evidently tautological conclusion has no causal value, a concept of an autonomous field irreducible to the established physical fundamentals is suggested as a correlate of memorization. Published models of the autonomous fields as carriers of consciousness (Libet, Searle, Sheldrake) were criticized as tautological, metaphoric, or esoteric (Lipkind, 2005). The suggested theory of memorization based on the theory of irreducible biological field by Gurwitsch (1944) was elaborated (Lipkind, 2003, 2007), the present communication being its further development. Thus, the episodic memory (biographical events) and semantic memory (individual’s store of knowledge) are represented by molecular “traces” left by afferent to-be-perceived stimuli projected upon the brain’s autonomous field-determined intracellular molecular continuum. C
56 Cortical Based Model of Object-recognition: Quantum Hebbian Processing with neurally shaped Gabor wavelets. Chu Kiong Loo, Mitja Perus <ckloo@mmu.edu.my> (Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka, Malaysia)
This paper presents a computationally implementable cortical-based model of object recognition using quantum associative memory. The neuro-quantum hybrid model incorporates neural processing up to V1 of the visual cortex, whose input arrives from the retina via the lateral geniculate nucleus. The initial image is lifted by the simple cells of V1 to a surface in the rototranslation group, followed by quantum associative processing in V1, achieving together an object-recognition result in V2 and ITC. Results of our simulation of the central quantum-like parts of the bio-model, receiving neurally pre-processed inputs, are presented. This part contains our original simulated storage, by multiple quantum interference, of image-encoding Gabor wavelets done in a Hebbian way. C
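For readers unfamiliar with the Gabor wavelets mentioned above, the following sketch builds a single 2D Gabor filter of the generic textbook form used as a V1 simple-cell model (a Gaussian envelope times a sinusoidal carrier); the parameters are illustrative and are not those of the authors' simulation.

```python
# Sketch: a generic 2D Gabor wavelet (V1 simple-cell-like filter).
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0, phase=0.0):
    """Real-valued 2D Gabor filter: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier runs along orientation theta
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

kernel = gabor_kernel(theta=np.pi / 4)
print(kernel.shape)   # (31, 31); convolving it with an image patch gives a V1-like response
```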
57 Why does panpsychism fall into a dualistic metaphysical framework? Jaison A. Manjaly <jmanjaly@gmail.com> (Centre for Behavioral and Cognitive Sciences, University of Allahabad, Allahabad, UP, India)
Galen Strawson (2006) claims that real physicalism entails panpsychism. This paper aims to assess the ontological merits and demerits of this claim. I argue that although there are certain explanatory advantages for panpsychism over emergentism, it does not contribute anything novel to strengthen the physicalistic thesis. For the concept of panpsychism is rooted in a metaphysical misconception of ‘experience’. I further show that, because of this misconception, panpsychism cannot be held without falling into a dualistic metaphysical framework. Moreover, Strawson’s version of panpsychism brings back the burdens of causal interaction and non-Cartesian substance dualism. C
58 The Subject of Physics Donald Mender, NA <solzitsky@aol.com> (Psychiatry, Yale University, Rhinebeck, NY)
Physicists today embrace theoretical parsimony and experimental accuracy as guides toward progress in the understanding of natural objects. Yet, beyond these criteria, it is also historically true that large paradigmatic leaps forward at the foundations of physics have repeatedly entailed reevaluations of the human subject's place within nature. In particular, revolutionaries have transformed the physical sciences by knocking the subjective center of orthodox perspectives off balance in some unexpected new way, rather than by merely altering the objects under scrutiny. Copernicus simplified astronomy by uprooting Ptolemaic astronomers from their geocentric ground; Einstein relativized the motion of a light source by democratizing the sensorium of the physical observer; Heisenberg captured the phenomenology of the subatomic microcosm by injecting jitter into an experimenter's act of measurement. Hence it may make sense to look for future foundational advances, for example in the quest to unify quantum mechanics and general relativity, via even more radically "decentered" shifts of the scientific subject's anchor within nature, rather than in more and more baroque revisions of yet undetected physical objects, such as transformations of particles into strings and branes, of classical space-time into a topological weave of "loops," of bosons and fermions into bosinos and sfermions, and of phase transitions into Higgs fields. Instead, a more productive route toward the next synthetic breakthrough in physics may be to decenter the very plurality of the physical observer, beyond the statistical influence of second quantization on connections merely among wavefunctional objects. Specifically, the structure of quantum gravitational operators may morph to include not only linearly independent individual acts of measurement implied by the superpositional probabilities of path integration, but also fungibly collective and frangibly fragmented measuring agencies instantiated respectively through Bose-Einstein and Fermi-Dirac statistics embedded intrinsically within relationships among the operators themselves. Such a "decentered" perspective on quantum gravitational measurement could offer several potential advantages. First, its locus on the observer's side of the measurement "cut" could replace supersymmetrical partners in the objective domain, offering an explanation if bosinos and sfermions are not found in future high-energy accelerator experiments. Second, provision of differing statistically "inertial" (i.e., equilibrated) reference frames for a diverse multiplicity of observing subjects could obviate any need for spontaneous symmetry breaking as an explanation for departures from invariance should Higgs particles fail to manifest themselves. Third, nonlinearizing effects on the probability sums of perturbative series could serve as a natural improvement upon renormalization procedures. Fourth and finally, a "decentering" of pluralities applicable to the quantum-gravitational observer might offer new ways of understanding scientific subjectivity per se in terms of polysemy across a range of collective, individual, and component properties relevant to gravitonic processes in the measuring agent's brain. A hermeneutic expansion of the Penrose-Hameroff hypothesis might thus ensue. Empirical testing of such an enhanced theoretical perspective might follow from detailed predictions of emergent resonances among multiple acts of quantum gravitational measurement. C
59 The origin of non-locality in consciousness Ken Mogi <kenmogi@csl.sony.co.jp> (Fundamental Research Laboratory, Sony Computer Science Laboratories, Shinagawa-ku, Tokyo, Japan)
Quantum mechanics, being an inseparable element of reality, naturally enters into the consideration of every phenomenon that occurs in the physical universe. Insofar as consciousness is an integral part of reality as we understand it, quantum mechanics needs to be ultimately involved, either directly or indirectly, in its origin. In particular, the apparent non-locality and integrity in the phenomenology of consciousness and its physical correlates are suggestive of a quantum involvement. Here I examine the nature of non-locality in the physical correlates of consciousness and its relation to quantum mechanics. The concept of the neural correlates of consciousness (Crick and Koch 2003), when pursued beyond its currently prevalent role as a practical framework in which to analyze neuropsychological data, logically necessitates a non-trivial emergence through the mutual relation between physical entities and events that constitute cognitive processes in the brain (Mach's principle in perception, Mogi 1999). Since from this standpoint the spatio-temporal histories sustaining the cognitive processes, including, but not necessarily restricted to, the action potentials of the neurons, are the essential correlates of consciousness, non-locality becomes a logical necessity in the ingredients of consciousness. Non-locality has been known to be an essential property of quantum mechanics since its early period (e.g., Einstein, Podolsky, & Rosen 1935). However, the combination of high temperature and the large number of degrees of freedom involved in brain activities is usually regarded as definitely precluding any possible quantum effects. Nevertheless, there exist possible routes of quantum involvement in macroscopic and "warm" phenomena such as brain processes. The key is in the fact that macroscopic objects, although ostensibly obeying the equations of Newtonian dynamics, rely on quantum effects for the very stability that makes them classical objects in the first place. Analysis of an information processing system usually starts from the assumption that its essence can be captured by following those parameters explicitly covarying with the information the system supposedly handles. Quantum mechanical effects hardly enter the picture when only explicitly varying parameters are considered. On the other hand, the implicitly sustaining structures that do not covary with the processed information can contribute to the phenomenal aspects of information, such as qualia and self-awareness. The ubiquitous role of metacognition, the origin of subjective time, and the way spatio-temporally distributed activities are "compressed" into percepts in conscious experience are discussed in the context of the implicit and explicit in cortical information processing. References: Einstein, A., Podolsky, B., and Rosen, N. (1935) Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47, 777-780. Mogi, K. (1999) Response Selectivity, Neuron Doctrine, and Mach's Principle. In Riegler, A. & Peschl, M. (eds.) Understanding Representation in the Cognitive Sciences. New York: Plenum Press, 127-134. Crick, F. and Koch, C. (2003) A framework for consciousness. Nat. Neurosci. 6, 119-126. Taya, F. and Mogi, K. (2004) The variant and invariant in perception. Forma 19, 25-37. C
60 Teleological mechanism for the simulation argument James Nystrom <jnystrom@shepherd.edu> (Computer Science, Math and Engineering, Shepherd University, Shepherdstown, WV)
I begin the talk by providing an overview of Bostrom’s now seminal 2003 paper “Are You Living in a Computer Simulation?”. Herein I summarize Bostrom's simulation argument (where one possibility is that we are living in a simulation – specifically as part of an ancestor simulation created by a posthuman society). I take issue with Bostrom's functionalist position on Mind and present a modified simulation disjunction (MSD) wherein I utilize a dualism close in concept to a funda-mentalism of the Penrose-Hameroff variety. Here I eschew Bostrom's ancestor simulations as a type of functionalist masquerade. However, I do maintain the possibility that we are living in a (complete Universe) simulation, created by posthuman simulators (PHS). I note that if we are in a simulation without a functionalist model of Mind, we need structures in the simulation that can support and/or capture Mind activities (e.g., a brain). Here Mind takes on a Gnostic characteristic, in that Mind itself would need to fall down (if you will) from some non-spatio-temporal habitation (a Richard Rorty term) as in the supposed doings of a Gnostic Demiurge. This model of Mind is similar to Plato's Divine Mind or Huxley's Mind-at-Large, and similar to Penrose's use of an underlying Platonic reality (a so-called basic level of Universe). In the third (and last) part of the talk I take the assumption that we are living in a complete Universe simulation. I posit a query concerning how our supposed PHS could implement algorithmic control of a Universe. I need to provide some background asides before I answer this query. The first aside is (I) a discussion of Universe as a computation in terms of energy interactions, which takes the fundamental activity of Universe to be operating near Planck lengths and Planck times. I introduce the terms Negative Universe (a R. Buckminster Fuller term) and reality flux. Here Negative Universe is akin to Penrose's Platonic and Mental worlds, and reality flux describes the ensembles of virtual photons and anti-particles, some of which seemingly pass in and out of existence. Another aside (II) compares causal and teleological effects. I use physically-based arguments, and suggest that the typically arbitrary adoption of the causal viewpoint for most processes in Universe is in fact an observation selection effect resulting from an immersion in a forward progression of time. I also (III) review the classic dualism (of mind and matter) and compare this to Penrose-Hameroff funda-mentalism. As a result of this aside, I take Mind as something that resides partially in Negative Universe. The last aside (IV) presents Gravity as an instantaneous most economical relationship of all energy events (as R. Buckminster Fuller did), and this then places the Gravity (calculation/update) in Negative Universe. I can now answer the query and propose mechanisms with which PHS could computationally steer a Universe (such as ours). Since Gravity and Mind have both been surmised to contain a non-spatio-temporal essence (in Negative Universe), I suggest that PHS could in fact use both Gravity and Mind as teleological control mechanisms for a Universe simulation. C
61 Entropy Reversal and Quantum-Like Coherence in the Brain Alfredo Pereira Jr., Polli, Roberson S. <apj@ibb.unesp.br> (State University of São Paulo (UNESP), Botucatu, São Paulo, Brasil)
Quantum-like macro-state coherence can be generated in the living brain by means of molecular mechanisms that induce local entropy reversal (at the cost of increasing environmental entropy). The idea that entropy reversal can locally increase (bio)physical organization derives from conjectures by Maxwell, Schrödinger and Monod. Contemporary models of the Ion-Trap Quantum Computer (ITQC) can be viewed as belonging to the "Maxwell Demon" family of systems, since: a) the movements of the ions are controlled to produce physical organization; b) external energy (the laser) is used to transfer information to the system; and c) the system’s activity (phonon modes related to spin values of different electronic configurations) supports the performance of reversible operations. Analogously, in the living brain, biological mechanisms, such as neuronal membrane channel gating, control the movement of ions. Astroglial cells, being responsible for the distribution of free energy (in the form of glucose) from arterial blood to neurons, and actively participating in tripartite synapses, may also be involved in an entropy reversal process. We propose that calcium ion populations trapped in the astrocytic syncytium, while interacting with neuronal electric fields, operate as a large-scale ITQC, with an architecture similar to the model presented by Kielpinski, Monroe and Wineland (2002). On the one hand, contemporary schemes for ITQC with hot ions (Poyatos, Cirac and Zoller, 1998; Molmer and Sorensen, 1999; Milburn, Schneider and James, 2000; Kielpinski et al., 2000) reveal that multimodal phonon patterns compose complex coherent states. On the other hand, empirical results from brain science indicate that astrocytes participate in the sustaining of neuronal excitation (Haydon and Carmignoto, 2006) and the onset of oscillatory synchrony (Fellin et al., 2004), both functions closely related to conscious processing. Calcium waves in the syncytium are also a medium for large-scale integration (Robertson, 2002). This integration possibly includes inter-hemispheric communication by means of cerebrospinal fluid (a possibility based on the proposal made by Glassey, 2001). In conclusion, we suggest that the brain’s hot, wet and noisy ITQC, composed of a calcium ion population trapped in astrocytes and interacting with neuronal electric fields, can embody complex patterns that compose the contents of consciousness. FELLIN T et al. (2004) Neuronal Synchrony Mediated by Astrocytic Glutamate Through Activation of Extrasynaptic NMDA Receptors. Neuron 43(5): 729-43. GLASSEY G (2001) The Neuroglial Cell-Neuropeptide Highway. Published online: http://www.healtouch.com/csft/highway.html HAYDON PG, CARMIGNOTO G (2006) Astrocyte Control of Synaptic Transmission and Neurovascular Coupling. Physiol Rev. 86(3): 1009-31. KIELPINSKI D et al. (2000) Sympathetic Cooling of Trapped Ions for Quantum Logic. Physical Review A 61, 032310, p. 1-8. KIELPINSKI D, MONROE C, WINELAND DJ (2002) Architecture for a Large-Scale Ion-Trap Quantum Computer. Nature 417: 709-711. MILBURN GJ, SCHNEIDER S, JAMES DFV (2000) Ion Trap Quantum Computing With Warm Ions. Fortschritte der Physik 48: 801-810. MOLMER K, SORENSEN A (1999) Multiparticle Entanglement of Hot Trapped Ions. Physical Review Letters 82(9): 1835-1838. POYATOS JF, CIRAC JI, ZOLLER P (1998) Quantum Gates With “Hot” Trapped Ions. Physical Review Letters 81: 1322-1325. ROBERTSON JM (2002) The Astrocentric Hypothesis: proposed role of astrocytes in consciousness and memory formation. Journal of Physiology-Paris 96: 251-255. C
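The basic ingredient of ion-trap quantum computing invoked in the abstract above, a two-level "spin" coupled to a quantized vibrational (phonon) mode, can be written down in a few lines. The sketch below is a generic Jaynes-Cummings-type toy model, not the authors' astrocytic construction; the frequencies, coupling strength and Fock-space truncation are arbitrary.

```python
# Toy model: one spin coupled to one phonon mode (Jaynes-Cummings form).
import numpy as np

n_fock = 5                                      # phonon levels kept after truncation
a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)   # annihilation operator
adag = a.conj().T
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_+
sm = sp.conj().T
I_f = np.eye(n_fock)

omega, omega0, g = 1.0, 1.0, 0.05               # mode, spin, coupling (arbitrary units)
H = (omega * np.kron(adag @ a, np.eye(2))
     + 0.5 * omega0 * np.kron(I_f, sz)
     + g * (np.kron(a, sp) + np.kron(adag, sm)))  # quanta exchanged between mode and spin

# This exchange of quanta is what lets motional (phonon) modes mediate
# entangling operations between the internal states of trapped ions.
print(np.allclose(H, H.conj().T))               # True: the Hamiltonian is Hermitian
print(np.linalg.eigvalsh(H)[:4])                # lowest few energy levels
```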
62 Neurons react to ultraweak electromagnetic fields Rita Pizzi, D. Rossetti, G. Cino, A.L. Vescovi, W. Baer <pizzi@dti.unimi.it> (Department of Information Technologies, University of Milan, Crema, CR, Italy)
Since 2002 our group has been concerned with the direct acquisition of signals from cultured neurons. During the first experiments we noticed anomalies in the electrical signals coming from separate and isolated neural cultures that suggested that either neurons were extremely sensitive to classical electromagnetic stimulation or some form of non-classical communication between isolated systems was occurring. We improved our experimental setup in order to further explore this phenomenon and eliminate possible experimental errors that might bias our results. Our latest experiment consisted of three MEA (microelectrode array) basins, one filled with human neurons and the others with control liquids. Each basin was in turn irradiated with a laser beam while the other basins were shielded by means of a double opaque Faraday cage. In all cases we found a sharp spike in the electrical activity coming from the neural basin simultaneous with the laser emission, but no activity was present in the two control basins, with or without shielding. To eliminate the possibility of electromagnetic coupling, the hardware system was designed with special electronic devices and photo-couplers to avoid any kind of interference between circuits and MEAs. Several tests were performed by means of both an oscilloscope and a spectrum analyzer to ascertain the absence of cross-talk and induction phenomena. During one of the experiments we substituted the laser with a dummy load in order to simulate the current absorption equivalent to the one generated by the laser, and we found that the same peak was present. Upon further investigation we concluded that the phenomenon could be due to an electromagnetic field coming from the laser supply circuit that was too weak to be detectable with our measuring instruments. Neurons appear to receive and amplify an electromagnetic spike whose value through the air, before reaching the Faraday shielding, is less than 70 microgauss and below the sensitivity of our oscilloscope (2 mV). It must be stressed that in order to cause a neuron spike using a direct electrical stimulation inside the cell, a 30 mV pulse is necessary. The value of the electric and magnetic field under the double Faraday cage is below the sensitivity of our instrumentation but is estimated to be at least one order of magnitude smaller. We believe the neurons are the active receiving element because the MEA control circuit and the activation circuit are completely separated, the MEA basins are connected to ground, their shape is not suitable to act as an antenna, and the spikes observed in the neural basin are never present in the other control basins. Though the exact mechanism for the observed neural response has not been identified, we can at the moment hypothesize that neurons act as antennas for extremely weak electromagnetic fields. The neural reactivity may be due to the presence of microtubules in their cellular structure. Microtubules are structurally similar to carbon nanotubes, whose tubular shape makes them natural cavity antennas. New analyses with more sensitive instruments, and a mu-metal cage to avoid magnetic fields, are underway to further investigate the nature of this extreme neural sensitivity. C
63 The Mind’s Image of the World, the Classical Physics of Motion, and the Quantum Physics of the Brain Arkady Plotnitsky <plotnits@purdue.edu> (Theory and Cultural Studies, Purdue University, W. Lafayette, Indiana)
This paper takes as its point of departure Alain Berthoz’ argument for the significance of physical movement in our understanding of the brain’s functioning. According to Berthoz, perception is not only an interpretation of sensory messages but also an internal simulation of action, thereby making perception and action irreducibly intertwined. The fact that every moving body must follow the laws of classical mechanics compels the brain to invent strategies to make complex mechanical calculations, and, hence, to internalize the basic laws of geometry and kinematics. Indeed, the whole conceptual structure of, first, Euclidean geometry and then of classical physics (including kinematics), or our physical-mathematical image of the world, may be seen as arising from this classical-like phenomenal image (a thought image) created by the brain and its capacities of both remembering the past and predicting the future. Berthoz also links the brain’s functioning, as grounded in motion, to the Bayesian theory of probability. The latter deals with predictions concerning the outcome of individual events on the basis of the available information and, hence, conceptually memory, rather than on statistical inferences based on frequencies of repeated events. Berthoz speaks of “a memory for prediction.” Thus, our interaction with the world is defined by taking chances and our success in the world by taking our chances well. Berthoz argues that, by focusing primarily on the connectivities within the brain, current neurobiological and neurophysiological theories by and large fail to take into account these, motion and environment oriented, workings of the brain, which he believes to be primary and fundamental to its development and functioning, or evolutionary emergence. Our biological constitution appears to be especially suited for creating the classical image of the world and succeeds in the world by working with this image. This, however, does not mean that either the world or the brain need themselves be seen as classical physical systems. The ultimate aim of this paper is to explore potential interconnections between Berthoz’s theory and Umezawa’s and Vitiello’s quantum-theoretical approaches to the brain, based on the understanding of the brain as a dissipative quantum system, continuously interactive with environment—the world. Although along somewhat different lines, both Berthoz and Vitiello argue that the brain creates a certain image of the world in our mind. By so doing, the brain enables the body to interact with and to live in the actual world, whose ultimate constitution appears to be quantum and may, ultimately, be beyond the brain’s (classical) image of it and possibly beyond any conception our mind can form. The question broached by this paper is why the physical machinery of the brain that creates the classical physical image of the world in order to interact, most especially probabilistically or by taking our chances well, with the actual world might need to be physically quantum. In other words, the question is why the physically quantum doubling of the world and the brain may be necessary to create the classical image of the world and of the mind itself. C
64 Human Biocatalysis and Human Entanglement. How to Fill the Gap between Quantum and Social Sciences? Massimo Pregnolato, Paola Zizzi <maxp@pbl.unipv.it> (Pharmaceutical Chemistry, University of Pavia, Pavia, Italy)
In complexity science, entanglement is what exists before order emerges. The role of quantum entanglement as the precursor to emergent order is much discussed in physics [1]. For instance, Gell-Mann [2] defines an entanglement field as a 'fine-grained structure of paired histories among quantum states'. The notion of the primordial pool which existed before the origin of life is also much discussed in biology [3]. According to Christopher Davia [4], the evolution of life is the evolution of catalysis. Indeed, the biosphere, taken as a whole, may be considered a macroscopic process of catalysis. From the evolution of catalysis, from specific to non-specific, Man has emerged, the most non-specific catalyst on Earth. McKelvey has found that an understanding of entanglement from quantum theory can throw useful light on the nature of ties among people [5,6] and their impact on emergent order in organisations. In terms of human behaviour, he explained that a high correlation between the paired histories of people would mean they think in similar ways; a low correlation would mean they go in different directions. We define a Human Biocatalyst (HB) as a human being able to catalyze human relationships in a selective way. An HB selects people with high relative affinity and catalyzes reactions between them through communication. The products of these interactions could be a tangible human-human entanglement-like connection. Dean Radin has done extensive work on the idea of Human Entanglement. He describes experiments that have shown a non-local connection between human beings when they ‘think’ of each other [7]. Entanglement, when included in quantum games [8], makes (somehow) everybody win. Entangled quantum strategies are such that all players cooperate, and classical egoism (destructive) is replaced by quantum altruism (constructive). Entanglement might explain some forms of telepathy, or rather quantum pseudo-telepathy [9], between “quantum-minded” players who play a quantum game. We think that Basic logic [10] could be a good starting point towards a deeper understanding of the Quantum world, also because it is the only logic which can accommodate the new logical connective @ = “entanglement” [11]. One of our dearest hopes is that Basic logic, once applied to the study of the deepest levels of the unconscious, might be useful in the care of some mental diseases, such as schizophrenia, which remain resistant to usual psychotherapy. The Quantumbionet will be presented. The network will include well-known intellectuals, teachers and laboratories supporting the development of the sciences, and it will aim to play an active role on the international stage for the enhancement of human health and wellness. The network will be a bridge between science and human behaviour. C
65 Whitehead’s tri-modal theory of perception in the light of empirical research Franz Riffert <Franz.Riffert@sbg.ac.at> (Education, University of Salzburg, Salzburg, Austria)
Whitehead has developed a bold theory of perception based on the concepts of his process philosophy (Whitehead 1978). According to him, it is one of the shortcomings of modern philosophy that it fails to shed any light on the sciences. In elaborating his theory of perception he showed how such a fertile interchange between sciences (psychology) and philosophy (process metaphysics) might be possible and what new perspectives follow from it. Whitehead’s theory of perception is tri-modal, i.e. there are three different modes of perception which are related “genetically”. The most basic and most primitive of these three modes is ‘causal efficacy’, which is a form of immediate and rich, albeit vague, grasping of one’s surroundings. It is best conceived in neuro-physiological and/or sensory-motor terms and connects the perceiver directly with his or her environment. Based on this primitive mode, and elaborated by abstraction and attention, the second mode of perception is developed: the mode of ‘presentational immediacy’. In this more advanced mode of perception certain aspects of the rich content of the mode of ‘causal efficacy’ are abstracted and highlighted. These specific aspects are given in a clear and distinct way as sensa such as exact spatial and temporal relations, distinct forms and colours. The most advanced mode of perception, the mode of our everyday perception, is generated by integrating the two more primitive perceptive modes; one of these two more primitive modes acts as symbol while the other one takes the role of the designate; therefore Whitehead termed this mode “symbolic reference”. In this mode the feature of consciousness is introduced, since according to Whitehead it is the subjective feeling of the contrast between what might be (symbol) and what is in fact the case (designate). Some of the features of Whitehead’s philosophical theory of perception can be tested empirically. First, one may look for evidence in the neuro-sciences as well as in psychology in favour of its tri-modal character. Second, the general tendency of perception from vague to distinct apprehension, which finally is accompanied by consciousness, can be tested against the body of research results in the psychology of perception. Finally, Whitehead’s claim that a primitive mode of perception does exist can be examined because he has described the characteristics of this perceptive mode; they can be compared with psychological evidence. Microgenetic (Werner 1956; Bachmann 2001) and percept-genetic research (Smith 2000) deals with perception in much the same way as Whitehead. Results confirm Whitehead’s position concerning a general tendency from vague to distinct information processing in perception. The tri-modal character of Whitehead’s theory finds support in Anthony Marcel’s well-known tachistoscope experiments, which are presented in his paper ‘Conscious and Unconscious Perception: Experiments on Visual Masking and Word Recognition’ (1983). Victor Rosenthal, in a microgenetic experiment on reading (2005), speculates about two distinct neuronal pathways in the brain: one processing available information quickly but crudely, the other processing information in a detailed way but much more slowly. This also to some extent supports Whitehead’s position. C
66 Dynamic Geometry, Bayesian approach to Brain function and Computability Sisir Roy <sisir@isical.ac.in> (physics and applied mathematics, indian statistical institute, kolkata, w.b., india)
Recently, the present author, along with his collaborators, introduced the concept of dynamic geometry for understanding brain function. This is based on the idea of functional geometry as proposed by Pellionisz and Llinas. This interpretation assumes that the relation between the brain and the external world is determined by the ability of the Central Nervous System (CNS) to construct an internal model of the external world using an interactive geometrical relationship between sensory and motor expression. This approach opened new vistas not only in brain research but also in understanding the foundations of geometry itself. The approach, named tensor network theory, is sufficiently rich to allow specific computational modelling and addressed the issue of prediction, based on Taylor-series expansion properties of the system, at the neuronal level, as a basic property of brain function. It was actually proposed that the evolutionary realm is the backbone for the development of an internal functional space that, while being purely representational, can interact successfully with the totally different world of the so-called “external reality”. Now if the internal space or functional space is endowed with stochastic metric tensor properties, then there will be a dynamic correspondence between events in the external world and their specification in the internal space. We shall call this dynamic geometry, since the minimal time resolution of the brain, associated with 40 Hz oscillations of neurons and their network dynamics, is considered to be responsible for recognizing external events and generating the concept of simultaneity. In this framework, mindness is considered as one of the several global physiological computational states (functional states) that the brain can generate. Since computation and information processing are accepted terms in neuroscience, it is necessary to clarify the meaning of computation and of an information measure. The functional states are considered to be internal states related to the metric property associated with the CNS. In fact, they are generated by intrinsic properties of neurons. This indicates that Bayesian decision theory and Fisher information might play significant roles in understanding brain function. It is found that the CNS does not compute but rather optimizes behaviours. This optimization of behaviours is similar to the “computation capacity” of a digital machine, as proposed by Toffoli. This perspective will shed new light on the issue of computability vs. non-computability of the brain. C
67 Neural Correlates and Advanced Physics David Scharf <dscharf108@gmail.com> (Physics, Maharishi University of Management, Fairfield, IA)
Although researchers are daily uncovering new information about the brain—from an increasingly exhaustive mapping of its neural pathways to a more thorough and detailed understanding of the correlations with conscious experience and cognitive faculties—still, at its current stage of development, neuroscience is not yet in a position to provide a comprehensive analysis of the microphysical underpinnings of conscious experience. The program for the neural correlates of consciousness does not claim to provide such a comprehensive microanalysis; instead, it offers to outline a global view of both the broad features and logical constraints of such a microanalysis. This program embodies two explicit assumptions: (1) that conscious experience supervenes on its neural basis, where supervenience implies that if the physical basis is present, then the corresponding conscious experience will occur, and (2) that the conscious experience is dependent on the physical. This second assumption casts the neural correlates program in expressly physicalistic terms. Also, a third, usually unstated, assumption is not harmless: Discussions of the neural correlates of consciousness take for granted that (3) these correlates are governed by classical physics—that any effects of advanced physics will be insignificant, will average out, or will otherwise not affect the brain’s determination of conscious experience. Unfortunately for those who take this route, assumptions (2) and (3) lock the researcher in a pernicious dilemma. Let’s suppose for a moment that these radical physicalists were right. Then a particular configuration of neurons firing (or other correlates) would determine any given conscious experience or mental activity. Naturally, this presents a burden of explanation: Given the dependency on the physical, how is it that mental content is internally coherent and intelligible, and how is it that (ordinarily) our mental representations accurately reflect the external world? A pointed way to frame the dilemma is to note that the logical and scientific train of reasoning leading to the neural correlates program itself would be determined by the underlying neural correlates, thus calling into question its own justification. This is a similar bind that Hilary Putnam and others identified as arising from the brain-in-a-vat scenarios, and which led to Putnam’s wholesale rejection of the neural correlates program—with its mind-brain dependence relation. But, as we see things, there are better alternatives to be had than Putnam’s conclusion. Successfully explaining—or at the very least allowing for—the internal coherence and external reliability of consciousness, in the context of a neural correlates program, fundamentally depends on the parameters of the specific type of physicalism we adopt. This is where advanced physics may come to the rescue. Indeed, certain aspects of consciousness that are incompatible with a physicalism based on classical physics may be not only consistent with, but explainable in terms of, a physicalism grounded in advanced physics. C
68 Quantum Theory, the Dream Metaphor and the Meta-Brain Model Thomas Schumann <tschuman@calpoly.edu> (Physics, California Polytechnic State University , San Luis Obispo, California)
We argue from the quantum double-slit experiment, from the evolution of emotions and other issues that the mental world influences the physical just as the physical influences the mental. From an analogy with electromagnetism (a changing electric field produces a changing magnetic field and vice versa), we argue that the mental and physical worlds are really one entity. From this comes the dream metaphor in which the mental and the physical are the same; this fits the quantum theory of measurement in which an observable of a system becomes "real" only when it is observed (the system is no longer in a superposition of possible values for the observable). With the associated model of the "meta-brain" we derive intuitively the disturbance of a system when it is observed, the non-commutation of observables and, using the Einstein-Podolsky-Rosen situation, we derive the observer-dependent nature of the wave function. The wave function is mental and thus physical as well. We discuss, in the context of the dream metaphor, the "filling in of history by observation" associated with Wheeler's "delayed choice" thought experiment. We require a "recursion principle" by which the meta-brain produces the dreams or streams of consciousness which produce brains which produce the streams of consciousness. The meta-brain contains the non-local hidden variables which determine the content of the "dreams" or streams of consciousness. We discuss the anthropic principle within the "recursion principle" and eliminate from the multi-verse all (dream) universes which cannot produce a brain. We also consider the concept of a wave function for an entire universe to be meaningless in this context, as an individual cannot observe the whole universe. This results, at least in part, from the limit on the speed of information transfer (the speed of light). C
69 Overlap with the different QUA Francis Schwanauer <franz@gw-in.usm.maine.edu> (Philosophy, USM, Portland, Maine)
ABSTRACT: Renewed efforts to gauge the informative aspect in quantum effects have finally identified the graviton and photon as the lowest promulgative degree of about-ness in quantum interference. What makes the “built-in proof” of these rest-mass-less particles convincingly informative is the fact that they are shared by overlapping parent particles. This most recently detected shortcut between presentation and representation, quantum-inference and quantum-causation, or sameness between showing and telling, reduces the new grammar of quantum interaction to such elemental laws as acceptable proximity, limits to collapse and/or expansion between the sufficiently “different” qua the other, and the elegant sharing and seamless transference of energy between spatial and temporal neighborhoods respectively. This, however, turns inertial frames into the axiomatic monopoly of consciousness, which not only dominates what implies in quantum-inference, but also what conditions in quantum-causation. If, therefore, conscious quantum-interference (qua quantum-information, transfer, etc.) holds, then the grip of consciousness becomes no less pervasive than that of a gravitational field on both the included and the neighboring phenomena. Though still proportional or restricted to its inertial frame as parent particle or self-inclusive superposition, it becomes the active agent behind the manipulation of its representational apparatus and the authentic origin of synchrony. This is shown both by its capacity to hurl never fewer than two such items as positive-mass particles in the form of classical waves in different directions within the two halves of its very brain at the speed of light (cp. the Yang-Mills theory), and its ability to coordinate unheard-of extremes, notwithstanding contrary alternatives (cp. Feynman’s quantum weirdness), for a final choice and decision procedure on the promulgation of matter and/or anti-matter to suit its long-run purposes. In short, if quantum coherence between the sufficiently “different” by way of overlap holds, so will quantum interference together with its more or less distant echo, the synthetic nature of quantum effects. C
70 Causality, Randomness, and Free Will Richard Shoup <shoup@boundary.org> (Boundary Institute, Saratoga, CA)
The experience of free will has often been regarded as a hallmark of consciousness, yet its meaning and very existence have been debated for millennia. In this talk, we explore the complex relationship between free will, determinism, causality (both forward and backward), and quantum randomness. The latter, a deep and central assumption in quantum theory, is associated with measurement interactions. From an analysis based on quantum entropy, it is proposed that quantum measurement is properly understood as a unitary three-way interaction, with no collapse, no fundamental randomness, and no barrier to backward influence. Experiments with quantum-random devices suggest that retro-causal effects are seen frequently in various forms, and can be shown to explain some anomalous phenomena such as clairvoyance and precognition. It is argued that all interactions are indeed unitary, reversible, and thus deterministic, but that large-number effects give a persistent illusion nearly equivalent to free will. C
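The step from "unitary" to "reversible" relied on above can be checked numerically in a few lines; the sketch below (an illustration of the general mathematical fact, not of the author's three-way measurement model) builds a random unitary and shows that applying its conjugate transpose recovers the original state exactly.

```python
# Sketch: unitary evolution is exactly reversible (U-dagger undoes U).
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)              # Q factor of a complex matrix is unitary

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)     # normalized initial state

forward = U @ psi                   # evolve the state
recovered = U.conj().T @ forward    # "run the interaction backwards"

print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary
print(np.allclose(recovered, psi))              # True: evolution is reversible
```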
71 Can a Computer have a Mind?: Non-computability of Consciousness Daegene Song <dsong@kias.re.kr> (School of Computational Sciences, Korea Institute for Advanced Study, Seoul, Korea)
Penrose has suggested that there may be a non-computable aspect in consciousness at the fundamental level, as in Gödel's incompleteness theorem or Turing's halting problem. It is shown that, as in Penrose's suggestion, consciousness in the framework of quantum computation yields a physical example of the non-computable halting problem. The assumption of the existence of the quantum halting machine leads to a contradiction when a vector representing the observer's reference frame is also the system that is to be unitarily evolved (i.e., consciousness in quantum language), in both the Schrödinger and Heisenberg pictures. C
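For orientation, the classical (non-quantum) skeleton of the contradiction the abstract builds on is Turing's diagonal argument, sketched below. The names `halts` and `diagonal` are hypothetical illustrations; the whole point is that no total, computable `halts` can exist.

```python
# Sketch of the classical halting-problem contradiction (diagonalization).
def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) would halt."""
    raise NotImplementedError("no such total, computable function can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:          # loop forever if the oracle says "halts"
            pass
    return "halted"          # halt if the oracle says "loops"

# Feeding diagonal to itself yields the contradiction: if halts(diagonal, diagonal)
# is True, then diagonal(diagonal) loops; if it is False, then it halts.
# Song's construction runs an analogous argument with the state vector that
# represents the observer's reference frame playing the role of `program`.
```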
72 Fundamental Biological Quantum Measurement Processes Michael Steiner, Uzi Awret, R. W. Rendell, Sisir Roy, <mjsasdf@yahoo.com> (Center for Quantum Studies, George Mason University, Fairfax, VA)
Wigner, von Neumann and others believed that consciousness and quantum state evolution are related. While this is a difficult open question, a simpler question is whether or not a process other than Schrödinger's equation is involved in basic biological processes. It is well known that use of Schrödinger's equation alone to treat interactions generally results in non-classical superpositions. Yet nature has managed to provide recognition processes, as well as to store information, that appear to be completely classical, that is, without superposition. Hence it seems reasonable to examine whether or not certain biological processes are somehow associated with the measurement process. We will explore the nature of the dynamic transition from Schrödinger-only (i.e., wave-only) evolution to where one gets measurement or collapse. We are supposing that the biological domain is where the collapse occurs. We examine biological macromolecules, which enable the creation of biological records and the finalizing of biological recognition processes. We will be especially interested in biological macromolecules and systems that were designed to function close to the border separating the two domains. We calculate the threshold for several basic biological processes and compare this to the lower bound TL calculated by canvassing current quantum experiments on mesoscopic systems. It is argued that most fundamental biological processes require recognition processes that must be inherently based on the measurement process. That is, nature has designed its systems taking into account the size or energy needed for measurement to occur. If this is the case, then we should be able to learn about the characteristics of measurement by examining biological systems. We will examine whether there is biological evidence that a threshold exists in ΔE·ΔX > T. Several fundamental biological processes are examined. The first is the manner in which protein chains are recognized. One of the basic and ancient elements that is common to all three domains of life (the Eukarya, Bacteria, and Archaea) is the signal recognition particle (SRP). The SRP has basic functionality that would be consistent with the measurement process. The SRP recognizes and binds to a signal sequence carried by the ribosome and then guides it to the rough endoplasmic reticulum (ER). These binding energies usually have three types of contributions, i.e. electrostatic interactions, hydrogen bonds, and induced dipole-dipole (van der Waals) interactions. Other processes examined include high-affinity protein interactions and protein-RNA complexes that are crucial to biological recognition and record creation. Antibody-substrate and pMHC-TCR complexes, hormones and their corresponding receptors, and interaction hotspots will also be examined. We will also review the current status of mesoscopic physics, and show where experiments that have verified Schrödinger evolution lie in terms of T. We will see that most experiments that have been conducted actually have a small ΔE·ΔX. For example, superconducting SQUID systems typically have a large ΔX but a very small ΔE. Such experiments give us a lower bound TL on the threshold. Based on the most up-to-date experiments, we will provide an estimate of TL. We will see that a given threshold can describe quite well very different physical situations such as ionization and the Rydberg atom, and nuclear processes. C
73 Why meaning is the harder matter: a Boh(e)mian anthropology Koen Stroeken <koen.stroeken@ant.kuleuven.be> (Anthropology, University of Leuven, Huldenberg, Belgium)
Mainstream anthropology has kept itself outside the mind/matter debate, just as most neuroscientists have, albeit for the opposite reason. Students of culture feel hopelessly dualistic when confronted with the dominant materialism that recasts the debate as a mechanistic challenge, that of neurocomputation, which attributes to the brain a sort of ‘immaculate conception’ of consciousness. If a hundred years of research of cultures taught us anything it is that the principle of natural selection can describe the function and survival of ideas (Atran, Sperber) but not their content and origin, that is, the semantic stuff selected. Meanings appear to be universally shared despite our brains being unique individual constellations of absolutely separate matter. That is why, in practice, ethnographers treat human minds as selections from a common consciousness. Defying both materialism and Cartesian dualism, the implication is that subjective experience arises not from 'mother nature' alone, but from interacting with another source of causation, 'father culture' so to speak. This is another way of saying, with Bohm, that matter does not equal consciousness and that we need meaning, a second, moulding (hence harder) type of matter, to bridge both. From an anthropologist's perspective the best candidate for an interdisciplinary paradigm of thought indeed seems Bohm's solution to the quantum riddle: our classical spacetime, the explicate order, selects from an implicate order of potentialities. A cultural selection from the quantum multiverse constitutes the particular spacetime that is our universe, and thus consistently determines what humans can be conscious of and measure. This measured content of consciousness has been experimentally proven to be non-local and quantum entangled (Aspect, Wheeler). What does this mean in a cultural reading of experiments? The fact of our conscious perception knowing the future betrays our physical belonging to a more encompassing reality, the multiverse, for which our (Einsteinian) spacetime is a selection, entirely completed as selections are. Our mind stands as it were at the edge of spacetime, itself unfortunately (as Bohm remarked) the only world we can think. Humans are bohemians in their world. I conclude more concretely with data on spirit possession which illustrate the exceptional parasympathetic nervous system of the human species. Naturally selected to suspend homeostatic reactions and to stand emotions, our body (not just the brain) managed to use the binary principle of meaning systems (inclusion/ exclusion) to further control homeostasis (intrusion/ synchrony) and become conscious of more. In biological terms consciousness would thus be the by-product arising during this suspension and control, for which I tentatively consider a number of macro-neural correlates. C
74 Consciousness and the measurement problem: A possible objective resolution Fred Thaheld <fthaheld@directcon.net> (Folsom, Calif.)
A recent mathematical analysis of the measurement problem by Adler (1), from the standpoint of Ghirardi's (2,3) Continuous Spontaneous Localization (CSL) theory, reveals that collapse of the wave function takes place in the rod cells of the retina in an objective fashion following amplification of the signal, rather than in a subjective fashion (as had been proposed by Ghirardi et al) in the brain, mind or consciousness. This analysis is in agreement with the positions taken by Shimony (4) and Thaheld (5), that this event takes place in the rod cells of the retina but at an earlier stage prior to amplification, involving the conformational change of the rhodopsin molecule. It is of historical interest to note here that both Wigner (6) (later in life) and Dirac (7) also espoused an objective process. Additional supporting evidence for an objective approach can be found in the perusal of rhodopsin molecule and retinal rod cell schematics (8), which graphically illustrate why collapse has to take place in this fashion. This can also be subjected to two different empirical approaches: one involving excised retinal tissue mounted on a microelectrode array and superposed photon states (9), or molecular interferometry (10,11) involving matter-wave diffraction, where a "collapsing" wave packet will lead to a suppression of interference. This proposed solution to the seven-decade-old dilemma of the measurement problem, calling for an actual collapse mechanism, requires a modification of the Schrödinger equation to include nonlinear discontinuous changes. This will then allow one to address related issues such as the Heisenberg 'cut' between the quantum and classical worlds, the validity of Everett's 'many worlds' theory (12), the possibility of controllable superluminal communication (13), the prospect that any living system with or without eyes might possess this same collapse ability, and the maintenance of entanglement after repeated measurements, with interesting implications for the Schrödinger 'cat' concept, finally leading to a new approach to the SETI issue via astrobiological nonlocality at the cosmological level (14). References: 1. Adler, S., 2006. quant-ph/0605072. 2. Aicardi, F., Borsellino, J., Ghirardi, G.C., Grassi, R. 1991. Found. Phys. Lett. 4, 109. 3. Ghirardi, G.C., 1999. quant-ph/9810028. 4. Shimony, A., 1998. Comments on Leggett's "Macroscopic Realism", in: Quantum measurement: Beyond paradox. R.A. Healey, G. Hellman, eds. Univ. Minnesota, Minneapolis. 5. Thaheld, F.H., 2005. quant-ph/0509042. 6. Wigner, E., 1999. in: Essay Review: Wigner's view of physical reality. M. Esfeld. Stud. Hist. Philos. Mod. Phys. 30B, 145. 7. Dirac, P.A.M., 1930. The principles of quantum mechanics. Clarendon, Oxford. 8. Kandel, E.R., Schwartz, J.H., Jessell, T.M., 2000. Principles of neural science. 4th ed. McGraw-Hill, New York. (See especially p. 511, Fig. 26-3 and p. 515, Fig. 26-6.) 9. Thaheld, F.H., 2003. BioSystems 71, 305. 10. Carlip, S., Salzman, P., 2006. gr-qc/0606120. 11. Zeilinger, A., 2005. Probing the limits of the quantum world. Physics World, March. 12. Everett, H., 1957. Rev. Mod. Phys. 29, 454. 13. Thaheld, F.H., 2006. physics/0607124. 14. Thaheld, F.H., 2006. physics/0608285. C
75 A New Theory About Time Jeff Tollaksen, Yakir Aharonov and Sandu Popescu <jtollaks@gmu.edu> (Dept of Physics & Dept of Computational Sciences, GMU, Fairfax, VA, USA)
We present a fundamentally new approach to time evolution within Quantum Theory. Several advantages of this new picture over the standard formulation of Quantum Theory are: 1) it can represent multi-time correlations which are similar to Einstein-Podolsky-Rosen/Bohm entanglement, but instead of being between two particles in space they are correlations for a single particle between two different times; 2) dynamics and kinematics can be unified within the same language; 3) it introduces a new, more fundamental form of complementarity (namely between dynamics and kinematics); and 4) it suggests a new approach to time-transience or subjective becoming, one of the most fundamental aspects of conscious experience. The last item is significant given Einstein's reflection that becoming, or the subjective now, does not and cannot occur within physics. As a consequence, to date, physics does not incorporate time-transience, i.e. space-time does not evolve or have dynamics. As an analogy, in a geographic map nothing indicates that one mountain vanishes and another appears; they all co-exist. Similarly, the passage of time has no fundamental or dynamical importance; it is merely an illusion. The new approach to time evolution incorporates becoming by utilizing new Hilbert spaces introduced for each instant of time. (In contrast, traditionally one Hilbert space is used to represent the entire universe.) We then define a Super-Hamiltonian, which has as its ground state one entire history for the universe. Using another fundamental discovery, which we call internal and external reality, we associate the time of this Super-Hamiltonian with both awareness variables and processes related to wavefunction collapse. The evolution of awareness or consciousness is then associated with an adiabatic evolution of the Super-Hamiltonian. Because a single Now requires integration over all of the Super-Hamiltonian time, this new approach also illuminates the common phrase (e.g. by Bohm): now is the intersection of eternity and time. C
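As an illustration of point 1), a schematic "two-time" object can be written by attaching a Hilbert space to each instant. The example below is generic notation for a single particle correlated across times t1 and t2, intended only as a sketch of the kind of structure described, not the authors' construction:

```latex
\[
  \mathcal{H}_{\mathrm{history}}
    = \mathcal{H}_{t_1}\otimes\mathcal{H}_{t_2}\otimes\cdots\otimes\mathcal{H}_{t_N},
  \qquad
  |\Psi\rangle
    = \tfrac{1}{\sqrt{2}}\left(
        |{\uparrow}\rangle_{t_1}|{\uparrow}\rangle_{t_2}
      + |{\downarrow}\rangle_{t_1}|{\downarrow}\rangle_{t_2}\right),
\]
```

so that correlations between measurements on one particle at t1 and t2 play the role that EPR/Bohm correlations play for two particles in space.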
76 Gravity minds? Parallels between the basic characteristics of consciousness and gravity. Imre András Török, Gábor Vincze <torokia@freemail.hu> (Department of Psychology, University of Szeged, Szentes, Hungary)
Our discourse consists of two parts. First we draw an epistemological and phenomenological parallel between two seemingly remote yet maximally encompassing phenomena of the world. With this, our aim is to help people understand the mind more deeply. At the moment neither gravity (the missing link of the Grand Unified Theory) nor conscious experience is explained in its origins. The extreme manifestations of gravity produce phenomena that correspond to the criteria of consciousness determined by Husserl. In the case of black holes we can observe, at the level of the phenomenon, the same kind of closedness that is obvious in the case of the subject. That is, the subjective experience of the individual is not accessible at the level of experience; similarly, the inaccessibility of the space of the black hole is obvious in the case of the physical phenomenon, and only its effects can be shown. Beside the phenomenological similarity of the features of the two basic phenomena, their attempted explanations are also similar in mainstream natural science. On the one hand, subjective experiences are considered to be the consequences of other basic phenomena, while gravity itself seems to be an independent physical phenomenon. In the second part of the discourse we give, provocatively and tentatively, a contesting explanation of gravity and subjectivity: in the first case we make the origin of gravity conceivable on a mathematical and physical basis (as the consequence of a complex phenomenon), and in the second case we give contesting explanations related to the materialistic reduction of consciousness, relying on biological evidence. The biological foundation of the reasoning will support the claim that the phenomenon of ipseity cannot be reduced to a materialist level, yet it can be placed within scientific psychology. C
77 Quantum information theory and the human brain: The special role for human unconscious information processing Maurits Van den Noort, Peggy Bosch, Kenneth Hugdahl <Maurits.Noort@psybp.uib.no> (Dept. of Biological and Medical Psychology, Division of Cognitive Neuroscience, University of Bergen, Bergen, Hordaland, Norway)
Concepts like entanglement, randomness, and complementarity have become the core principles of newly emerging quantum information technologies: quantum teleportation, quantum computation and quantum cryptography (Zeilinger, 2005). Although quantum computation promises to be a dominant form of information technology (e.g. Childress et al., 2006; Duan, Cirac, & Zoller, 2001), we do not yet know very much about the interaction between humans and quantum computers, or about the relation between quantum mechanics and (higher) brain functions (e.g. Koch & Hepp, 2006; Van den Noort & Bosch, 2006). In this presentation, behavioral studies, and studies that focus on the peripheral and the cortical level, will be discussed that suggest a special role for unconscious (emotional) information processing in human-computer interaction (Van den Noort, Hugdahl, & Bosch, 2005). The implications of these results both for human interaction with conventional computers and for human interaction with quantum computers will be discussed. References: Childress, L., Gurudev Dutt, M. V., Taylor, J. M., Zibrov, A. S., Jelezko, F., Wrachtrup, J., Hemmer, P. R., & Lukin, M. D. (2006). Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond. Science, 314, 281-285. Duan, L. M., Cirac, J. I., & Zoller, P. (2001). Geometric Manipulation of Trapped Ions for Quantum Computation. Science, 292, 1695-1697. Koch, C., & Hepp, K. (2006). Quantum mechanics in the brain. Nature, 440, 611. Van den Noort, M. W. M. L., Hugdahl, K., & Bosch, M. P. C. (2005). Human Machine Interaction: The Special Role for Human Unconscious Emotional Information Processing. Lecture Notes in Computer Science, 3784, 598-605. Van den Noort, M. W. M. L., & Bosch, M. P. C. (2006). Brain Cell Chatter. Scientific American Mind, 17(5), 4-5. Zeilinger, A. (2005). The message of the quantum. Nature, 438, 743. C
78 Mental causation, common sense and quantum mechanics Vadim Vasilyev <edm@rol.ru> (Philosophy, Moscow State University, Moscow, Russia)
Many authors who try to comprehend the nature of the connection of consciousness with quantum processes believe that the presence of consciousness in measurement procedures leads to the collapse of the wave function. In other words, they admit the causal efficacy of consciousness or qualia. It is quite obvious, however, that quantum events, taken as such, don’t reveal the causal efficacy of consciousness, and some well-known interpretations of quantum mechanics have no need for any assumption as regards the role of consciousness in quantum phenomena. Hence the importance of the quest for independent arguments in favor of the reality of mental causation and the refutation of epiphenomenalism. In the recent past there have been many interesting attempts to destroy epiphenomenalism – Elitzur (1989), Hasker (1999), Kirk (2005), among others. Their arguments are very sophisticated, but, as a rule, such arguments can be blocked with no less sophisticated counterarguments. The simplest refutation of epiphenomenalism would be a contradiction between this doctrine and the intuitions of common sense. Most philosophers, however, believe this is not the case. Indeed, while common sense assures us that, for example, our desires, considered as qualia, have an influence on our behavior, in fact it only assures us of a kind of correlation between desires and behavior, a correlation that might be an epiphenomenon of some basic neuronal processes. Nevertheless – and this is my main point – it is possible to show that common sense convictions presuppose the causal efficacy of consciousness after all. That’s because without such an assumption I simply couldn’t believe that other people have conscious states. I believe they have these states or qualia like I have because of their physical and behavioral similarity with myself. My conclusion is based on simplicity considerations. But if I consider conscious states as epiphenomena, a world in which only I am conscious (perhaps due to some peculiar property of my brain) is much simpler than a world where others are encumbered with qualia as well. Indeed, in the first world there is no multiplication of entities which are truly unnecessary and useless for the explanation of the reality given in my experience (Jackson (1982), Chalmers (1996) and Robinson (2007) missed this point). Thus, if I assume that consciousness is epiphenomenal, I would hardly believe other people have consciousness at all. But common sense dictates that I believe they have conscious minds. Hence, my common sense comprises an implicit denial of the epiphenomenality of conscious states. So we see that in some cases our common sense may even favor quantum mechanics, or, to be more exact, may support one of its most radical interpretations. References: Chalmers, D. 1996. The Conscious Mind. New York: Oxford University Press. Elitzur, A. 1989. Consciousness and the incompleteness of the physical explanation of behavior. Journal of Mind and Behavior 10: 1–20. Hasker, W. 1999. The Emergent Self. Ithaca, NY: Cornell University Press. Jackson, F. 1982. Epiphenomenal qualia. Philosophical Quarterly 32: 127–136. Kirk, R. 2005. Zombies and Consciousness. New York: Oxford University Press. Robinson, W. 2007. Epiphenomenalism. Entry in the Stanford Encyclopedia of Philosophy. C
79 Spinoza, Leibniz and Quantum Cosmology Laura Weed <weedl@strose.edu> (Philosophy, The College of St. Rose, Albany, NY)
During the Scientific Revolution, the mechanism of Isaac Newton and Rene Descartes triumphed over the more complex epistemological and metaphysical systems of Baruch Spinoza and G.W. Leibniz because the Spinozistic and Leibnizian systems seemed to speculate about unnecessary entities and forces, violating Ockham’s simplicity rule for scientific theories. In light of contemporary quantum mechanics, however, it may now be time to revisit some of the metaphysical and epistemological proposals of these two authors. I will propose three general metaphysical and epistemological positions espoused by one or both of these authors that may appear less speculative and extraneous to present day scientists than they did to their counterparts of the past. The general positions are 1) that parts and wholes interrelate, forming an organic cosmos rather than a congeries of compounded components; 2) that the totality of what exists exceeds human faculties and methodologies for acquiring knowledge; and 3) that the relationships among the varieties of temporal scales in the universe preclude a meaningful conception of universal mechanical causation. First, Leibniz, Spinoza and quantum mechanics agree that the world is not a computational result of adding parts. Rather, the cosmos is an organic system in which parts and wholes are mutually determining of one another. The paper will explore ways in which Leibnizian monads, Spinozistic modes and the electrons in the Bell experiment reflect a holistic and inter-relational cosmos, rather than a compositional world. Second, while Newton and Descartes were both optimistic about the capacity of human knowledge to comprehend all there is, and to ultimately result in a grand unification of science, Spinoza and Leibniz both proposed perspectival and methodological limits on the human potential for knowledge. These limits are reflected, I shall argue, in the role of the observer in quantum theory, and in the Everett many-worlds hypothesis. Third, the concept of global mechanical causation proposed by Newton and Descartes presupposes a uniform global space-time, across which these causes might unfold. Both Spinoza and Leibniz understood time as a multi-layered phenomenon, distinguishing among multiple local, regional and eternal conceptions of time. I will suggest that their paradigms might be more useful for interpreting Feynman’s proton and electron graphs metaphysically. Clearly, much of what Spinoza and Leibniz wrote is simply out of date and insufficiently prescient to be of any help with contemporary quantum understandings of reality. But I would like to propose that at least the three ideas articulated in this paper would be helpful in constructing a metaphysics and epistemology for the weirdness of the quantum world. Popular scientific conceptions of knowledge and reality have been wedded to Newtonian mechanistic materialism in ways that have become unhelpful for science. This new, although recycled, direction might be more productive. C
80 Towards a Quantum Paradigm: An Integrated View of Matter and Mind George Weissmann <georgeweis@aol.com> (Berkeley, CA)
A fundamental paradigm is the set of conditioned structuring tendencies that shape our experience existentially, conceptually and perceptually. It is based on a set of embodied assumptions or presuppositions. We call the specific fundamental paradigm which grounds our culture’s common sense and scientific views and which structures our existential reality, the Classical Paradigm (CP). A critical examination and analysis of relativistic and quantum phenomena reveals that the assumptions which define the CP break down in large parts of the total phenomenal domain. Remarkably, a century since the relativity and quantum revolutions, we have not yet succeeded in developing a new fundamental paradigm, a Quantum Paradigm, that could naturally ground relativity and quantum physics ontologically. The mainstream Copenhagen Interpretation of QT is instrumentalist and yields the procedures we so successfully use to calculate the probabilities of the various possible outcomes of an experiment, given its preparation. But it does not provide an account of what is actually occurring in an experiment. In fact, when one tries to interpret it ontologically, it suffers from inner inconsistencies (measurement problem). The Copenhagen interpretation suggests that the topic of QT is not the world itself, but our knowledge of the world, the structure of experience. Various alternative interpretations have been proposed over the years in an attempt to remedy QT’s lack of an ontology. Most of them remained attached to core CP assumptions, including objective realism, which imply banishing consideration of consciousness. Some of these attempts were shown to be incompatible with the predictions and the structure of QT itself, while others survived but suffer from significant shortcomings. As a result, we are still navigating science, our own lives and society on the basis of a fundamentally flawed world view. Our claim is: we cannot ground quantum theory in the CP. In particular, we can no longer banish experience/consciousness from the picture and still hope to understand what QT is telling us about the nature of the world. We report on some promising progress towards the development of a Quantum Paradigm which provides an ontology for QT and inextricably integrates matter and mind. Henry Stapp, building on foundations offered by Whitehead and Heisenberg, has proposed an ontological model which builds on the Copenhagen interpretation and describes an unfolding world process, consisting of events that are - in human terms - moments of our experience. The probabilistic dynamics (tendencies) of this process are described by quantum theory. We propose integrating into this framework the relational postulate of Carlo Rovelli, which states that there are no facts or occurrences in an absolute sense, that these are always relative to a measuring or perceiving system. We further take into account insights gained by consideration of experimentally observed anomalies which suggest that quantum events are not fundamentally random but more like “decisions”. Proceeding thus, we arrive at a rudimentary and preliminary but heuristically useful version of a QP which could ground QT as well as human experience including its observed “anomalies”, and which encounters no “hard problem of consciousness”. C
81 A Model of Human Consciousness (Global Cultural Evolution) Marcus Abundis <marcus@cruzio.com> (unaffiliated, Santa Cruz, CA)
Evolutionary efficaciousness is measured by how well a given species adapts itself to its environment. In applying this premise to humanity, a model of global human cultural evolution is hypothesized. This exploration of Human Creativity focuses on: - emergence of humanity's direct conscious sense (personal ego), - the field of reasoning from which this conscious sense arises (imagination), - the field of reasoning that follows (knowledge), - and the system in which all is bound together (evolution). All else is derivative - a litany of subsequent emergent events (worship, war, work) endlessly folding back upon themselves, revealed as "civilization." This study begins with the organism that originally births humanity, Earth. Earth's geologic record shows at least five episodes of mass extinction followed by recovery. From these episodic cycles of Earthly death and rebirth, five evolutionary dynamics are named. The millennia-long interplay of these five dynamics brings greater diversity and complexity of life, until we arrive at the species of our epoch, including humankind with its challenges of consciousness. Earth's overarching evolutionary dynamics set the stage upon which human consciousness awakens. These dynamics organically stress (test) all organisms for viability, and trigger within humanity's adaptive psychology an “adverse relationship” with environment. A central focus of evolutionary fitness (rivalry with Nature’s adversity) mars humanity’s psyche with a sacred wound, as it appears "Mother wants to kill us?!" This sense of adversity provides an evolutionary catalyst (bootstraps consciousness) and draws us to move expansively from discomfort to comfort. We are thus physically and psychologically charged to create adaptive responses, cultivating our "experience of consciousness." The sacred wound presents a paradox central to humanity’s continued expansion of consciousness. It lives in all intellectual and spiritual questions of unity vs. diversity (Earth-Mother vs. humanity) as the mythologizing of Natural adversity. Resolution of paradox begins in primal innocence at The Great Leap Forward (a state of unconscious unity) and evolves towards fully-manifest awareness (god-self, unity consciousness), prompting many states of consciousness along the way. But it is adversity that awakens humanity's unique creative spirit-dynamo to birth successive states of consciousness as a principal adaptive response. Our struggle with paradox fluoresces human consciousness towards diversity and complexity, following Earth's own metabolic trend. Humanity’s mirroring of Earth's evolutionary tendency (diversity and complexity) suggests functional means for human expressiveness. This expressiveness is mapped to Earth's five evolutionary dynamics, using five gender-paired archetypes. Our mirroring of Earth's evolutionary dynamics via these five archetypes (bio-culturalism) propels human consciousness across time. Humanity's bio-culturalism is amplified in these gender-paired archetypes and the mythic devices they enable. At a first level, "high/middle/low dreaming" archetypes reflect the hopes of humanity (creativity) set against Nature’s adversity, also seen in humanity's triune psyche: id, ego, superego, and other important triads. Deepening interoperation of this triune psyche completes two more of the five archetypes to create actualized archetypes. Actualized archetypes latently emerge as diverse but interdependent “realities” for individuals, communities, social enterprises, nation-states, etc. 
(civilization). P
82 Quantum spaces of human thinking Valentin Ageyev <ageyev@mail.kz> (psychology, Kazakh National University, Almaty, Almaty, Kazakhstan)
Thinking is the ability to transform objective relations of nature into the purposes of human actions. Objective relations are quantized relations and are divided into four types: random, regular, system, and relations of genesis. Human thinking has a quantized character too, as it is determined by quantized objective relations. Random relations are displayed by the magic (sensual) type of thinking. Regular relations are displayed by the mythological (intuitive) type of thinking. System relations are displayed by the rational (logic) type of thinking. Relations of genesis are displayed by the creative (historical) type of thinking. Magic (sensual) thinking is the way of transformation of objective random relations into the sensory purposes of spontaneous actions. The man operating in the spontaneous way recreates the probable space of nature. Spontaneous action is determined by the sensory purpose, which is a product of magic (sensual) thinking. Mythological (intuitive) thinking is the way of transformation of objective regular relations into the perception purposes of regular actions. The man operating in the regular way recreates the ordered space of nature. Ordering action is determined by the perception purpose, which is a product of mythological (intuitive) thinking. Rational (logic) thinking is the way of transformation of objective system relations into the symbolical purposes of system actions. The man operating in the system way recreates the holistic type of nature. System action is determined by the symbolical purpose, which is a product of rational (logic) thinking. Historical (creative) thinking is the way of transformation of objective relations of genesis into the sign purposes of creative actions. The man operating in the creative way recreates the historical space of nature's development. Creative action is determined by the sign purpose, which is a product of historical (creative) thinking. Magic (sensual) thinking is the way of "cutting" in nature its first quantum space – the probable. Products of magic (sensual) thinking are the "states" ("probability"), which represent themselves as the purposes of spontaneous actions. As the result of spontaneous actions, their purposes turn into the "magic" (sensual) knowledge expressing the random character of nature. Mythological (intuitive) thinking is the way of "cutting" in nature its second quantum space – the regular. Products of mythological (intuitive) thinking are «object structures» ("orders"), which represent themselves as the purposes of regular actions. As the result of regular actions, their purposes turn into the "mythological" (intuitive) knowledge expressing the ordered character of nature. Rational (logic) thinking is the way of "cutting" in nature its third quantum space – the holistic. Products of rational (logic) thinking are «object forms» («formal logic»), representing themselves as the purposes of system actions. As the result of system actions, their purposes turn into the rational (conscious) knowledge expressing the holistic character of nature. Historical (creative) thinking is the way of "cutting" in nature its fourth quantum space – the historical. Products of historical (creative) thinking are «genesis forms» («genesis logic»), representing themselves as the purposes of creative actions. As the result of creative actions, their purposes turn into the historical (sensible) knowledge expressing the historical character of nature. P
83 Concurrency, Quantum and Consciousness Francisco Assis <fmarassis@gmail.com> (Electrical Engineering, Universidade Federal de Campina Grande, Campina Grande, Brasil)
In this paper we review results on the theory of consciousness due to three authors: Tononi, Sun and Petri. In [1] Tononi proposes that the consciousness level of a system can be measured by its capacity to integrate information, and that the quality of consciousness is given basically by the topology of the system. The "system" in Tononi's theory is modeled by a graph G = (V, A, P), where V = {1, 2, ..., n} is the set of vertices, A, a subset of V × V, is the set of edges, and P is a probability distribution on the vertices V. In Tononi's approach, A stands for causal relations between connected vertices, i.e. an edge means the existence of a causal relation between its vertices. Following this setup, the "amount of consciousness" of the system was associated with the minimum information bipartition. The first contribution of this paper is repositioning the measure proposed by Tononi in the framework of concurrency theory due to Petri [2]. One very remarkable feature of Petri's theory is its physical motivation: it sought to determine fundamental concepts of causality, concurrency, etc. in a language-independent fashion. Also, for insiders it is easy to see that the concepts of line, cut and process unfolding of a marked net correspond respectively to the physical concepts of time-like causal flow, space-like regions and solution trajectories of a differential equation. For example, in the paradigm of concurrency theory and its developments, e.g. Savari [3], the graph proposed by Tononi is a noncommutation graph. The new point of view we develop is consistent with Sun's [4] application of the idea that the success of physical theories rests on a hierarchy of descriptions similar to the modular hierarchy found in computer and electronic systems. For example, it is well known that unconscious processes cannot generate a complex verbal report while conscious activation can do it. Access consciousness and phenomenal consciousness are taken into consideration and related to other detailed levels of perception and memory. However, Sun is clearly interested in constructing a computational machinery able to behave like a conscious being. At this point, we change gears to treat more fundamental ontological aspects of the conscious experience itself and its relationship with quantum physics. The main remark is that concurrency theory, with the support of an ontological status, can offer a consistent starting point for a theory of consciousness. [1] Giulio Tononi, "An Information Integration Theory of Consciousness", BMC Neuroscience, 5:42:1-22, 2004. [2] Carl Adam Petri, "Concurrency Theory", in Lecture Notes in Computer Science, pages 2-4, 1987. [3] S. A. Savari, "Compression of Words Over a Partially Commutative Alphabet", IEEE Trans. on Information Theory, 50(7):1425-1441, July 2005. [4] L. Andrew and Ron Sun, "Criteria for an Effective Theory of Consciousness and Some Preliminary Attempts", Consciousness and Cognition, 13:268-301, 2004. P
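The "minimum information bipartition" summarized in this abstract can be illustrated with a toy calculation. The sketch below scores each bipartition of a small joint distribution by plain mutual information and takes the minimum; this is a simplified stand-in, not Tononi's effective-information measure nor the paper's concurrency-theoretic reformulation, and the function names and example distribution are ours:

```python
import itertools
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution p(x, y) given as a 2-D array."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of the first part
    py = joint.sum(axis=0, keepdims=True)   # marginal of the second part
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def min_bipartition_information(p, n):
    """Minimum mutual information over all bipartitions of n binary units.

    p is a length-2**n vector of joint state probabilities; unit i is bit i
    of the state index. A toy stand-in for the minimum information bipartition."""
    best = None
    units = list(range(n))
    for k in range(1, n // 2 + 1):
        for part in itertools.combinations(units, k):
            a = list(part)
            b = [u for u in units if u not in part]
            # Marginalize the joint distribution onto the two parts.
            joint = np.zeros((2 ** len(a), 2 ** len(b)))
            for state, prob in enumerate(p):
                ia = sum(((state >> u) & 1) << j for j, u in enumerate(a))
                ib = sum(((state >> u) & 1) << j for j, u in enumerate(b))
                joint[ia, ib] += prob
            mi = mutual_information(joint)
            best = mi if best is None else min(best, mi)
    return best

# Units 0 and 1 are perfectly correlated, unit 2 is independent: the weakest
# cut ({2} vs {0,1}) carries zero information, so the minimum is 0.
p = np.zeros(8)
for s in (0b000, 0b011, 0b100, 0b111):
    p[s] = 0.25
print(min_bipartition_information(p, 3))   # -> 0.0
```

The design choice that matters here is the one the abstract highlights: the score is taken over the weakest cut, so a system splits into the parts across which the least information is integrated.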
84 Consciously 'chosen' Quantum Design Gerard Blommestijn <gblomm@gmail.com> (Amstelveen, Netherlands)
This presentation is based on the view that the self, as 'I', experiences the outcome of the quantum mechanical (QM) reduction process related to the ultimate step of perception in the brain, and this is the subjective perception. In the same way the self chooses the outcome of a QM reduction process that forms the initial step of a motor activity in the brain, and this is the subjective choice. This thesis proposes that these QM reduction processes connect consciousness to perception and choice not only in humans, but also in all other life-forms (with or without brains) and even in the most primordial (bio)chemical compounds leading to the evolution of life. Compared to the standard scientific way of understanding nature, an essence of consciousness is added, this being the totally subjective, experiencing and choosing 'I'. So, the 'subjectiveness' of a molecule 'chooses' the outcomes of reduction processes that determine the actions of this molecule (all according to the quantum mechanical probabilities). For instance, at the start of the evolution of life, a molecule 'chooses' outcomes that move it towards being an essential part of the beginning of the first 'proto-cell'. Here the same principle may be at work as we see when light passes through a succession of many slightly tilted polarizing filters; repeated quantum measurements of the polarization of the photons 'guide' it in a more and more tilted direction. In the same way the continuous conscious perception and 'choice' of biomolecules may quantum mechanically 'guide' (beginning) living systems through their 'design' steps. This principle of consciously 'chosen' Quantum Design will be explained, as well as its application to the processes shaping life and evolution, largely according to the ideas of Johnjoe McFadden documented in the book 'Quantum Evolution' (Flamingo, 2000). P
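The polarizing-filter analogy used in this abstract is a standard calculation: each filter tilted by a small additional angle transmits with Malus-law probability, so many gentle steps steer the polarization through a large angle with little loss. A minimal sketch, with an illustrative 90-degree total rotation (the function name and the chosen values are ours, not the author's):

```python
import numpy as np

def transmission(total_angle_deg, n_filters):
    """Probability that a photon survives n_filters polarizers whose axes
    rotate in equal steps through total_angle_deg (Malus's law per step)."""
    step = np.deg2rad(total_angle_deg / n_filters)
    return np.cos(step) ** (2 * n_filters)

for n in (1, 3, 10, 100):
    print(n, round(float(transmission(90, n)), 3))
# 1 0.0, 3 0.422, 10 0.781, 100 0.976: more, gentler steps 'guide' the
# polarization through 90 degrees with probability approaching 1.
```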
85 Two Gedankens, One Answer; Cloudy weather on the Mind/Body Front Michael Cloud, Sisir Roy, Jim Olds <mcloud1@gmu.edu> (Krasnow Institute, George Mason University, Centreville, Virginia)
We consider approaches whose purpose is to investigate the relationship between consciousness/mind and matter/brain hardware in the context of testable theories. If consciousness is to be resolved as strictly arising from matter in a testable manner, it would follow that one of two strategies should be pursued: importing objective data into consciousness, or exporting subjective conscious experience out to the objective world. We therefore investigate two gedankenexperiments. One involves feeding objective brain state information (e.g. MRI-like data) to the subject of that data in real time, and subsequently asking the same subject to make experimental observations of that data. The second experiment is to consider the issues arising from a calculation (or testable Prediction Engine) attempting to predict its own future behavior. We suggest that both questions involve significant practical difficulties, and raise the question of whether they can be completed in the general case. We conclude with the question of whether, under very basic requirements on hardware, the issue of subjective vs. objective can be testably resolved. P
86 Reassessing the Relationship between Time and Consciousness Erik Douglas <erik@temporality.org> (Philosophy (Science, Physics, Time...), Independent Scholar, Portland, OR)
I begin with a review of the key empirical results and ideas put forward concerning the relationship between time and consciousness over the past twelve years. Time is, of course, a fundamental variable and background notion in most theories, and this is no less the case with explanations about the origin of consciousness. However, our understanding of time is itself heavily dependent on our interpretation of mind and human experience, and herein we find the kind of circular semantic relationship between key notions that suggests itself as a potentially fruitful approach to disclosing elements of the Hard Problem of consciousness to genuine scientific investigation. Following an overview of the general problem space as it is at present, I will turn to my own research into making one very important facet – perhaps the essential feature – of time explicable: the so-called passage of time. Making temporal transience explicit means finding a way to articulate its properties so as to avail them to scientific and physical inquiry. I undertake this through the construction of models which distinguish the qualities ascribed to time in its many applications and contexts, with special attention given to two classes of temporal models: Rhealogical and Chronological. I will use Smythies's (2003) JCS article as a point of departure, but significant parts of this talk will draw from my recently published work (cf. Douglas, 2006) and will incorporate material from a forthcoming article to be submitted to the JCS. As a philosopher, my intent is less to answer ill-conceived questions than to re-pose them in the first place so that they may be properly subject to empirical study. As such, it is my hope to engender a new direction to pursue in how we think about and engage with the study of consciousness. P
87 The Affect is all at once cognition, motivation and behaviour Veronique Elefant-Yanni, Maria-Pia Victoria Feser, Susanne Kaiser <veronique.elefant-yanni@pse.unige.ch> (Affective sciences, University of Geneva, Geneva, Geneva, Switzerland)
We commonly perceive semantic terms which characterize the affect a person feels on a bipolar continuum, going from merry to sad for example. However, in the affective sciences there is a persistent controversy about the number, the nature and the definition of the dimensions of affect structure. We consider the affect to be the momentary feeling a person has at any time, induced by the situation as a whole, including internal and external stimuli. Responding to the methodological criticisms addressed to the preceding studies, we reconciled the principal theories regarding affect structure within the same experimental setting. In particular, using semantic items from all around the circumplex we found three bipolar independent dimensions, and using only the PANAS semantic items we found two unipolar dimensions. Finally, we propose a heuristic theorization of affect based on a current firmly established in the social sciences, coherent from semantics to sociology but largely ignored by researchers in the affective sciences, that allows us to postulate that affect is all at once cognition, motivation and behaviour. The affect is an ever-present unconscious process monitoring our environment, but it is also, as a summation, the first conscious source of knowledge that disposes us, mind and body, to respond to this situation. As the affect aggregates and sums all the many pieces of information about our situation in no time, we should consider its relation with the quantum consciousness hypothesis. P
88 Imagine consciousness as a single internal analog language formed of ordered water forged during respiration in concert with experience. Ralph Frost <refrost@isp.com> (Model Development, Frost Low Energy Physics, Brookston, IN)
Common sense tells us that all of the abstract math symbols and expressions are secondary, and thus arise from some primary, internal "analog math". That is, the abstract stuff is wildly secondary and only the analog-energetic stuff is primary. Cutting our layered cake in this new manner lets us focus on the stuff that's not in the streetlight's intense glare. Pawing around out beyond the paradigmatic shadows, fumbling through the debris, searching for the right analog math then becomes some sort of quest for a new imagery that's somehow related to our baseline energetics. Keeping things simple, that means that we're looking first at the respiration reaction: organics + oxygen -> carbon dioxide + water + new parts + some energy flow. This reaction, recycling carbon back from the flip-side of photosynthesis, powers the down-gradient neurology and everything else. Thus, that entire nervous segment must also be sort of secondary, or just more involved in output/communications functions. Plus, this view says that wherever there is high oxygen consumption there ought to be a high, stoichiometric formation and flow of newly formed/forming water molecules -- a.k.a. a highly rational, wildly repeatable internal analog math process, influenced by the "vibrations" passing through each site where the reaction is taking place. Since a water molecule generally is a tetrahedral-shaped unit with two plus and two minus vertices, within any enfolding field there are at least six ways each molecule can form or emerge. Considering n units forming in a sequence, this leads directly to a highly rational 6^n internal analog math. Setting n=12, 6^12 gives us 2,176,782,336 different ways to scribble these 12 units together; n=8, or n=13, or n=16, gives us different sorts and sets of associative/logical patternings -- more variations on the same theme. Allowing that the repeating patterns of vibrations in the surroundings play THE big role in which patterns keep repeating in the sequences of water molecules that keep emerging, we arrive rather quickly at a moderately logical feel for the common internal analog math "language" that runs in the unconscious, subconscious, and conscious regions, plus the senses, memory storage (short-term, and, when water patterns are bound with organics, longer-term), plus imagination-creativity, "feelings and impressions", and provides one way to hook fight-flight impulse-momentum directly to motility. That is, we get a quick and dirty introductory view of our common "wave mechanics". Is this THE internal analog math? You tell me. Put it to the experimental test. Stop breathing and find out what happens to your consciousness. P
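The counting claim in this abstract (six possible orientations per molecule, 6^n sequences, 6^12 = 2,176,782,336) is plain arithmetic and easy to check; a minimal sketch for the n values the author mentions:

```python
# Number of possible n-step sequences when each newly formed water molecule
# can emerge in one of 6 orientations, for the n values cited in the abstract.
for n in (8, 12, 13, 16):
    print(n, 6 ** n)   # n = 12 gives 2,176,782,336, as stated
```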
89 The sum over history interpretation of neural signals applied to orientation sensitive cortical maps. Roman Fuchs, Gustav Bernroider <Roman.Fuchs@sbg.ac.at> (Organismic Biology, Neurosignaling Unit, Salzburg, Austria)
Higher level brain functions correlate with the spatio-temporal signal dynamics behind ensembles of nerve cells. The overall situation can be figured as a mapping of the history of membrane currents to the absence or presence of a nerve impulse at a given time and location. This general frame includes all possible signal amplitudes, including the quantum scale, that causally precede the stimulus-sensitive activity of engaged nerve cells. Neural activities, on this view, can be considered as complex projection amplitudes that do not have to follow a single unique path, but can comprise a large set of alternatives in coherent superposition. The physics behind this concept goes back to the sum-over-histories interpretation, originally proposed in the diagrammatic perturbation theory of R. Feynman. In a previous paper we applied Feynman's perturbation theory to phase-dependent coding mechanisms in the brain (Bernroider et al 1996). Here we demonstrate its applicability in the analysis of layer 2 iso-orientation sensitive cortical activity maps (*). The theoretical background and, in particular, the relation to studies of neural correlates of consciousness (NCC) will be given in a separate paper (Roy and Bernroider, this issue). Bernroider G, F. Ritt and EWN Bernroider (1996), Forma, 11, 141-159. Roy S and Bernroider G, this issue. (*) Images of cortical activity maps were generously supplied by T. Bonhoeffer, MPI Munich. P
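The "sum over histories" invoked here is Feynman's standard prescription; written generically (this is the textbook form, not the authors' neural-signal application):

```latex
\[
  A(\mathrm{final}\leftarrow\mathrm{initial})
    \;=\; \sum_{\text{paths } x(t)} e^{\,i S[x(t)]/\hbar},
  \qquad
  P \;=\; |A|^{2},
\]
```

where S[x(t)] is the action of each alternative history and all alternatives contribute in coherent superposition before the modulus squared is taken.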
90 Consciousness as a black hole: perceptory cell and dissociated quantum Johann Ge Moll <Johanngmoll@gmail.com> (Department of Psychiatry, Hospital Karlucovo, Medical Academy Sofia, Sofia, Bulgaria)
1) Unlike the traditional opinion, Consciousness does not participate in the Reduction of the Wave Function, but is responsible for the reverse procedure of the “Restoration of the Wave Function”, re-transforming the Perception Function back into the Wave Function. 1.1) Similar to a Black Hole, the Consciousness swallows matter and energy, and radiates back Information. 2) The Consciousness is an ontological mechanism for de-materialization and de-temporalization: Consciousness dematerializes the body. Here it plays the role of a cosmological machine for the re-transformation of the Macroscopic Present into the Quantum Future. This re-transformation of the Present into the Future occurs as a transformation of the Present into Memory. 3) The transformation of the Actualistic Energy into Possibilistic Information occurs as a transformation of the Forgetting Fantasizing Energy into Remembering Form, or – briefly – the transformation of Time-Oblivion into Memory. 4) The transformation of the Macroscopic Present into the Quantum Future has the following consequence: the transformation of the Actualistic Universe into the Possibilistic Universe. 5) The transformation of Actualistic ontology into Possibilistic ontology is equal to the transformation of Asymmetry into Symmetry. 6) Symmetry is a logical equivalent of Objective Memory = Omni-Order = Omni-Arrangement = Chaos = Pseudo-Entropy = Quantum Future = Kingdom of Possibility = Objective Knowledge = Information. 7) As a Black Hole, Consciousness curves time in a perpendicular direction and forms Perpendicular Simultaneous Instantaneous Time. 8) By gathering together all Past, Present and Future, Consciousness performs a Contraction of Time. 9) As “Time Contraction,” Consciousness verticalizes the Epochs. 2. We described the human organism as a system of two contrary, simultaneous ontological movements: the movement of “Materialization” and the movement of “De-materialization.” The act of transformation of Possibilistic Objective Knowledge into Actualistic Subjective Matter, which takes place as the transformation of the Possibilistic Quantum Future into the Actualistic Macroscopic Present (insofar as the Possibilistic Quantum Future is the kingdom of Knowledge and the Actualistic Macroscopic Present is the kingdom of Matter), is responsible for the movement of “Materialization.” That transformation of Quantum Future into Macro-present occurs as the notorious act of reduction of the Wave Function. It is precisely that reduction of the Wave Function which transforms the Wave Functions of Information into the Perception Functions of matter and the body, and these Perception Functions, in turn, build the perception organs and the personal perception cell structures and organs of the body. The Force and the Impulse standing behind the above-mentioned movement of transformation of the Possibilistic future into the Actualistic present, performing the act of reduction of the Wave Function (and actually streaming from the Spirit – Matter), is the World Asymmetric Anti-gravity Force, which is realized subjectively as an act of Fantasy and the analytically working Consecutive Temporal Intellect, and is objectively presented as an act of “Objective Chance-Fantasy.” The reverse process of re-transformation of Actualistic Subjective Matter into Possibilistic Objective Knowledge is responsible for the reverse movement of de-materialization, which occurs as the reverse transformation of the Actualistic Macroscopic Present into the Possibilistic Quantum Future. 
This reverse re-transformation of the Macroscopic Present into the Quantum Future occurs as an act of “Restoration of the Wave Function.” The Restoration of the Wave Function is realized as the re-transformation of the Perception Function of matter and body back into a Wave Function of Information. Consciousness is the organ that performs this reverse process of “Dematerialization” of the body and Matter. P
91 The enhanced perceptual state Catarina Geoghan <cgeoghan@ntlworld.com> (Brighton, England)
In the early stages of psychosis, the prepsychotic phase, and also during meditation, individuals frequently experience enhanced perceptual sensitivity, whereby sights and sounds appear brighter and louder than usual. It will be argued that this is due to increased facilitation of a coherent reference frequency. This is based on a holographic model for perception according to which increased coherence results in increased response to perceptual stimuli. P
92 Reveals the core secret of mind and its mechanism Sanjay Ghosh, Papia Ghosh <yogainstruments@yahoo.co.in> (NA, Spectrum Consultants, Howrah, West Bengal, India)
Our world needs a singular answer which can satisfy entirely the quest about the mind and its mechanism. Now, the question is, can we expect to get such an answer by following the conventional process of observation? Certainly not. Then what should we do? We need to follow a completely new method of observation. What could be the necessary feature for such an observation technique? It must be a process based on a new nature of instruments, and the act of observation will be threefold in nature: a) first, we have to learn the art of extracting energy or apparent consciousness from all sorts of instruments; b) second, we have to enter into the network of our dormant nervous system, the other name of which is the finer part of mind; c) finally, we need to know the technique of contemplation on natural objects, like huge celestial and various earthly bodies. The accumulated power and the quantum of consciousness to be earned by the said succession will boost one to enter into the causal start of manifestation, and so of mind. There the number of active elements to be seen is reduced to one, and that itself will pronounce the answer to ‘what mind is’! By that time, the mechanism of the working of mind will be fully known, because one will have crossed the entire track: starting from super-gross artificial instruments to bio-physiological instruments and lastly the natural instruments. In fact, our urge towards manifesting ourselves in the name of nature creates tremendous resistance within ourselves, and therefore we become complex or opaque in nature. So, on the other side, if by adoption of some method we can reduce our resistance, we will start becoming simple, and so almost transparent. The said transparency is actually the universal nervous body with an unlimited quantum of power. The whole purpose of the human being is to realize that condition by uniting with real consciousness. Our new package consisting of 236 instruments will lead you to attain such a condition in the quickest possible time. In Quantum Mind 2007, we propose to give a live demonstration of a set of 3 instruments for immediate understanding. These instruments are: 1) Near Vision Instrument: this will unveil the secret of conversion from a transparent to an opaque object and vice versa without using any chemical reagent or applied electricity. 2) Net Metallic Lens Instrument: how the metallic ingredients of our body largely affect our vision and create tremendous illusion is to be seen physically with this instrument. 3) Eye Electricity Instrument: how the most sensitive as well as vital organ, the eye, produces a variety of unknown natures of power, one will be able to experience with this instrument. Finally, this paper, in actual terms, is a live demonstration of the mechanism of our Mental Syndrome. P
93 A soul mind body medicine - a complete soul healing system using the power of soul Peter Hudoba, Zhi Gang Sha, MD (China) <sharesearchfoundation@yahoo.ca> (Sha Research Foundation, Burnaby, British Columbia, Canada)
In recent decades, there has been an upsurge of new concepts of treatment. Words like “integrative,” “complementary,” “alternative” and “holistic” now permeate not only the healthcare field, but also everyday discussion. Various forms of mind-body medicine have become more and more popular, to the point of being widely accepted. These modalities emphasize the mind-body connection, which encompasses the effect of our psychological and emotional states on our physical well-being and the power of conscious intent, relaxation, belief, expectation and emotions to affect health. The authors of this paper discuss Soul Mind Body Medicine as an adjunct healing modality to conventional standard medical treatment. Mind over matter is powerful, but it is not enough; soul over matter is the ultimate power. The healing power of the mind and soul can be used in conjunction with any and all other treatment modalities. Dr. Hudoba and Dr. Sha present techniques utilizing mind and soul power with special body postures that are very simple, powerful and effective. Positive results can be achieved relatively quickly. These simple healing practices can be easily taught to patients to support and enhance their healing process. The authors support their presentation with examples of their clinical research using the power of mind and soul in the healing of cancer and in the development of the human being. P
94 Unified Theory of Bivacuum, the Matter, Fields & Time. New Fundamental Bivacuum-Mediated Interaction and Paranormal Phenomena. Alex Kaivarainen <H2o@karelia.ru> (Dept. of Physics, University of Turku, Turku, Finland)
The coherent physical theory of Psi phenomena - like remote vision, telepathy, telekinesis, remote healing and clairvoyance - has been absent till now due to their high complexity and multilateral character. The mechanism of Bivacuum-mediated Psi phenomena proposed in this work is based on a number of stages of long-term effort, including the creation of several new theories: 1) a Unified theory of Bivacuum, of rest mass and charge origination, of the fusion of elementary particles (electrons, protons, neutrons, photons, etc.) from a certain number of sub-elementary fermions, and of the dynamic mechanism of their corpuscle-wave [C - W] duality (http://arxiv.org/abs/physics/0207027); 2) a Quantitative Hierarchic theory of liquids and solids, verified on examples of water and ice by a special, theory-based computer program (http://arxiv.org/abs/physics/0102086); 3) a Hierarchic model of consciousness: from mesoscopic Bose condensation (mBC) to synaptic reorganization, including the distant and nonlocal interaction between water clusters in microtubules (http://arxiv.org/abs/physics/0003045); 4) a theory of the primary Virtual Replica (VR) of any object and its multiplication. The Virtual Replica (VR) of the object, multiplying in space and evolving in time, VRM(r,t), can be subdivided into surface VR and volume VR. It represents a three-dimensional (3D) superposition of Bivacuum virtual standing pressure waves (VPWm) and virtual spin waves (VirSWm), modulated by the [C-W] pulsation of elementary particles and the translational and librational de Broglie waves of the molecules of the macroscopic object (http://arxiv.org/abs/physics/0207027). The infinite multiplication of the primary VR in space in the form of 3D packets of virtual standing waves, VRM(r), is a result of the interference of all-pervading external coherent basic reference waves - Bivacuum Virtual Pressure Waves (VPW+/-) and Virtual Spin Waves (VirSW) - with similar waves forming the primary VR. This phenomenon may account for the remote vision of psychics. The ability of a sufficiently complex system of VRM(r,t) to self-organize in nonequilibrium conditions makes possible the multiplication of VR not only in space but also in time, in both time directions - positive (evolution) and negative (devolution). The feedback reaction between the most probable/stable VRM(t) and the nervous system of the psychic, including the visual centers of the brain, can be responsible for clairvoyance; 5) a theory of nonlocal Virtual Guides (VirG) of spin, momentum and energy, representing virtual microtubules with the properties of a quasi one-dimensional virtual Bose condensate, constructed from 'head-to-tail' polymerized Bivacuum bosons (BVB) or Cooper pairs of Bivacuum fermions (BVF+BVF) with opposite spin. The bundles of VirG, connecting coherent nuclei of atoms of the Sender (S) and Receiver (R) in a state of mesoscopic Bose condensation, as well as the nonlocal component of VRM(r,t), determined by the interference pattern of Virtual Spin Waves (VirSW), are responsible for nonlocal interactions like telekinesis, telepathy and remote healing; 6) a theory of Bivacuum Mediated Interaction (BMI) as a new fundamental interaction due to the superposition of the Virtual Replicas of Sender and Receiver, because of the VRM(r,t) mechanism, and the connection of the remote coherent nucleons with opposite spins via VirG bundles. For example, VirG may connect the nucleons of water molecules composing coherent clusters in remote microtubules of the same or different 'tuned' organisms. It is precisely BMI that is responsible for macroscopic nonlocal interaction and different psi phenomena. 
The system [S + R] should be in a nonequilibrium state for interaction. The correctness of our approach follows from its ability to explain a lot of unconventional experimental data, like Kozyrev's, remote genetic transmutation, remote vision, mind-matter interaction, etc., without contradictions with the fundamental laws of nature. For details see: http://arxiv.org/abs/physics/0103031. P
95 Sequences of combinations of energy levels that describe instances of self and invoke a current instance of self Iwama Kenzo <iwama@whatisthis.co.jp> (z_a corp., Hirakata, Osaka, Japan)
This paper describes a summary of a robotic program, and puts forth a hypothesis about brain structure by drawing hints from the robotic program as well as from psychophysical results. The robotic program has the following functions: 1) forming sequences of assemblies of components in such a way that the sequences of assemblies of components match inputs from its outside world, 2) keeping and retrieving sequences of assemblies of components into and out of its memory, 3) generalization, and 4) specialization. The generalization process finds common features and relations among various cases of the sequences, and the specialization process makes generalized sequences match a new instance of inputs. The paper explains that the robotic program acquires concepts about its world; the program describes the concepts in sequences of assemblies of components. Our hypothesis of a brain structure is the following: The brain forms sequences of combinations of energy levels. Combinations of energy levels are like E1 + E2 = E = E3 + E4 + E5. When a brain receives inputs from its outside, including motor activities, the energy generated by the inputs changes molecular fine structures and their energy levels. Combinations of changed energy levels make quantum entanglements occur and energy flow. Molecular (and biological) changes of a somewhat larger scale (Hebbian learning level) are invoked when the energy flow does not go further. The molecular changes of a somewhat larger scale make the energy flow further and do not occur again when the brain receives the same inputs the next time, since the changed molecular structures become a path for the energy flow invoked by the same inputs. Thus the molecular changes of a somewhat larger scale encapsulate the changes in the molecular fine structures. The quantum entanglements with molecular structural changes form paths of energy flow, and this explains the memory function of the brain. After a large number of combinations of energy levels are encapsulated, Combinations of Energy Levels that are Common to various cases (CELC) are invoked when entanglements occur upon receiving inputs. Energy kept in the combinations of energy levels (CELC) generates molecular changes of a somewhat larger scale and encapsulates the combinations of energy levels (CELC) in the same way as described above. The time sequence in the inputs is also represented in time-dependent quantum entanglements among combinations of energy levels encapsulated by molecular changes of a somewhat larger scale. Sequences of combinations of energy levels match sequences of energy levels invoked by sequences of inputs, but the time scales are different from those of the inputs. Combinations of energy levels represent roughly two types of properties: one type represents those specific to certain inputs (including motor activities), and the other type (or CELC) represents generalized properties. Given a set of new inputs at time T, quantum entanglements occur among the encapsulated energy levels (both specific and generalized) as well as the energy levels of the working area of the brain. Temporary entanglements among combinations of energy levels in the working area match the new inputs (specialization), and the next sequence describes the inputs that the brain will probably receive at time T + delta T. Entanglements that describe the probable next inputs generate motor activities if no inputs are given from its outside world at time T + delta T. 
Since quantum entanglements among combinations of energy levels encapsulated represent past and generalized activities, the past and generalized activities make current motors active. Then one can claim that consciousness occurs because past and very general activities described in the combinations of the energy levels invoke activities in a working area that generate a current motor activity. In other words, a described self invokes an instance of self at the next moment. P
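As a purely illustrative aside, the two symbolic operations named in functions 3) and 4) of the abstract above can be sketched in a few lines of code. This is a toy reconstruction, not the author's program: the component names, the wildcard convention and the positional matching are invented here for illustration only.

```python
# Toy, hypothetical sketch of "generalization" and "specialization" over
# sequences of assemblies of components. All names are invented.

def generalize(sequences):
    """Keep elements common to all stored sequences at each position;
    replace position-specific elements with a wildcard '*'."""
    return [
        elems[0] if len(set(elems)) == 1 else "*"
        for elems in zip(*sequences)
    ]

def specialize(template, new_input):
    """Fill the wildcards of a generalized sequence with the corresponding
    elements of a new instance of inputs."""
    return [
        new if tmpl == "*" else tmpl
        for tmpl, new in zip(template, new_input)
    ]

# Two remembered cases of "sequences of assemblies of components".
case_a = ["reach", "grasp-cup", "lift"]
case_b = ["reach", "grasp-ball", "lift"]

template = generalize([case_a, case_b])                     # ['reach', '*', 'lift']
new_case = specialize(template, ["reach", "grasp-pen", "lift"])
print(template, new_case)
```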
96 Why I’m not an “Orch OR”ian? Mohammadreza (Shahram) Khoshbin-e-Khoshnazar <khoshbin@talif.sch.ir> ( , Tehran, Iran)
In my opinion, the “Orch OR” model 1) violates conservation of energy and 2) does not match experience. 1) Let us look at the following problem: just after childbirth, a mammal can recognize her young; a human mother, however, cannot. Actually, she accepts any infant as her child! If a mother looks at a “false” infant, she will still feel a “false” subjective experience. Please note that this situation is more complex than previously assumed. “Orch OR” can solve one part of this problem: there are zillions of universes for humans, the number of possible space-time configurations is enormous, and so the number of combinations of states is quite large. These choices, for a human, can be thought of as consciousness. Notice, however, that there is only one real universe, and all other possible universes are false universes. A false (virtual) universe is allowed by the uncertainty principle and, like virtual particles, exists for only a very short time. But if a mother can create a virtual universe, this violates the law of conservation of energy. For a mammal for which consciousness is meaningless, there is no conservation-of-energy problem, since all of the parallel universes are the same (and actual). 2) The “Orch OR” model faces at least two important obstacles: first, quantum computation requires isolation (protection from decoherence), and second, it is unclear how a quantum state isolated within individual neurons could extend across membranes. To overcome the first problem, it assumes that acetylcholine binding to muscarinic receptors acts through second messengers to phosphorylate MAP-2, thereby decoupling microtubules from the outside environment; to overcome the second problem, it assumes that the quantum state or field could extend across membranes by quantum tunneling across gap junctions. Therefore, if we block muscarinic receptors (with atropine) or impair gap junctions, we should expect abnormalities in cognitive behavior. I have not checked the first idea, but in 2001 Guldengel et al. produced a mouse with no gap junctions yet apparently normal behavior. In addition, in the X chromosome-linked form of Charcot-Marie-Tooth disease, mutations in one of the connexin genes (connexin 32) prevent this connexin from forming functional gap junction channels; however, apparently, there are no reported abnormalities in cognitive behavior. P
97 An Operational Treatment of Mind as Physical Information: Conceptual Analogies with Quantum Mechanics Sean Lee <seanlee@bu.edu> (Office of Technology Development, Boston University, Boston, MA)
A novel approach to consciousness as an operationally definable natural phenomenon within the framework of physical information is explored. Any meaningful connection of consciousness to the physical requires an unambiguous mapping of a space of subjective states onto information-bearing elements of a physical theory, independently of the former's final ontological, causal and semantic status. At the same time, any such operational definition requires, by the definition of the phenomenon in question, that the mapping be performed by the experiencing subject. I argue that such a 'self-measuring' act leads unavoidably to an 'uncertainty principle' that is analogous in some intriguing ways to Heisenberg's principle for quantum mechanics. If we choose to ignore this uncertainty, then with the help of a thought experiment we can define what I call the 'r-equivalence' classes and 'E theory' of consciousness; essentially addressing what Chalmers refers to as the Easy problem. If we instead address this uncertainty and seek an 'H theory' of the Hard problem, we are led to an account of subjectivity that exhibits two features strongly reminiscent of quantum theory: incomputability (randomness) and what we may think of as violations of local reality. While no direct connection between consciousness and quantum theory is postulated, the conceptual analogy may be made quite deep, perhaps with utility towards a future theory of consciousness. P
98 How Quantum Entanglement Provides Evidence for the Existence of Phenomenal Consciousness Reza Maleeh, Afshin Shafiee; Mariano Bianca <smaleeh@uos.de> (Cognitive Science, University of Osnabrueck, Osnabrueck, Niedersachsen, Germany)
We believe that the rise of consciousness has to do with the concept “information.” So, we discuss a new concept of information, called “pragmatic information,” in a way put forward by Roederer (2005) according to which information and information processing are exclusive attributes of living systems, related to the very definition of life. Thus, in the abiotic world, according to this attitude, information plays no role; physical interactions just happen; they are driven by direct energy exchange between the interacting parts and do not require any operations of information processing. Informational systems are open, that is, the energy needed for information processing must be provided by another source other than sender or recipient. We show that such a characteristic has to do with a specific interpretation of “intentionality” which, again, is the exclusive attribute of living systems. We use the concept of pragmatic information to explain hypothetically many phenomena such as perception, long and short term memory, thinking, imagination and anticipation as well as what happens in the living cells. But there is more to this. We argue that when the complexity of a system exceeds a certain minimum degree, in certain conditions, to be discussed in detail, the mechanical and non-mechanical aspects of information are realized. The former happens with matter and energy exchange while the latter does not. The existence of the latter, to be considered as the prototype giving rise to phenomenal consciousness, can be characterized by preparing the entangled states of quantum particles. The idea is that the correlation between two entangled particles shows the intention of a living being who prepares an entangled state with an informational content which cannot be reduced to separated fragments. In this sense, we say that two entangled particles have an information-based relation without energy-matter signaling. This is a non-mechanical relation between the remote components of a composite system which is due to a non-reducible information content prepared by a purposeful setup provider. So, planned systems (to be called derived informational systems versus original ones) will be categorized as informational systems (mechanically or non-mechanically) just as they show the intention of a living system. To sum up, the phenomenon “entanglement” can be viewed from two different aspects: Firstly, the aspect which deals with the causal part of entanglement. From such a perspective, entanglement is, at least in principle, causally explainable in a contextual manner. Secondly, the aspect which has to do with the intentionality of the setup organizer. The purpose of the one who prepares an entangled state makes the phenomenon informational. Such a phenomenon will not happen in nature, because it needs an intentional living agent to separate the particles in a space-like manner. So, if we accept that there exists a non-mechanical informational relation between two entangled particles, it would be just because of the intention of a setup organizer, otherwise it could also have happened in nature. The existence of such a non-mechanical informational relation can be considered as evidence for the existence of phenomenal consciousness. P
99 Model of Mind & Matter: The Second Person Marty Monteiro <j.monteiro1@chello.nl> (Fnd.Int'l.Inst.Interdisc.Integr., Amsterdam, Netherlands)
A general social model of the human being is launched, focusing on the relation between mind and body. In constructing the human being’s mental and bodily architecture, the other human being is incorporated. From the point of view of the 1st person “I” and the 2nd person “You”, the model pertains to the physical, mental and social process levels. From a growth-dynamic or evolutionary point of view, physical reality is axiomatic for deducing the mental and social process levels. “Interaction” is the key concept modelling all process levels of human functioning. The model is built up in the reference frame of two thinking tools, namely ‘finality’ as well as ‘causality’. The design of the mind-matter model centres on the phenomenon of 'interaction' between object systems and between subject systems. Interaction is a simultaneous occurrence between events on the physical, mental and social levels. Applying a rule to deduce the mental and social process levels from the physical level departs from the question of ‘how’ the processes emerge and how their relationships to each other are established. In the reference frame of finality and causality, the process architecture on all levels provides a general basic social model of the human being. From an integrated point of view of the relation between the 1st and 2nd person, an attempt is made to unveil the mechanism of mind and matter. Recording of, and acting upon, environmental events of objects/subjects operates on the physical, mental, and social levels. The physical level of stimuli is basic for the mental level. Through stimulus interaction, mental cognition emerges. Cognition is primarily socially directed, obtaining feedback through perception that extracts information from objects and subjects. The social level of recorded norms, in particular, is prerequisite for the formation and development of personality (long-term memory) and for the emergence of new values from personality for building up a culture. Attitude (short-term memory) mediates the attuning of communication and the matching of values to create culture. Personality and culture are the end-results of human functioning. Modelling the architecture on the physical, mental, and social levels, and the formation of personality and attitude-mediated culture, gives an answer to the question of ‘how’ these processes and systems emerge. It says nothing concerning the question of ‘why’ the human being performs his behaviour in that specific way. This issue refers to the household of energy flow in the reference frame of relative 'shortage-surplus'. From an imbalanced state, an 'energy transaction' originates within a person, in order to bring about an energy balance in the framework of other objects/subjects. Through the exchange of psychophysical matter/energy of 'cost-benefit', the subjective experience of ‘pain-pleasure’ takes place through 'energy transformation' – an operation of ‘fusion-fission’ between mind and matter. The hierarchical build-up of personality and attitude-mediated culture is, respectively, a contra-evolutionary and an evolutionary development. This development of personality towards mentalization on the one hand, and the materialization of common culture on the other, is not a linear event but a discontinuous state transition. The human being is aware afterwards of the results of these transformational operations, but he is not able to know what happens within the ‘gap’, the discontinuous transitional evolution of the mind as well as matter.
Therefore, personality development and natural/cultural evolution raise the ultimate problem of whether or not a universal force exists as a "unifying-creating force". P
100 Spreading culture on quantum entanglement and consciousness Gloria Nobili, Teodorani Massimo <gloria.nobili@fastwebnet.it> (Physics, University of Bologna, Castel San Pietro Terme, Italy)
The subject of “quantum entanglement” in general doesn’t seem to be particularly considered in Europe in the form of popularizing books or of educational physics projects. These authors have started to spread out this kind of scientific culture in both forms, including popularizing seminars too. Concerning the entanglement phenomenon, recently, new thought experiments have been outlined, new laboratory results have come out in the form of real discoveries in quantum optics, new studies on “bio-entanglement” and “global consciousness effects” have been carried out, and very sophisticated new ideas have been developed in the fields of quantum physics, biophysics, cosmology and epistemology. These authors intend to show their effort of diffusing widely this growing scientific knowledge. Beyond all this there is a long-term strategy aimed at inculcating new concepts in physics in order to trigger the interest of scholars at all levels, in that which is probably the most innovative and interdisciplinary subject of the human knowledge of this new millennium. In order to accomplish this difficult task, these authors are acting in the following ways: A) explain, using intuitive examples, the basic physical mechanism (1, 2, 3, 4) of entanglement at the particle level; B) explain all the possible ways in which entanglement may involve quantum or “quantum-like” non-local effects occurring also in the macro scale (2, 3, 4) represented by biological (DNA bio-computing, microtubules), psychophysical (consciousness, synchronicity and Psi effects), astrobiological (neural spin entanglement), and cosmological (Bit Bang) environments; C) study and spread the scientific knowledge concerning alternative ways for the Search for Extraterrestrial Intelligence (5) and – specifically – prepare research projects regarding possible non-local aspects of SETI (NLSETI) and their applicability (4) on the basis of our physics knowledge and technology; D) prepare extensive plans for post-graduate courses in physics (6) with a special address to “anomalistic physics”, brain biophysics and mathematics; E) train persons and students to reach optimal concentration states – by using well experimented techniques – in order to permit them to exploit at the maximum level their intellectual and consciousness potential. All of these educational and promotional actions are aimed at training people in understanding the strict link existing between physics and consciousness in all of its aspects, in the light of a probable general phenomenon that occurs at all scales by involving (micro and macro) matter, mind and consciousness. A strategy plan containing in a self-consistent way all of these aspects will be schematically illustrated. REFERENCES. BOOKS ( http://www.macrolibrarsi.it/autore.php?aid=4428 ) 1) Teodorani, M. (2006) “Bohm – La Fisica dell’Infinito”. MACRO Edizioni. 2) Teodorani, M. (2006) “Sincronicità”. MACRO Edizioni. 3) Teodorani, M. (2007) “Teletrasporto”. MACRO Edizioni. 4) Teodorani, M. (2007) “Entanglement”. MACRO Edizioni. ARTICLES 5) Teodorani M. (2006) “An Alternative Method for the Scientific Search for Extraterrestrial Intelligent Life: ‘The Local SETI’”. In: J. Seckbach (ed.) “Life as We Know It”, Springer, COLE Books, Vol. 10, pp. 487-503. 6) Teodorani, M. & Nobili, G. (2006) “Project for the Institution of an Advanced Course in Physics” (in Italian). E-School of Physics and Mathematics by Dr. Arrigo Amadori. http://www.arrigoamadori.com/lezioni/CorsiEConferenze/MasterFisica/Master_Fisica_MTGN_e-school.pdf P
101 The Golden Section: Nature's Greatest Secret Scott Olsen <olsens@cf.edu> (Philosophy & Comparative Religion, Central Florida Community College, Ocala, Florida)
"Resonance and Consciousness: buddhas, shamans and microtubules" -- Consciousness is one of the great mysteries of humanity. Like life itself, it may result from a resonance between the Divine (whole) and nature (the parts) exquisitely tuned by the amazing fractal properties of the golden ratio, allowing for more inclusive states of awareness. Penrose and Hameroff provocatively suggest that consciousness emerges through the quantum mechanics of microtubules. It is therefore a real possibility that consciousness may reside in the geometry itself, in the golden ratios of DNA, microtubules, and clathrins. Microtubules are composed of 13 tubulin, and exhibit 8:5 phyllotaxis. Clathrins, located at the tips of microtubules, are truncated icosahedra, abuzz with golden ratios. Perhaps they are the geometric jewels seen near the mouths of serpents by shamans in deep sacramental states of consciousness. Even DNA exhibits a PHI (golden ratio) resonance, in its 34:21 angstrom Fibonacci ratio, and the cross-section through a molecule is decagonal (a double-pentagon with associated golden ratios). Buddha said, "The body is an eye." In a state of PHI-induced quantum coherence, one may experience samadhi, cosmic conscious identification with the awareness of the Universe Itself. P
102 Data reserve and recreating the memory in brain and the experimental witnesses suggesting it Mojtaba Omid <mjtb_omid@yahoo.com> (Tabriz, Iran)
In this hypothesis, the digital system is first introduced, and then the concept of zero and one (existence or non-existence) as the digital base is generalized to two kinds of electromagnetic wave spectra from the body, especially the brain and its cortex. It is indicated that the radio waves of the brain can be considered zero, and the lack of radio waves, as the result of their replacement by infrared waves related to the metabolism and high temperature of the brain, one; since, according to the rules of special relativity and the difference in the speed of light (EM waves) in different media, in a certain spectrum of the radio waves the passing of time equals zero, but it is not zero for the infrared waves. The data carried from the sensory organs to the brain and cortex are thus enciphered and stored as zero, the speed of time passing increases as the result of the reciprocity of the two radio and infrared spectra, and the recreation of these codes is accomplished according to the same processes described in the paper. Finally, in the second part of the article some evidence is presented through images provided by PET and fMRI equipment from the brains of patients with different mental and functional problems. In these patients the normal metabolism of the brain is disrupted, which disorders the 0 and 1 system that forms the codes and their storage and recreation; these experimental observations prove the hypothesis. P
103 Embryological embodiment of protopsychism and Wave Function Jean Ratte <jean.ratte@holoener.com> (Centre Holoénergétique, Montreal, Quebec, Canada)
According to Goethe, the human body is the most sensitive tool for detecting subtle processes that technological devices cannot. This is still true 200 years later. Despite all the interesting data, neurobiological imagery is invasive and alters the subtle aspects of the mind process. For the last 20 years we have used a clinical, in vivo, non-invasive procedure that bridges this incommensurability between physical correlates and consciousness. This Vascular Semantic Resonance (VSR), a 3D spectrometer, brings the quantum microtubular level to macroscopic clinical detection, and shows symmetry or entangled resonance between map and territory, between a molecule and the entangled memory or name of the molecule, between syntax and semantics. The cardiovascular network is a harmonic oscillator manifold bringing the quantum microtubular level to macroscopic clinical detection; the vascular system and the microtubule are coupled harmonic oscillators (Abstract #330 Tucson 2 and #977 Tucson 3). There is amplification of microtubule and receptor-channel vibrations by the cardiovascular system, which works as a resonant cavity, an interferometer, a multiplexing waveguide, a manifold (see Roger Penrose, The Road to Reality). Resonance between micro-oscillators such as pericorporeal pigments and cellular pigments is the physical basis of VSR. The vibratory equivalence of micro-oscillator pigments and ideograms, of phonemes and morphemes, is the biophysical basis of VSR, a complex biological spectrogram, accessing directly the meaning of signs, the qualia of quanta, resonating not only to the molecule but also to the entangled memory or functional signature of the molecule, detecting symmetry between Implicate and Explicate order. This method shows the human body as a radar, an interferometer not only for EM waves but for the 4 fundamental interactions in their matter and antimatter aspects. VSR shows a vibratory parallelism between embryological stages and the 4 fundamental interactions. Operative or vibratory identity is not ontological identity. The first, undifferentiated stage, the Morula, resonates to gravitation. The second stage, the Blastula, differentiates into ectoderm and endoderm, with polarization of space into inside and outside. The ectoderm resonates to the EM field. The endoderm, a polarization of anterior and posterior, resonates to the weak nuclear field. The third stage, the Gastrula, gives rise to the multiplexing mesoderm manifold, a polarization of time with bilateral symmetry, which resonates to the strong nuclear field. This clinical tool gives new insights into the puzzle of consciousness by showing a multilevel vibratory commonality between protopsychism, wave function, non-locality, curvature tensor, gravitational field and degeneracy. These concepts resonate like the undifferentiated or prelogical stage, the Morula. There is a vibratory scale resonance between the quantum level and the molecular, cellular and organism levels. DNA shares a vibratory identity with the wave function (W.F.) or protopsychism. Transcriptase implements a W.F. collapse on RNA. Reverse transcriptase brings back the DNA wave function, or degeneracy. Gamete proliferation vibrates like the W.F. and fecundation like collapse of the W.F. Potentialization, or the non-local, vibrates like the W.F., and actualization, or the local, vibrates like collapse of the W.F. These clinical results indicate that the W.F. is not only a mathematical device but a true, subtle biophysical process like protopsychism.
The understanding of the vibratory commonality between quanta and qualia, between cosmogenesis and ontogenesis, between matter and antimatter fields such as Vitiello’s double, requires a quantum leap from «neuroectodermic» geometry to «mesodermic» Riemann hypergeometry (see Roger Penrose). The Morula is prelogic. The ectoderm is the logic of non-contradiction: wave or particle. The endoderm is the logic of contradiction: wave and particle. The mesoderm is the logic of crossed double contradiction: hypersymmetry of matter-antimatter. P
104 Life and Consciousness Michael Shatnev <mshatnev@yahoo.com> (Akhiezer Institute for Theoretical Physics, NSC KIPT, Kharkov, Ukraine)
We first consider the observational problem in quantum mechanics and the notion of complementarity. Then, following Niels Bohr, we discuss the complementary approach to problems of quantum mechanics, biology, sociology, and psychology in more detail. From a general philosophical perspective, it is very important that, as regards analysis and synthesis in these fields of knowledge, we are confronted with situations reminding us of the situation in quantum physics. Although, in the present case, we can be concerned only with more or less fitting analogies, we can hardly escape the conviction that in the facts which are revealed to us by the quantum theory, and which lie outside the domain of our ordinary forms of perception, we have acquired a means of elucidating general philosophical problems. Next we briefly argue that quantum mechanics is not complete and therefore may be completed. For this purpose a new mathematical framework for physics is needed, and we try to show how to find it. Finally, using these approaches, together with Deutsch’s, Dyson’s and Penrose’s attitudes, we show how the notions of life and consciousness are connected. P
105 Visions as special form of an altered state of consciousness Josiah Shindi <josiahshindi@yahoo.co.uk> (Psychology, Benue State University, Makurdi, Nigeria, Makurdi, Benue, Nigeria)
The paper reviews Biblical accounts of visions. Several persons who claimed to have seen visions in the last five years were interviewed using structured questions. Results indicate that there are some similarities between the visions reported in the Bible and those of the participants in the study. Precipitating and exacerbating factors in visions are discussed together with the visions’ content. The evidence points to the notion of a special altered state of consciousness during the period of visions, specifically during the hypnagogic and hypnopompic states. P
106 Metacognitive awareness: adopting new tasks for the remediation program for dyslexics Malini Shukla, Jaison A. Manjaly <malini.shukla1@gmail.com> (Centre for Behavioral and Cognitive Sciences , University of Allahabad, Allahabad, Uttar Pradesh, India)
In this paper we aim to evaluate the role of metacognitive awareness in remediation programs adopted for dyslexics. The remediation program PREP (PASS Reading Enhancement Program) focuses on cognitive remediation of reading problems by improving the information processing strategies that underlie reading, while at the same time avoiding the direct teaching of word reading skills. It also includes a self-comparison by children with dyslexia between their training course experience and the new strategies they employed after becoming consciously aware of their deficits. It was observed that dyslexics were using self-learning strategies which motivated them towards independent learning. These new self-learning techniques were adopted in relation to the metacognitive awareness of their disability and also initiated a comparative assessment of their disabilities with their peer group. This shows that PREP has helped them in enhancing their metacognitive skills by enabling them to control and manipulate cognitive processes, giving them knowledge of the regulatory skills, and showing how to utilize these skills on the basis of being consciously aware of their deficit in reading. Thus their ability to monitor their own performance has given an impetus to their overall performance. Research (Tuner and Chapman, 1996) has shown that metacognitive regulation improves performance in a number of ways, through better use of attentional resources, better use of existing strategies and a greater awareness of comprehension breakdown. The remediation program showed that formation of self helps dyslexics to perform better. The awareness that "I am disabled" encourages them to employ better learning techniques. In the light of these observations, we propose that the current structure of the remediation program PREP can be improved by including more tasks to enhance metacognitive awareness and tasks based on this newly evolved metacognitive awareness. We argue that the addition of these new tasks can improve the remediation program PREP. P
107 A New Approach to the Problems of Consciousness & Mind Avtar Singh <avsingh@alum.mit.edu> (Center for Horizons Research, Cupertino, CA)
Consciousness issues within the context of modern neuroscience and related problems in contemporary physics are addressed. Current theories of consciousness look towards information theory, information integration theory, complexity theory, neural Darwinism, reentrant neural networks, quantum holism, etc. to provide some hints. These theories fall short of the rigor and quantitative measures that are normally required of a scientific theory. The most perplexing philosophical conundrums of the "hard problem" and "qualia" that afflict modern neuroscience can be resolved by a deeper understanding of the physics of the very small (below the Planck scale) and the very large (at the boundaries of the universe). The modern philosophy of mind proposes that consciousness is a higher-order mental state that monitors the first or base state possibly generated by the brain. This paper builds upon early approaches to consciousness wherein it was proposed that the state of self-consciousness is not a separate, higher-order consciousness of a conscious experience, but represents a continuum of the lower-order states generated by the brain. In such a larger context, many of the mysteries of physics and neuroscience can be explained with an integrated model. This paper proposes such an integrated model, which provides a direct relationship between the physics concepts of space, time, mass, and energy, and the consciousness concepts of spontaneity and awareness. The observed spontaneity in natural phenomena, which include the human mind, is modeled as the higher-order or universal consciousness. The integrated model explains the recent observations of the universe and demonstrates that the higher-order consciousness is a universal rather than a biologically induced phenomenon. The neurobiological mind is shown to represent a subset of the complementary states of the prevailing higher-order universal consciousness in the form of the continuum of space-time-mass-energy. The proposed approach integrates spontaneity or consciousness into the existing and widely accepted theories of science to provide a cohesive model of the universe as one wholesome continuum. The model represents the essential reality of different levels and dimensions of experience, both implicit and explicit, consciousness and matter, to be seen as equivalent and complementary states of the same mass-energy known as the zero-point energy. The universal consciousness is shown to represent spontaneous kinetic energy of the extreme kind, which is the ultimate complementary state wherein everything in the universe is experienced as the zero-point energy field in a fully dilated space and time continuum. P
108 Does attention mediate the apparent continuity of consciousness?: A change detection perspective Meera Mary Sunny, Jaison A. Manjaly <meeramary1@gmail.com> (Center for Behavioural and Cognitive Sciences, Allahabad University, Allahabad, Uttar Pradesh, India)
Dennett (1991) argues that most theories of mind, irrespective of their ontological commitment, presuppose a Cartesian theater and continuity of consciousness. He claims there is no such theater where everything is re-presented. In other words, there is no boundary line that decides the onset of consciousness. He proposes the multiple drafts model of consciousness as an alternative to the Cartesian theater and to minimize the problem of continuity of consciousness. The multiple drafts model claims to show that the apparent continuity of consciousness results from the brain's ignoring of irrelevant or unavailable information, and not from 'filling in' as suggested by other theorists. This paper shows the inherent problems with this claim. We argue that if one can convincingly claim that attention is a continuous process, it can also be shown that the apparent continuity of consciousness results from this feature of attention. Dennett apparently downplays the role of attention in this unification. He looks at consciousness as a continuously edited draft, without a final published material, and regards this as the dynamicity of consciousness. This paper shows the dynamicity of attention, and eventually the possibility that the apparent continuity of consciousness is a feature of the underlying attentional mechanisms, using results from an experiment that makes use of the change detection paradigm with hierarchical stimuli. P
109 Implicit activities in auditory magnetoencephalograpy (MEG) Yoshi Tamori, Noriyuki Tomita <yo@his.kanazawa-it.ac.jp> (Human Information System Laboratory, Kanazawa Institute of Technology, Hakusan-shi, Ishikawa, JAPAN)
It is well known that unimodal peak responses exist only at the beginning of MEG waves. Despite the presence of a continuous base tone (CBT: A4 = 440 Hz pure tone), the magnetoencephalographic (MEG) responses appear to fade into the background noise after the onset peaks (N1m and P2m). Even if introspective perception continues after the onset responses, the amplitude of the MEG response to secondary tone stimuli generally decreases. It is unknown what kind of neural processing exists in such a silent activity period, in the sense that the MEG activity has gone. Although such a silent activity period might reflect perceptual acclimatization, a corresponding representation or activity should exist in the brain as long as introspective perception persists. In order to investigate the features of such a silent activity period, we added to the continuous base tone (or switched from the continuous base tone to) an extra pure tone of another frequency five hundred milliseconds after the onset of the continuous base tone. All the sound stimuli in the present experiment were presented to the left ear. In the present study, a unimodal response appeared in the MEG waves one hundred milliseconds after the onset of the secondarily applied tone. This peak is considered to be the N100m counterpart of the secondarily applied tone. Current dipoles were estimated by an algorithm based on Sarvas' law. The GOF (goodness of fit) values for the adopted dipoles were larger than 95%. All the estimated current dipoles (ECDs) for N100m are located in the right primary auditory cortex (A1). The amplitude of the secondary peaks during the silent activity period appeared to depend on the distance, in terms of wave frequency, of the secondary tone from the continuous base tone. In the present study, the cosine component of the fixed A1 dipole for the overlapped secondary tone decreased as its frequency approached that of the CBT. We chose several frequencies (e.g. A4# = 466 Hz; D4# = 311 Hz; G4# = 415 Hz) for the secondarily applied tones. The latency for the secondarily applied A4# sound was always larger than that for the other secondary sounds. Our whole-head MEG system consists of 160 axial gradiometers with SQUID sensors. The gradiometers have a 15 mm diameter and a 50 mm baseline, and are arranged in a radial manner around the helmet. All the subjects are right-handed and can discriminate the frequency difference of the presented sounds. The loudness of the presented sounds was calibrated/flattened to 70 dB(A), taking into account the perceptual loudness curve in ISO 226:2003. Noise contained in the measured magnetic fields was reduced by averaging over several sessions. These results suggest that some kind of implicit activity exists in the silent activity period of MEG responses. This frequency-dependent depression could be qualitatively explained by a tonotopically aligned inhibitory neural network model. The underlying mechanisms of the implicit activity, however, remain unknown. The relation of the frequency-dependent depression to consonance will be discussed in the presentation. P
110 Anomalous light phenomena vs bioelectric brain activity Massimo Teodorani, Gloria Nobili <mlteodorani@alice.it> (Cesena (FC), Italy)
111 Proto-experiences and Subjective Experiences Ram Lakhan Pandey Vimal <rlpvimal@yahoo.co.in> (Neuroscience, Vision Research Institute, Acton, MA)
We present an argument for proto-experiences without extending physics. We define elemental proto-experiences (PEs) as the properties of elemental interactions. For example, a negative charge experiences attraction towards a positive charge; this "experience" is defined to be the PE of opposite charges during interaction. Similarly, PEs related to the four fundamental interactions (gravitation, electromagnetism, weak, and strong) can be defined. Thus we introduce experiential entities in elements in terms of the characteristics of elemental interactions, which are already present in physics. We are simply interpreting these properties of interaction as PEs. One could argue that there is not a shred of evidence for "what it is like" to be an electron being "attracted" to (say) a single proton. However, it is unclear what else an electron would "feel" towards a proton other than a force of attraction, and we define this as the PE of an electron for a proton. The experience (such as attraction/repulsion) of ions that rush across the neural membrane during spike generation is called the neural-PE. Neural-PEs interact in a neural-net, and neural-net PEs somehow emerge and become embedded in the neural-net during development and sensorimotor tuning with external stimuli. A specific subjective experience (SE), for example redness, is selected out of the embedded neural-net color PEs in the visual V4/V8 red-green neural-net when long-wavelength light is presented to our visual system. Similarly, when signals related to neural-PEs travel along the auditory pathway and interact in the auditory neural-net, auditory SEs emerge. Thus, the emergence of a specific SE depends on the context, the stimuli, and the specific neural-net. In what way is our hypothesis different from the straightforward physicalist view (that SEs are entities that emerge in neural-nets from the interaction of non-experiential physical entities, such as neural signals)? [That view has led to the explanatory gap or 'hard problem'.] The difference is that we acknowledge the existence of experiential entities in physics, where the emergence of SE from an experiential entity such as a PE is less 'brute' than that from non-experiential matter. In what way is our hypothesis different from panpsychism [1]? Panpsychism requires extending physics by adding an experiential property to elements, which lacks evidence. Our hypothesis does not require extending physics; it simply interprets the existing and well-accepted properties of elemental interactions, which have a significant amount of evidence and are the building blocks of the physical universe. Our hypothesis implies that non-experiential matter (mass, charge, and space-time) and the related elemental proto-experiences (PEs) co-evolved and co-developed, leading to neural-nets and the related PEs, respectively. Furthermore, there could be three types of explanatory gaps, namely the gap between (i) the SE and the object of the SE, (ii) the SE and the subject of the SE, and (iii) subject and object, where 'object' is an internal representation. The hypothesis is that the SE, its subject and its object are the same neural activity in a neural-net, where a neural activity is an experiential entity in our framework. These gaps are actually closed if the above hypothesis is not rejected; this trio appears distinct in our daily lives, but this is a sort of illusion because internally they are the same neural activity; when information related to the 'subject experiencing objects' is projected outside, objects in 3D appear with respect to the reference subject.
Moreover, SE cannot be objectively measured; it requires subjective research; however, the relative effect of SEs, such as that in color discrimination, can be measured objectively. Our hypothesis (a) contributes to bridging the explanatory gaps because experiential entities are introduced, (b) minimizes the problem of causation because our framework is within the scope of physicalism, and (c) does not require extending physics. P
112 Disorders of consciousness in schizophrenia: a reverse look at consciousness nature Serge Volovyk <sv3@duke.edu> (Department of Medicine, Duke University Medical Center, Durham, North Carolina)
Schizophrenia and consciousness represent the most challenging, mysterious and enigmatic interwoven phenomena. Although the clinical picture of schizophrenia since Kraepelin has traditionally been assumed to spare consciousness, recent neurocognitive research has shown that certain functions of consciousness (sense of agency, self, memory, executive functions, insight, monitoring) can be impaired in schizophrenia and that this may account for symptoms such as depersonalisation, hallucinations, self-fragmentation, disorders of memory, delusions of control, etc. Cognitive deficits are considered specific symptom domains of schizophrenia. These traits, unrelated at first glance, have a common molecular-dynamic nature and mechanisms. Cognitive impairments specific to schizophrenia, in the generalized sense of information processing, reflect a continuum of subtle dynamic molecular pathways, ranging from perturbation of free-radical redox spatiotemporal homeodynamics with changing hemispheric biochemical dominance/accentuation, including alteration of nitric oxide-superoxide complementarity, responsive redox signaling networks, concomitant alterations in gene expression and transcription, and redox control of neurotransmission patterns, synaptic circuitry and plasticity, to changes in neurogenesis and functional hemispheric asymmetry. Free radicals, the primordial "sea" of life's origin, evolution and existence, induced by cosmic and terrestrial background radiation, are evolutionarily archetypal, ubiquitous, and omnipotent in the physiological-pathophysiological dichotomy of the brain / CNS. The dual immanent nature and functions of free radicals in the brain are based on their quantum-chemical dynamic charge-transfer / redox ambivalence (a spectrum of interactional nucleo-, electro-, and ambiphilicity); a corresponding spectrum of reactivity and selectivity; a subtle borderline norm-pathology dichotomy with a discontinuity threshold; physiological functional ambivalence and complementarity; and dynamic free-radical homeostasis. In this generalized framework, the globally stable average incidence rates of "core" schizophrenia (and of consciousness disorders as the opposite side of the phenomenon) at the molecular level may be considered a quantum-chemical stochastic phenomenon, originally based on perturbation of free-radical redox brain signaling networks and disorders of information processing, connected with the effects of the natural radiation background, seasonal variation in the geomagnetic field, and solar-cycle terrestrial activity superimposed upon the individual's immanent developmental trajectory. P
113 New quantum approach to qualia, consciousness and the brain. John Yates <uv@busi8.freeserve.co.uk> (London, United Kingdom)
In this paper I do not rule out the possibility of considering results such as those of Jahn, Walach, Radin and others, in the spirit that, unlike much early US scientific and technical opinion, I would not have effectively ruled out the possibility that the Wright Brothers had discovered aviation. At the same time such results are certainly not paramount in considerations at this time. My approach uses category theory and a McTaggart A series as well as the conventional B series effectively used by Deutsch, Bohm and Penrose. This sounds philosophically and physically more realistic, but at the present state of the art it may be required that the A series be a proper class. My theory will relatively easily link with any physically meaningful and duplicable NDE results which may be provided, for example, by NDE experiments like those of Fenwick and Grayson, and it has many other advantages. Dream precognition results and ESP are very much denied by sceptics and, on the whole, by physicists. On dreams I certainly have not obtained precognition as such, but have noted apparently peculiar effects not dissimilar in superficial appearance. In psychology it is necessary to remember that many conclusions have been drawn, and are repeatable, from work like that of Strogatz. I favour dynamical systems psychology somewhat along the lines of Lange, but requiring an A series philosophy. By adding some ideas due to Stickgold and Hobson, I have already obtained preliminary surprising results. Presently I am proceeding to look at a structure somewhat along the lines of the Sprott work on psychology. I believe that, through ignoring the McTaggart A series or trying to subsume the A series within the B series, important opportunities are being lost, and that early calls on quantum theory may be being made when complex system theory could be more directly appropriate. http://ttjohn.blogspot.com/ presents the entire blog to date, including more work than that required here. The simplest appreciation of the situation may be that the present approach contains a past, a present and a future without further ad hoc additions, and so in a sense exhibits qualities generally recognised as certainly existing in human consciousness but not obvious in theories which do not. Also it allows the existence of a God or Gods and free will (or indeed hypothetical gods or free will) within its bounds, though it does not insist on their existence a priori; in this sense it is more appropriate to consciousness theory than a conventional physics theory would be, which almost excludes these factors, or a theological theory, which a priori insists on them. The absence of the possibility of free will in a physical theory suggests solipsism or incompleteness rather than some disproof of free will, and this is carefully avoided with the present approach, which yet contains much mathematics including all of quantum theory and high energy physics, together with chaos and catastrophe theories where relevant. P
This article is part of the series Beyond Mendel: modeling in biology.
Review
Models in biology: ‘accurate descriptions of our pathetic thinking’
Jeremy Gunawardena
Author Affiliations
Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, USA
BMC Biology 2014, 12:29 doi:10.1186/1741-7007-12-29
Received: 3 February 2014
Published: 30 April 2014
© 2014 Gunawardena; licensee BioMed Central Ltd.
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders.
Keywords: Mathematical model; Predictive model; Fundamental physical laws; Phenomenology; Membrane-bounded compartment; T-cell receptor; Somitogenesis clock
The revenge of Erwin Chargaff
When I first came to biology from mathematics, I got used to being told that there was no place for mathematics in biology. Being a biological novice, I took these strictures at face value. In retrospect, they proved helpful because the skepticism encouraged me to let go of my mathematical past and to immerse myself in experiments. It was only later, through having to stand up in front of a class of eager students and say something profound (I co-teach Harvard’s introductory graduate course in Systems Biology), that I realized how grievously I had been misled. Biology has some of the finest examples of how quantitative modeling and measurement have been used to unravel the world around us [1,2]. The idea that such methods would not be used would have seemed bizarre to the biochemist Otto Warburg, the geneticist Thomas Hunt Morgan, the evolutionary biologist R. A. Fisher, the structural biologist Max Perutz, the stem-cell biologists Ernest McCulloch and James Till, the developmental biologist Conrad Waddington, the physiologist Arthur Guyton, the neuroscientists Alan Hodgkin and Andrew Huxley, the immunologist Niels Jerne, the pharmacologist James Black, the epidemiologist Ronald Ross, the ecologist Robert MacArthur and to others more or less well known.
Why is it that biologists have such an odd perception of their own discipline? I attribute this to two factors. The first is an important theme in systems biology [3,4]: the mean may not be representative of the distribution. Otto Warburg is a good example. In the eyes of his contemporaries, Warburg was an accomplished theorist: ‘to develop the mathematical analysis of the measurements required very exceptional experimental and theoretical skill’ [5]. Once Warburg had opened the door, however, it became easy for those who followed him to avoid acquiring the same skills. Of Warburg’s three assistants who won Nobel Prizes, one would not describe Hans Krebs or Hugo Theorell as ‘theoretically skilled’, although Otto Meyerhoff was certainly quantitative. On average, theoretical skills recede into the long tail of the distribution, out of sight of the conventional histories and textbooks. It is high time for a revisionist account of the history of biology to restore quantitative reasoning to its rightful place.
The second factor is the enormous success of molecular biology. This is ironic, for many of the instigators of that revolution were physicists: Erwin Schrödinger, Max Delbrück, Francis Crick, Leo Szilard, Seymour Benzer and Wally Gilbert. There was, in fact, a brief window, during the life of physicist George Gamow’s RNA Tie Club, when it was claimed, with poor judgment, that physics and information theory could work out the genetic code [6,7]. Erwin Chargaff, who first uncovered the complementarity of the A-T and G-C nucleotide pairs (Chargaff’s rules), was nominally a member of the club—his code name was lysine—but I doubt that he was taken in by such theoretical pretensions. He famously described the molecular biology of the time as ‘the practice of biochemistry without a license’ [8]. When Marshall Nirenberg and Heinrich Matthaei came out of nowhere to make the first crack in the genetic code [9], thereby showing that licensing was mandatory—one can just sense the smile on Chargaff’s face—the theorists of the day must have felt that the barbarians were at the gates of Rome. Molecular biology never recovered from this historic defeat of theory and there have been so many interesting genes to characterize since, it has never really needed to.
It is the culmination of molecular biology in the genome projects that has finally brought diminishing returns to the one gene, ten PhDs way of life. We now think we know most of the genes and the interesting question is no longer characterizing this or that gene but, rather, understanding how the various molecular components collectively give rise to phenotype and physiology. We call this systems biology. It is a very different enterprise. It has brought into biology an intrusion of aliens and concepts from physics, mathematics, engineering and computer science and a renewed interest in the role of quantitative reasoning and modeling, to which we now turn.
Forward and reverse modeling
We can distinguish two kinds of modeling strategy in the current literature. We can call them forward and reverse modeling. Reverse modeling starts from experimental data and seeks potential causalities suggested by the correlations in the data, captured in the structure of a mathematical model. Forward modeling starts from known, or suspected, causalities, expressed in the form of a model, from which predictions are made about what to expect.
Reverse modeling has been widely used to analyze the post-genome, -omic data glut and is sometimes mistakenly equated with systems biology [10]. It has occasionally suggested new conceptual ideas but has more often been used to suggest new molecular components or interactions, which have then been confirmed by conventional molecular biological approaches. The models themselves have been of less significance for understanding system behavior than as a mathematical context in which statistical inference becomes feasible. In contrast, most of our understanding of system behavior, as in concepts such as homeostasis, feedback, canalization and noise, have emerged from forward modeling.
I will focus below on the kinds of models used in forward modeling. This is not to imply that reverse modeling is unimportant or uninteresting. There are many situations, especially when dealing with physiological or clinical data, where the underlying causalities are unknown or hideously complicated and a reverse-modeling strategy makes good sense. But the issues in distilling causality from correlation deserve their own treatment, which lies outside the scope of the present essay [11].
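To make the distinction concrete, here is a minimal sketch of the two strategies applied to the same toy system; the exponential-decay model, the rate constant and the noise level are invented for illustration and are not drawn from any of the examples discussed later.

```python
import numpy as np

# Forward modeling: assume a causal model (first-order decay of a protein)
# and deduce what we should observe.
k_true = 0.3                          # assumed degradation rate (illustrative)
t = np.linspace(0, 10, 50)
predicted = np.exp(-k_true * t)       # the model's conclusion: expected time course

# Reverse modeling: start from (noisy) data and infer the parameter that
# best accounts for the correlations in the data.
rng = np.random.default_rng(0)
observed = predicted + rng.normal(0.0, 0.02, size=t.size)
slope, _ = np.polyfit(t, np.log(np.clip(observed, 1e-6, None)), 1)
k_inferred = -slope

print(f"assumed k = {k_true}, inferred k = {k_inferred:.3f}")
```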
The logical structure of models
Mathematical models come in a variety of flavors, depending on whether the state of a system is measured in discrete units (‘off’ and ‘on’), in continuous concentrations or as probability distributions and whether time and space are themselves treated discretely or continuously. The resulting menagerie of ordinary differential equations, partial differential equations, delay differential equations, stochastic processes, finite-state automata, cellular automata, Petri nets, hybrid models,... each have their specific technical foibles and a vast associated technical literature. It is easy to get drowned by these technicalities, while losing sight of the bigger picture of what the model is telling us. Underneath all that technical variety, each model has the same logical structure.
Any mathematical model, no matter how complicated, consists of a set of assumptions, from which are deduced a set of conclusions. The technical machinery specific to each flavor of model is concerned with deducing the latter from the former. This deduction comes with a guarantee, which, unlike other guarantees, can never be invalidated. Provided the model is correct, if you accept its assumptions, you must as a matter of logic also accept its conclusions. If ‘Socrates is a man’ and ‘All men are mortal’ then you cannot deny that ‘Socrates is mortal’. The deductive process that leads from assumptions to conclusions involves much the same Aristotelian syllogisms disguised in the particular technical language appropriate to the particular flavor of model being used or, more often, yet further disguised in computer-speak. This guarantee of logical rigor is a mathematical model’s unique benefit.
Note, however, the fine print: ‘provided the model is correct’. If the deductive reasoning is faulty, one can draw any conclusion from any assumption. There is no guarantee that a model is correct (only a guarantee that if it is correct then the conclusions logically follow from the assumptions). We have to hope that the model’s makers have done it right and that the editors and the reviewers have done their jobs. The best way to check this is to redo the calculations by a different method. This is rarely easy but it is what mathematicians do within mathematics itself. Reproducibility improves credibility. We may not have a guarantee that a model is correct but we can become more (or less) confident that it is. The practice of mathematics is not so very different from the experimental world after all.
The correctness of a model is an important issue that is poorly addressed by the current review process. However, it can be addressed as just described. From now on, I will assume the correctness of any model being discussed and will take its guarantee of logical validity at face value.
The guarantee tells us that the conclusions are already wrapped up in the assumptions, of which they are a logical consequence. This is not to say that the conclusions are obvious. This may be far from the case and the deductive process can be extremely challenging. However, that is a matter of mathematical technique. It should not distract from what is important for the biology, which is the set of assumptions, or the price being paid for the conclusions being drawn. Instead of asking whether we believe a model’s conclusions, we should be asking whether we believe the model’s assumptions. What basis do we have for doing so?
On making assumptions
Biology rests on physics. At the length scales and timescales relevant to biology, physicists have worked out the fundamental laws governing the behavior of matter. If our assumptions can be grounded in physics, then it seems that our models should be predictive, in the sense that they are not subject to falsification—that issue has already been taken care of with the fundamental laws—so that we can be confident of the conclusions drawn. Physicists would make an even stronger claim on the basis that, at the fundamental level, there is nothing other than physics. As Richard Feynman put it, ‘all things are made of atoms and... everything that living things do can be understood in terms of the jigglings and wigglings of atoms’ [12, Chapter 3-3]. This suggests that provided we have included all the relevant assumptions in our models then whatever is to be known should emerge from our calculations. Models based on fundamental physical laws appear in this way to be objective descriptions of reality, which we can interrogate to understand reality. This vision of the world and our place in it has been powerful and compelling.
Can we ground biological models on fundamental physical laws? The Schrödinger equation even for a single protein is too hideously complicated to solve directly. There is, however, one context in which it can be approximated. Not surprisingly, this is at the atomic scale of which Feynman spoke, where molecular dynamics models can capture the jigglings and wigglings of the atoms of a protein in solution or in a lipid membrane in terms of physical forces [13]. With improved computing resources, including purpose-built supercomputers, such molecular dynamics models have provided novel insights into the functioning of proteins and multi-protein complexes [14,15]. The award of the 2013 Nobel Prize in Chemistry to Martin Karplus, Michael Levitt and Arieh Warshel recognizes the broad impact of these advances.
As we move up the biological scale, from atoms to molecules, we enter a different realm, of chemistry, or biochemistry, rather than physics. But chemistry is grounded in physics, is it not? Well, so they say, but let us see what actually happens when we encounter a chemical reaction, say

\[ A + B \rightarrow C, \]

and want to study it quantitatively. To determine the rate of such a reaction, the universal practice in biology is to appeal to the law of mass action, which says that the rate is proportional to the product of the concentrations of the reactants, from which we deduce that

\[ \frac{d[C]}{dt} = k\,[A][B], \]

where [-] denotes concentration and k is the constant of proportionality. Notice the immense convenience that mass action offers, for we can jump from reaction to mathematics without stopping to think about the chemistry. There is only one problem. This law of mass action is not chemistry. A chemist might point out, for instance, that the reaction of hydrogen and bromine in the gas phase to form hydrobromic acid,

\[ \mathrm{H_2 + Br_2 \rightarrow 2\,HBr}, \]

has a rate of reaction given by

\[ \frac{d[\mathrm{HBr}]}{dt} = \frac{k_1\,[\mathrm{H_2}][\mathrm{Br_2}]^{3/2}}{[\mathrm{Br_2}] + k_2\,[\mathrm{HBr}]}, \]

which is rather far from what mass action claims, and that, in general, you cannot deduce the rate of a reaction from its stoichiometry [16]. (For more about the tangled tale of mass action, see [17], from which this example is thieved.) Mass action is not physics or even chemistry, it is phenomenology: a mathematical formulation, which may account for observed behavior but which is not based on fundamental laws.
Actually, mass action is rather good phenomenology. It has worked well to account for how enzymes behave, starting with Michaelis and Menten and carrying on right through to the modern era [18]. It is certainly more principled than what is typically done when trying to convert biological understanding into mathematical assumptions. If A is known to activate B—perhaps A is a transcription factor and B a protein that is induced by A—then it is not unusual to find activation summarized in some Hill function of the form

\[ \text{rate of production of } B \;=\; \frac{M\,[A]^h}{K^h + [A]^h}, \]

with Hill coefficient h and constants M and K, for which, as Hill himself well understood and has been repeatedly pointed out [19], there is almost no realistic biochemical justification. It is, at best, a guess.
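For concreteness, here is how such assumptions are typically turned into something computable. The sketch below, with invented species and rate constants, encodes a mass-action term and a Hill-type activation term as ordinary differential equations and integrates them; it illustrates only the bookkeeping, not any particular published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model (all values assumed): A + B -> C by mass action, while A also
# drives production of a protein P through a Hill function.
k = 1.0      # mass-action rate constant
M = 2.0      # maximal production rate of P
K = 0.5      # Hill threshold
h = 2.0      # Hill coefficient
d = 0.1      # first-order degradation rate of P

def rhs(t, y):
    A, B, C, P = y
    flux = k * A * B                        # the mass-action assumption
    hill = M * A**h / (K**h + A**h)         # the Hill-function guess
    return [-flux, -flux, flux, hill - d * P]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.8, 0.0, 0.0], max_step=0.1)
print("final [A], [B], [C], [P]:", np.round(sol.y[:, -1], 3))
```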
The point here is not that we should not guess; we often have no choice but to do so. The point is to acknowledge the consequences of phenomenology and guessing for the kinds of models we make. They are no longer objective descriptions of reality. They can no longer be considered predictive, in the sense of physics or even of molecular dynamics. What then are they?
One person who understood the answer was the pharmacologist James Black [20]. Pharmacology has been a quantitative discipline almost since its inception and mathematical models have formed the basis for much of our understanding of how drugs interact with receptors [21]. (Indeed, models were the basis for understanding that there might be such entities as receptors in the first place [2]). Black used mathematical models on the road that led to the first beta-adrenergic receptor antagonists, or beta blockers, and in his lecture for the 1988 Nobel Prize in Physiology or Medicine he crystallized his understanding of them in a way that nobody has ever bettered: ‘Models in analytical pharmacology are not meant to be descriptions, pathetic descriptions, of nature; they are designed to be accurate descriptions of our pathetic thinking about nature’ [22]. Just substitute ‘systems biology’ for ‘analytical pharmacology’ and you have it. Black went on to say about models that: ‘They are meant to expose assumptions, define expectations and help us to devise new tests’.
An important difference arises between models like this, which are based on phenomenology and guesswork, and models based on fundamental physics. If the model is not going to be predictive and if we are not certain of its assumptions, then there is no justification for the model other than as a test of its (pathetic) assumptions. The model must be falsifiable. To achieve this, it is tempting to focus on the model, piling the assumptions up higher and deeper in the hope that they might eventually yield an unexpected conclusion. More often than not, the conclusions reached in this way are banal and unsurprising. It is better to focus on the biology by asking a specific question, so that at least one knows whether or not the assumptions are sufficient for an answer. Indeed, it is better to have a question in mind first because that can guide both the choice of assumptions and the flavor of the model that is used. Sensing which assumptions might be critical and which irrelevant to the question at hand is the art of modeling and, for this, there is no substitute for a deep understanding of the biology. Good model building is a subjective exercise, dependent on local information and expertise, and contingent upon current knowledge. As to what biological insights all this might bring, that is best revealed by example.
Three models
The examples that follow extend from cell biology to immunology to developmental biology. They are personal favorites and illuminate different issues.
Learning how to think about non-identical compartments
The eukaryotic cell has an internal structure of membrane-bounded compartments—nucleus, endoplasmic reticulum, Golgi and endosomes—which dynamically interact through vesicle trafficking. Vesicles bud from and fuse to compartments, thereby exchanging lipids and proteins. The elucidation of trafficking mechanisms was celebrated in the 2013 Nobel Prize in Physiology or Medicine awarded to Jim Rothman, Randy Schekman and Thomas Südhof. A puzzling question that remains unanswered is how distinct compartments remain distinct, with varied lipid and protein profiles, despite continuously exchanging material. How are non-identical compartments created and maintained?
Reinhart Heinrich and Tom Rapoport address this question through a mathematical model [23], which formalizes the sketch in Figure 1. Coat proteins A and B, corresponding to Coat Protein I (COPI) and COPII, encourage vesicle budding from compartments 1 and 2. Soluble N-ethyl-maleimide-sensitive factor attachment protein receptors (SNAREs) X, U, Y and V are present in the compartment membranes and mediate vesicle fusion by pairing X with U and Y with V, corresponding to v- and t-SNAREs. A critical assumption is that SNAREs are packaged into vesicles to an extent that depends on their affinities for coats, for which there is some experimental evidence. If the cognate SNAREs X and U bind better to coat A than to coat B, while SNAREs Y and V bind better to coat B than to coat A, then the model exhibits a threshold in the relative affinities at which non-identical compartments naturally emerge. Above this threshold, even if the model is started with identical distributions of SNAREs in the two compartments, it evolves over time to a steady state in which the SNARE distributions are different. This is illustrated in Figure 1, with a preponderance of SNAREs X and U in compartment 1 and a preponderance of SNAREs Y and V in compartment 2.
Figure 1. Creation of non-identical compartments. Schematic of the Heinrich–Rapoport model, from [23, Figure one], with the distribution of SNAREs corresponding approximately to the steady state with non-identical compartments. © 2005 Heinrich and Rapoport. Originally published in Journal of Cell Biology, 168:271-280, doi:10.1083/jcb.200409087. SNARE, soluble N-ethyl-maleimide-sensitive factor attachment protein receptor.
The actual details of coats and SNAREs are a good deal more complicated than in this model. It is a parsimonious model, containing just enough biological detail to reveal the phenomenon, thereby allowing its essence—the differential affinity of SNAREs for coats—to be clearly understood. We see that a model can be useful not just to account for data—there is no data here—but to help us think. However, the biological details are only part of the story; the mathematical details must also be addressed. Even a parsimonious model typically has several free parameters, such as, in this case, binding affinities or total amounts of SNAREs or coats. To sidestep the parameter problem, discussed further in the next example, parameters of a similar type are set equal to each other. Here, judgment plays a role in assessing that differences in these parameters might play a secondary role. The merit of this assumption could have been tested by sensitivity analysis [24], which can offer reassurance that the model behavior is not some lucky accident of the particular values chosen for the parameters.
The model immediately suggests experiments that could falsify it, of which the most compelling would be in vitro reconstitution of compartments with a minimal set of coats and SNAREs. I was curious about whether this had been attempted and asked Tom Rapoport about it. Tom is a cell biologist [25] whereas the late Reinhart Heinrich was a physicist [26]. Their long-standing collaboration (they were pioneers in the development of metabolic control analysis in the 1970s) was stimulated by Tom’s father, Samuel Rapoport, himself a biochemist with mathematical convictions [27]. Tom explained that the model had arisen from his sense that there might be a simple explanation for distinct compartments, despite the complexity of trafficking mechanisms, but that his own laboratory was not in a position to undertake the follow-up experiments. Although he had discussed the ideas with others who were better placed to do so, the field still seemed to be focused on the molecular details.
The model makes us think further, as all good models should. The morphology of a multicellular organism is a hereditary feature that is encoded in DNA, in genetic regulatory programs that operate during development. But what encodes the morphology of the eukaryotic cell itself? This is also inherited: internal membranes are dissolved or fragmented during cell division, only to reform in their characteristic patterns in the daughter cells after cytokinesis. Trafficking proteins are genetically encoded but how is the information to reform compartments passed from mother to daughter? The Heinrich–Rapoport model suggests that this characteristic morphology may emerge dynamically, merely as a result of the right proteins being present along with the right lipids. This would be a form of epigenetic inheritance [28], in contrast to the usual genetic encoding in DNA. Of course, DNA never functions on its own, only in concert with a cell. The Heinrich–Rapoport model reminds us that the cell is the basic unit of life. Somebody really ought to test the model.
Discrimination by the T-cell receptor and the parameter problem
Cytotoxic T cells of the adaptive immune system discriminate between self and non-self through the interaction between the T-cell receptor (TCR) and major histocompatibility complex (MHC) proteins on the surface of a target cell. MHCs present short peptide antigens (eight amino acids), derived from proteins in the target cell, on their external surface. The discrimination mechanism must be highly sensitive, to detect a small number of strong agonist, non-self peptide-MHCs (pMHCs) against a much larger background of weak agonist, self pMHCs on the same target cell. It must also be highly specific, since the difference between strong- and weak-agonist pMHCs may rest on only a single amino acid. Discrimination also appears to be very fast, with downstream signaling proteins being activated within 15 seconds of TCR interaction with a strong agonist pMHC. A molecular device that discriminates with such speed, sensitivity and specificity would be a challenge to modern engineering. It is an impressive demonstration of evolutionary tinkering, which Grégoire Altan-Bonnet and Ron Germain sought to explain by combining mathematical modeling with experiments [29].
The lifetime of pMHC-TCR binding had been found to be one of the few biophysical quantities to correlate with T-cell activation. Specificity through binding had previously been analyzed by John Hopfield in a classic study [30]. He showed that a system at thermodynamic equilibrium could not achieve discrimination beyond a certain minimum level but that with sufficient dissipation of energy, arbitrarily high levels of discrimination were possible. He suggested a ‘kinetic proofreading’ scheme to accomplish this, which Tim McKeithan subsequently extended to explain TCR specificity [31]. pMHC binding to the TCR activates lymphocyte-specific protein tyrosine kinase (LCK), which undertakes multiple phosphorylations of TCR accessory proteins and these phosphorylations are presumed to be the dissipative steps. However, the difficulty with a purely kinetic proofreading scheme is that specificity is purchased at the expense of both sensitivity and speed [32]. Previous work from the Germain laboratory had implicated SH2 domain-containing tyrosine phosphatase-1 (SHP-1) in downregulating LCK for weak agonists and the mitogen-activated protein kinase (MAPK), extracellular signal-regulated kinase (ERK), in inhibiting SHP-1 for strong agonists [33]. This led Altan-Bonnet and Germain to put forward the scheme in Figure 2, in which a core kinetic proofreading scheme stimulates negative feedback through SHP-1 together with a slower positive feedback through ERK. The behavior of interlinked feedback loops has been a recurring theme in the literature [34,35].
Figure 2. Discrimination by the T-cell receptor. Schematic of the Altan-Bonnet–Germain model from [29, Figure two A], showing a kinetic proofreading scheme through a sequence of tyrosine phosphorylations, which is triggered by the binding of the TCR to pMHC, linked with a negative feedback loop through the tyrosine phosphatase SHP-1 and a positive feedback loop through MAPK. MAPK, mitogen-activated protein kinase; pMHC, peptide-major histocompatibility complex; P, singly phosphorylated; PP, multiply phosphorylated; SHP-1, SH2 domain-containing tyrosine phosphatase-1; TCR, T-cell receptor.
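To get a feel for why pure kinetic proofreading buys specificity at the price of sensitivity, it helps to do the sums for a caricature of McKeithan's scheme: a bound pMHC-TCR complex must complete N modification steps before it signals, and dissociation at any step throws all progress away. The sketch below uses entirely made-up rates and lifetimes and is not the Altan-Bonnet–Germain model; it only shows how discrimination between strong and weak agonists grows with the number of steps while the probability that even a strong agonist triggers a response falls.

```python
# A minimal caricature of kinetic proofreading (all numbers are assumptions).
# A bound complex must pass N irreversible modification steps, each at rate
# k_p, before signaling; dissociation (rate 1/lifetime) at any step resets it.

def activation_probability(lifetime, k_p, n_steps):
    """Probability that one binding event completes all steps before unbinding."""
    k_off = 1.0 / lifetime
    return (k_p / (k_p + k_off)) ** n_steps

k_p = 0.5        # modification rate per step, 1/s (assumed)
strong = 10.0    # strong-agonist pMHC-TCR lifetime, s (assumed)
weak = 3.0       # weak-agonist lifetime, s (assumed)

for n_steps in (1, 3, 6, 10):
    p_strong = activation_probability(strong, k_p, n_steps)
    p_weak = activation_probability(weak, k_p, n_steps)
    print(f"N={n_steps:2d}  strong={p_strong:.3f}  weak={p_weak:.4f}  "
          f"ratio={p_strong / p_weak:5.1f}")
```

More steps sharpen the ratio between strong and weak agonists but squash the absolute response, which is the trade-off that the feedback loops in Figure 2 are there to escape.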
A parsimonious model of such a system might have been formulated with abstract negative and positive feedback differentially influencing a simple kinetic proofreading scheme. In fact, exactly this was done some years later [36]. The advantage of such parsimony is that it is easier to analyze how the interaction between negative and positive feedback regulates model behavior. The biological wood starts to emerge from the molecular trees, much as it did for Heinrich and Rapoport in the previous example. But the goal here also involves the interpretation of quantitative experimental data. Altan-Bonnet and Germain opted instead for a detailed model based on the known biochemistry. Their model has around 300 dynamical variables. Only the core module is described in the main paper, with the remaining nine modules consigned to the Supplementary Graveyard. Herbert Sauro’s JDesigner software, part of the Systems Biology Workbench [37], is required to view the model in its entirety.
The tension between parsimony and detail runs through systems biology like a fault line. To some, and particularly to experimentalists, detail is verisimilitude. The more a model looks like reality, the more it might tell us about reality. The devil is in the details. But we never bother ourselves with all the details. All those phosphorylation sites? Really? All 12 subunits of RNA Pol II? Really? We are always simplifying—ignoring what we think is irrelevant—or abstracting—replacing something complicated by some higher-level entity that is easier to grasp. This is as true for the experimentalist’s informal model—the cartoon that is sketched on the whiteboard—as it is for the mathematician’s formal model. It is impossible to think about molecular systems without such strategies: it is just that experimentalists and mathematicians do it differently and with different motivations. There is much to learn on both sides, for mathematicians about the hidden assumptions that guide experimental thinking, often so deeply buried as to require psychoanalysis to elicit, and for experimentalists about the power of abstraction and its ability to offer a new language in which to think. We are in the infancy of learning how to learn from each other.
The principal disadvantage of a biologically detailed model is the attendant parameter problem. Parameter values are usually estimated by fitting the model to experimental data. Fitting only constrains some parameters; a good rule of thumb is that 20% of the parameters are well constrained by fitting, while 80% are not [38]. As John von Neumann said, expressing a mathematician’s disdain for such sloppiness, ‘With four parameters I can fit an elephant and with five I can make him wiggle his trunk’ [39]. What von Neumann meant is that a model with too many parameters is hard to falsify. It can fit almost any data and what explanatory power it might have may only be an accident of the particular parameter values that emerge from the fitting procedure. Judging from some of the literature, we seem to forget that a model does not predict the data to which it is fitted: the model is chosen to fit them. In disciplines where fitting is a professional necessity, such as X-ray crystallography, it is standard practice to fit to a training data set and to falsify the model, once it is fitted, on whether or not it predicts what is important [40]. In other words, do not fit what you want to explain!
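The warning about fitting is easy to demonstrate with synthetic numbers. In the sketch below, which has nothing to do with any real data set, a model with many free parameters reproduces its training points almost exactly, yet its error on interleaved held-out points does not shrink to match; the fit flatters the model without testing it.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

# Synthetic "measurements": a smooth signal plus noise. Purely made up.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

train = np.arange(0, 20, 2)    # fit on the even-indexed points...
test = np.arange(1, 20, 2)     # ...judge on the interleaved odd-indexed ones

for degree in (3, 9):          # a modest model versus an elephant-wiggler
    model = Polynomial.fit(x[train], y[train], deg=degree)
    resid_train = model(x[train]) - y[train]
    resid_test = model(x[test]) - y[test]
    print(f"degree {degree}: training RMS = {np.sqrt(np.mean(resid_train**2)):.3f}, "
          f"held-out RMS = {np.sqrt(np.mean(resid_test**2)):.3f}")
```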
Remarkably, Altan-Bonnet and Germain sidestepped these problems by not fitting their model at all. They adopted the same tactic as Heinrich and Rapoport and set many similar parameters to the same value, leaving a relatively small number of free parameters. Biological detail was balanced by parametric parsimony. The free parameters were then heroically estimated in independent experiments. I am told that every model parameter was constrained, although this is not at all clear from the paper.
What was also not mentioned, as Ron Germain reported, is that ‘the model never worked until we actually measured ERK activation at the single cell level and discovered its digital nature’. We see that the published model emerged through a cycle of falsification, although here it is the model that falsifies the interpretation of population-averaged data, reminding us yet again that the mean may not be representative of the distribution.
With the measured parameter values, the model exhibits a sharp threshold at a pMHC-TCR lifetime of about 3 seconds, above which a few pMHCs (10 to 100) are sufficient to trigger full downstream activation of ERK in 3 minutes. Lifetimes below the threshold exhibit a hierarchy of responses, with those close to the threshold triggering activation only with much larger amounts of pMHCs (100,000), while those further below the threshold are squelched by the negative feedback without ERK activation. This accounts well for the specificity, sensitivity and speed of T-cell discrimination but the authors went further. They interrogated the fitted model to make predictions about issues such as antagonism and tunability and they confirmed these with new experiments [29]. The model was repeatedly forced to put its falsifiability on the line. In doing so, the boundary of its explanatory power was reached: it could not account for the delay in ERK activation with very weak ligands and the authors explicitly pointed this out. This should be the accepted practice; it is the equivalent of a negative control in an experiment. A model that explains everything, explains nothing. Even von Neumann might have approved.
To be so successful, a detailed model relies on a powerful experimental platform. The OT-1 T cells were obtained from a transgenic mouse line that only expresses a TCR that is sensitive to the strong-agonist peptide SIINFEKL (amino acids 257 to 264 of chicken ovalbumin). The RMA-S target cells were derived from a lymphoma that was mutagenized to be deficient in antigen processing, so that the cells present only exogenously supplied peptides on MHCs. T-cell activation was measured by flow cytometry with a phospho-specific antibody to activated ERK. In this way, calibrated amounts of chosen peptides can be presented on MHCs to a single type of TCR, much of the molecular and cellular heterogeneity can be controlled and quantitative data obtained at the single-cell level. Such exceptional experimental capabilities are not always available in other biological contexts.
From micro to macro: the somitogenesis clock
Animals exhibit repetitive anatomical structures, like the spinal column and its attendant array of ribs and muscles in vertebrates and the multiple body segments carrying wings, halteres and legs in arthropods like Drosophila. During vertebrate development, repetitive structures form sequentially over time. In the mid 1970s, the developmental biologist Jonathan Cooke and the mathematician Chris Zeeman suggested that the successive formation of somites (bilateral blocks of mesodermal tissue on either side of the neural tube—see Figure 3) might be driven by a cell-autonomous clock, which progressively initiates somite formation in an anterior to posterior sequence as if in a wavefront [41]. They were led to this clock-and-wavefront model in an attempt to explain the remarkable consistency of somite number within a species, despite substantial variation in embryo sizes at the onset of somitogenesis [42]. In the absence of molecular details, which were beyond reach at the time, their idea fell on stony ground. It disappeared from the literature until Olivier Pourquié’s group found the clock in the chicken. His laboratory showed, using fluorescent in situ hybridization to mRNA in tissue, that the gene c-hairy1 exhibits oscillatory mRNA expression with a period of 90 minutes, exactly the time required to form one somite [43]. The somitogenesis clock was found to be conserved across vertebrates, with basic helix-loop-helix transcription factors of the Hairy/Enhancer of Split (HES) family, acting downstream of Notch signaling, exhibiting oscillations in expression with periods ranging from 30 minutes in zebrafish (at 28°C) to 120 minutes in mouse [44]. Such oscillatory genes in somite formation were termed cyclic genes.
Figure 3. The somitogenesis clock. Top: A zebrafish embryo at the ten-somite stage, stained by in situ hybridization for mRNA of the Notch ligand DeltaC, taken from [47, Figure one]. Bottom left: Potential auto-regulatory mechanisms in the zebrafish, taken from [47, Figure three A,B]. In the upper mechanism, the Her1 protein dimerizes before repressing its own transcription. In the lower mechanism, Her1 and Her7 form a heterodimer, which represses transcription of both genes, which occur close to each other but are transcribed in opposite directions. Explicit transcription and translation delays are shown, which are incorporated in the corresponding models. Bottom right: Mouse embryos stained by in situ hybridization for Uncx4.1 mRNA, a homeobox gene that marks somites, taken from [52, Figure four].
As to the mechanism of the oscillation, negative feedback of a protein on its own gene was known to be a feature of other oscillators [45] and some cyclic genes, like hes7 in the mouse, were found to exhibit this property. Negative feedback is usually associated with homeostasis—with restoring a system after perturbation—but, as engineers know all too well, it can bring with it the seeds of instability and oscillation [46]. However, Palmeirim et al. had blocked protein synthesis in chick embryos with cycloheximide and found that c-hairy1 mRNA continued to oscillate, suggesting that c-hairy1 was not itself part of a negative-feedback oscillator but was, perhaps, driven by some other oscillatory mechanism. It remained unclear how the clock worked.
The developmental biologist Julian Lewis tried to resolve this question in the zebrafish with the help of a mathematical model [47]. Zebrafish have a very short somite-formation period of 30 minutes, suggesting that evolutionary tinkering may have led to a less elaborate oscillator than in other animals. The HES family genes her1 and her7 were known to exhibit oscillations and there was some evidence for negative auto-regulation.
Lewis opted for the most parsimonious of models to formalize negative auto-regulation of her1 and her7 on themselves, as informally depicted in Figure 3. However, he made one critical addition by explicitly incorporating the time delays in transcription and translation. Time delay in a negative feedback loop is one feature that promotes oscillation, the other being the strength of the negative feedback. Indeed, there seems to be a trade-off between these features: the more delay, the less strong the feedback has to be for oscillation to occur [48]. Lewis acknowledged the mathematical biologist Nick Monk for alerting him to the importance of delays and Lewis’s article in Current Biology appeared beside one from Monk exploring time delays in a variety of molecular oscillators [49]. The idea must have been in the air because Jensen et al. independently made the same suggestion in a letter [50].
The model parameters, including the time delays, were all estimated on the basis of reasonable choices for her1 and her7, taking into account, for instance, the intronic structure of the genes to estimate transcriptional time delays. Nothing was fitted. With the estimated values, the models showed sustained periodic oscillations. A pure Her7 oscillator with homodimerization of Her7 prior to DNA binding (which determines the strength of the repression) had a period of 30 minutes. As with the Heinrich–Rapoport model, there is no data but much biology. What is achieved is the demonstration that a simple auto-regulatory loop can plausibly yield sustained oscillations of the right period. A significant finding was that the oscillations were remarkably robust to the rate of protein synthesis, which could be lowered by 90% without stopping the oscillations or, indeed, changing the period very much. This suggests a different interpretation of Palmeirim et al.’s cycloheximide block in the chick. As Lewis pointed out, ‘in studying these biological feedback phenomena, intuition without the support of a little mathematics can be a treacherous guide’, a theme to which he returned in a later review [51].
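A flavor of the calculation is easy to reproduce. The sketch below integrates a single delayed negative-feedback loop of the kind Lewis analyzed, with an mRNA species m, a protein p, an explicit transcriptional delay tau_m, a translational delay tau_p and a dimer-like (Hill coefficient 2) repression term. The parameter values are assumptions chosen to be in the general range discussed in the text, not the published ones, and the crude Euler integration is only meant to expose the mechanism; lowering the protein degradation rate b is the knob to turn to see the damping effect described in the next paragraph.

```python
import numpy as np

# A minimal sketch of a delayed negative-feedback oscillator in the spirit of
# the Lewis model: dm/dt = f(p(t - tau_m)) - c*m, dp/dt = a*m(t - tau_p) - b*p.
# All parameter values below are assumptions for illustration only.

dt = 0.01                      # time step, minutes
n = int(600.0 / dt)            # simulate 10 hours

a = 4.5                        # proteins made per mRNA per minute (assumed)
b = 0.23                       # protein degradation rate, 1/min (assumed)
c = 0.23                       # mRNA degradation rate, 1/min (assumed)
k = 33.0                       # maximal transcription rate, mRNA/min (assumed)
p0 = 40.0                      # repression threshold, protein copies (assumed)
tau_m = 12.0                   # transcriptional delay, min (assumed)
tau_p = 2.8                    # translational delay, min (assumed)

dm_steps = int(tau_m / dt)     # delays expressed in time steps
dp_steps = int(tau_p / dt)

m = np.zeros(n)
p = np.zeros(n)

def transcription(protein):
    """Hill-type repression (dimer binding) of transcription by delayed protein."""
    return k / (1.0 + (protein / p0) ** 2)

for i in range(1, n):
    p_past = p[i - 1 - dm_steps] if i - 1 - dm_steps >= 0 else 0.0
    m_past = m[i - 1 - dp_steps] if i - 1 - dp_steps >= 0 else 0.0
    m[i] = m[i - 1] + dt * (transcription(p_past) - c * m[i - 1])
    p[i] = p[i - 1] + dt * (a * m_past - b * p[i - 1])

# Estimate the period from the spacing of the last few protein peaks.
peaks = [i for i in range(1, n - 1) if p[i - 1] < p[i] > p[i + 1]]
if len(peaks) >= 4:
    print("approximate period (min):", np.diff(peaks[-4:]).mean() * dt)
```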
A particularly startling test of the delay model was carried out in the mouse by Ryoichiro Kageyama’s laboratory in collaboration with Lewis [52]. The period for somite formation in the mouse is 120 minutes and evidence had implicated the mouse hes7 gene as part of the clock mechanism. Assuming a Hes7 half-life of 20 minutes (against a measured half-life of 22.3 minutes), Lewis’s delay model yielded sustained oscillations with a period just over 120 minutes. The model also showed that if Hes7 was stabilized slightly to have a half-life only 10 minutes longer, then the clock broke: the oscillations were no longer sustained but damped out after the first three or four peaks of expression [52, Figure six B]. Hirata et al. had the clever idea of mutating each of the seven lysine residues in Hes7 to arginine, on the basis that the ubiquitin-proteasomal degradation system would use one or more of these lysines for ubiquitination. The K14R mutant was found to repress hes7 transcription to the same extent as the wild type but to have an increased half-life of 30 minutes. A knock-in mouse expressing Hes7 K14R/K14R showed, exactly as predicted, the first three to four somites clearly delineated, followed by a disorganized blur (Figure 3).
Further work from the Kageyama laboratory, as well as by others, has explored the role of introns in determining the transcriptional delays in the somitogenesis clock, leading to experiments in transgenic mice that again beautifully confirm the predictions of the Lewis model [53-55]. These results strongly suggest the critical role of delays in breaking the clock but it remains of interest to know the developmental consequences of a working clock with a different period to the wild type [56].
On the face of it, Julian Lewis’s simple model has been a predictive triumph. I cannot think of any other model that can so accurately predict what happens in re-engineered mice. On closer examination, however, there is something distinctly spooky about it. If mouse pre-somitic mesodermal cells are dissociated in culture, individual cells show repetitive peaks of expression of cyclic genes but with great variability in amplitude and period [57]. In isolation, the clock is noisy and unsynchronized, nothing like the beautiful regularity that is observed in the intact tissue. The simple Lewis model can be made much more detailed to allow for such things as stochasticity in gene expression, additional feedback and cell-to-cell communication by signaling pathways, which can serve to synchronize and entrain individual oscillators [47,58-60]. A more abstract approach can also be taken, in which emergent regularity is seen to arise when noisy oscillators interact through time-delayed couplings [61,62]. As Andy Oates said to me, such an abstraction ‘becomes simpler (or at least more satisfying) than an increasingly large genetic regulatory network, which starts to grow trunks at alarming angles’. These kinds of ‘tiered models’ have yielded much insight into the complex mechanisms at work in the tissue [63]. The thing is, none of this molecular complexity is present in the Lewis model. Yet, it describes what happens in the mouse with remarkable accuracy. The microscopic complexity seems to have conspired to produce something beautifully simple at the macroscopic level. In physics, the macroscopic gas law, PV=RT, is beautifully simple and statistical mechanics shows how it emerges from the chaos of molecular interactions [64]. How does the Lewis model emerge in the tissue from the molecular complexity within? It is as if we are seeing a tantalizing glimpse of some future science whose concepts and methods remain barely visible to us in the present. Every time I think about it, the hairs on the back of my neck stand up.
A mathematical model is a logical machine for converting assumptions into conclusions. If the model is correct and we believe its assumptions then we must, as a matter of logic, believe its conclusions. This logical guarantee allows a modeler, in principle, to navigate with confidence far from the assumptions, perhaps much further than intuition might allow, no matter how insightful, and reach surprising conclusions. But, and this is the essential point, the certainty is always relative to the assumptions. Do we believe our assumptions? We believe fundamental physics on which biology rests. We can deduce many things from physics but not, alas, the existence of physicists. This leaves us, at least in the molecular realm, in the hands of phenomenology and informed guesswork. There is nothing wrong with that but we should not fool ourselves that our models are objective and predictive, in the sense of fundamental physics. They are, in James Black’s resonant phrase, ‘accurate descriptions of our pathetic thinking’.
Mathematical models are a tool, which some biologists have used to great effect. My distinguished Harvard colleague, Edward Wilson, has tried to reassure the mathematically phobic that they can still do good science without mathematics [65]. Absolutely, but why not use it when you can? Biology is complicated enough that we surely need every tool at our disposal. For those so minded, the perspective developed here suggests the following guidelines:
1. Ask a question. Building models for the sake of doing so might keep mathematicians happy but it is a poor way to do biology. Asking a question guides the choice of assumptions and the flavor of model and provides a criterion by which success can be judged.
2. Keep it simple. Including all the biochemical details may reassure biologists but it is a poor way to model. Keep the complexity of the assumptions in line with the experimental context and try to find the right abstractions.
3. If the model cannot be falsified, it is not telling you anything. Fitting is the bane of modeling. It deludes us into believing that we have predicted what we have fitted when all we have done is to select the model so that it fits. So, do not fit what you want to explain; stick the model’s neck out after it is fitted and try to falsify it.
In later life, Charles Darwin looked back on his early repugnance for mathematics, the fault of a teacher who was ‘a very dull man’, and said, ‘I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics; for men thus endowed seem to have an extra sense’ [66]. One of those people with an extra sense was an Augustinian friar, toiling in the provincial obscurity of Austro-Hungarian Brünn, teaching physics in the local school while laying the foundations for rescuing Darwin’s theory from oblivion [67], a task later accomplished, in the hands of J. B. S. Haldane, R. A. Fisher and Sewall Wright, largely by mathematics. Darwin and Mendel represent the qualitative and quantitative traditions in biology. It is a historical tragedy that they never came together in their lifetimes. If we are going to make sense of systems biology, we shall have to do a lot better.
Abbreviations
COPI: Coat Protein I; ERK: extracellular signal-regulated kinase; HES: Hairy/Enhancer of Split family; LCK: lymphocyte-specific protein tyrosine kinase; MAPK: mitogen-activated protein kinase; MHC: major histocompatibility complex; pMHC: peptide-MHC; SHP-1: SH2 domain-containing tyrosine phosphatase-1; SNARE: soluble N-ethyl-maleimide-sensitive factor attachment protein receptor; TCR: T-cell receptor.
Competing interests
The author declares that he has no competing interests.
Acknowledgements
I thank Grégoire Altan-Bonnet, Ron Germain, Ryo Kageyama, Julian Lewis, Andy Oates and Tom Rapoport for very helpful comments on their respective models but must point out that the opinions expressed in this paper are mine and that any errors or omissions should be laid at my door. I also thank two anonymous reviewers for their thoughtful comments and Mary Welstead for stringent editorial consultancy.
References
1. Gunawardena J: Some lessons about models from Michaelis and Menten. Mol Biol Cell 2012, 23:517-519.
2. Gunawardena J: Biology is more theoretical than physics. Mol Biol Cell 2013, 24:1827-1829.
3. Science 1998, 280:895-898.
4. Altschuler SJ, Wu LF: Cellular heterogeneity: do differences make a difference? Cell 2010, 141:559-563.
5. Krebs H: Otto Warburg: Cell Physiologist, Biochemist and Eccentric. Oxford, UK: Clarendon Press; 1981.
6. Watson JD: Genes, Girls and Gamow. Oxford, UK: Oxford University Press; 2001.
7. Kay LE: Who Wrote the Book of Life? A History of the Genetic Code. Stanford, CA, USA: Stanford University Press; 2000.
8. Chargaff E: Essays on Nucleic Acids. Amsterdam, Holland: Elsevier Publishing Company; 1963.
9. Nirenberg M: The genetic code. In Nobel Lectures, Physiology or Medicine 1963–1970. Amsterdam, Holland: Elsevier Publishing Co; 1972.
10. Brenner S: Sequences and consequences. Phil Trans Roy Soc 2010, 365:207-212.
11. Pearl J: Causality: Models, Reasoning and Inference. Cambridge, UK: Cambridge University Press; 2000.
12. Feynman RP, Leighton RB, Sands M: The Feynman Lectures on Physics. Volume 1. Mainly Mechanics, Radiation and Heat. Reading, MA, USA: Addison-Wesley; 1963.
13. Levitt M: The birth of computational structural biology. Nat Struct Biol 2001, 8:392-393.
14. Karplus M, Kuriyan J: Molecular dynamics and protein function. Proc Natl Acad Sci USA 2005, 102:6679-6685.
15. Annu Rev Biophys 2012, 41:429-452.
16. Atkins P, de Paula J: Elements of Physical Chemistry. Oxford, UK: Oxford University Press; 2009.
17. Mysels KJ: Textbook errors VII: the laws of reaction rates and of equilibria. J Chem Educ 1956, 33:178-179.
18. Cornish-Bowden A: Fundamentals of Enzyme Kinetics. London, UK: Portland Press; 1995.
19. Weiss JN: The Hill equation revisited: uses and misuses. FASEB J 1997, 11:835-841.
20. Black J: A personal view of pharmacology. Annu Rev Pharmacol Toxicol 1996, 36:1-33.
22. Black J: Drugs from emasculated hormones: the principles of syntopic antagonism. In Nobel Lectures, Physiology or Medicine 1981–1990. Edited by Frängsmyr T. Singapore: World Scientific; 1993.
23. Heinrich R, Rapoport TA: Generation of nonidentical compartments in vesicular transport systems. J Cell Biol 2005, 168:271-280.
24. Varma A, Morbidelli M, Wu H: Parametric Sensitivity in Chemical Systems. Cambridge, UK: Cambridge University Press; 2005.
25. Davis TH: Profile of Tom A Rapoport. Proc Natl Acad Sci USA 2005, 102:14129-14131.
26. Kirschner M: Reinhart Heinrich (1946–2006). Pioneer in systems biology. Nature 2006, 444:700.
27. Heinrich R, Rapoport SM, Rapoport TA: Metabolic regulation and mathematical models. Prog Biophys Molec Biol 1977, 32:1-82.
28. Ptashne M: On the use of the word ‘epigenetic’. Curr Biol 2007, 17:233-236.
29. Altan-Bonnet G, Germain RN: Modeling T cell antigen discrimination based on feedback control of digital ERK responses. PLoS Biol 2005, 3:1925-1938.
30. Proc Natl Acad Sci USA 1974, 71:4135-4139.
31. McKeithan TW: Kinetic proofreading in T-cell receptor signal transduction. Proc Natl Acad Sci USA 1995, 92:5042-5046.
32. Murugan A, Huse DA, Leibler S: Speed, dissipation, and error in kinetic proofreading. Proc Natl Acad Sci USA 2012, 109:12034-12039.
33. Štefanová I, Hemmer B, Vergelli M, Martin R, Biddison WE, Germain RN: TCR ligand discrimination is enforced by competing ERK positive and SHP-1 negative feedback pathways. Nat Immunol 2003, 4:248-254.
34. Science 2005, 310:496-498.
35. Science 2008, 321:126-129.
36. François P, Voisinne G, Siggia ED, Altan-Bonnet G, Vergassola M: Phenotypic model for early T-cell activation displaying sensitivity, specificity, and antagonism. Proc Natl Acad Sci USA 2013, 110:888-897.
37. Omics 2003, 7:355-372.
38. PLoS Comput Biol 2007, 3:1871-1878.
39. Dyson F: A meeting with Enrico Fermi. Nature 2004, 427:297.
40. Brünger A: Free R value: a novel statistical quantity for assessing the accuracy of crystal structures. Nature 1992, 355:472-475.
41. Cooke J, Zeeman EC: A clock and wavefront model for control of the number of repeated structures during animal morphogenesis. J Theor Biol 1976, 58:455-476.
42. Cooke J: The problem of periodic patterns in embryos. Phil Trans R Soc Lond B Biol Sci 1981, 295:509-524.
43. Palmeirim I, Henrique D, Ish-Horowicz D, Pourquié O: Avian hairy gene expression identifies a molecular clock linked to vertebrate segmentation and somitogenesis. Cell 1997, 91:639-648.
44. Pourquié O: The segmentation clock: converting embryonic time into spatial pattern. Science 2003, 301:328-330.
45. Sassone-Corsi P: Rhythmic transcription with autoregulatory loops: winding up the biological clock. Cell 1994, 78:361-364.
46. Åström KJ, Murray RM: Feedback Systems. An Introduction for Scientists and Engineers. Princeton, NJ, USA: Princeton University Press; 2008.
47. Lewis J: Autoinhibition with transcriptional delay: a simple mechanism for the zebrafish somitogenesis oscillator. Curr Biol 2003, 13:1398-1408.
48. Tyson JJ, Othmer HG: The dynamics of feedback control circuits in biochemical pathways. In Progress in Theoretical Biology, Volume 5. Edited by Rosen R, Snell F. New York, NY, USA: Academic Press; 1978.
49. Monk NAM: Oscillatory expression of Hes1, p53, and NF-κB driven by transcriptional time delays. Curr Biol 2003, 13:1409-1413.
50. FEBS Lett 2003, 541:176-177.
51. Lewis J: From signals to patterns: space, time and mathematics in developmental biology. Science 2008, 322:399-403.
52. Nat Genet 2004, 36:750-754.
53. Genes Dev 2008, 22:2342-2346.
54. Takashima Y, Ohtsuka T, González A, Miyachi H, Kageyama R: Intronic delay is essential for oscillatory expression in the segmentation clock. Proc Natl Acad Sci USA 2011, 108:3300-3305.
55. Harima Y, Takashima Y, Ueda Y, Ohtsuka T, Kageyama R: Accelerating the tempo of the segmentation clock by reducing the number of introns in the Hes7 gene. Cell Rep 2013, 3:1-7.
56. Oswald A, Oates AC: Control of endogenous gene expression timing by introns. Genome Biol 2011, 12:107.
57. Proc Natl Acad Sci USA 2006, 103:1313-1318.
58. Giudicelli F, Özbudak EM, Wright GJ, Lewis J: Setting the tempo in development: an investigation of the zebrafish somite clock mechanism. PLoS Biol 2007, 5:e150.
59. Schröter C, Ares S, Morelli LG, Isakova A, Hens K, Soroldoni D, Gajewski M, Jülicher F, Maerkl SJ, Deplancke B, Oates AC: Topology and dynamics of the zebrafish segmentation clock core circuit. PLoS Biol 2012, 10:e1001364.
60. Hanisch A, Holder MV, Choorapoikayil S, Gajewski M, Özbudak EM, Lewis J: The elongation rate of RNA polymerase II in zebrafish and its significance in the somite segmentation clock. Development 2013, 140:444-453.
61. Morelli LG, Ares S, Herrgen L, Schröter C, Jülicher F, Oates AC: Delayed coupling theory of vertebrate segmentation. HFSP J 2009, 3:55-66.
62. Herrgen L, Ares S, Morelli LG, Schröter C, Jülicher F, Oates AC: Intercellular coupling regulates the period of the segmentation clock. Curr Biol 2010, 20:1244-1253.
63. Development 2012, 139:625-639.
64. Khinchin AI: Mathematical Foundations of Statistical Mechanics. New York, NY, USA: Dover Publications Inc; 1949.
65. Wilson EO: Letters to a Young Scientist. New York, NY, USA: Liveright Publishing Corporation; 2013.
66. Barlow N (Ed): The Autobiography of Charles Darwin. 1809–1882. New York, NY, USA: W. W. Norton and Co, Inc; 1958.
67. Mawer S: Gregor Mendel. Planting the Seeds of Genetics. New York, NY, USA: Abrams; 2006.
Sunday, February 28, 2010
Libertarian Wack XXIV
Entropy I
Saturday, February 27, 2010
Waldman via DeLong
Robert Waldmann writes:
Review Review
I would like to see Wolfgang's review, but maybe he is too close to speak about this stuff.
Friday, February 26, 2010
A History of Violence
We humans clearly have the capability for either warlike or peaceful behavior, but the message of history is that violence is by far the more prominent thread. It is easy to imagine, also, that there was nothing freakish about the outcome of the encounter between Moriori and Maori - when peace meets war, war usually wins.
Intergroup warfare is actually fairly rare in the animal kingdom. Besides us, our chimpanzee cousins, wolves, and ants, there doesn't seem to be much of it. It seems clear that our history of violence has at least some support in our genetics, since it seems unlikely that our grim history would have unfolded as it has otherwise.
Peace-loving people, and that includes most of us, have an obligation to take our nature into account when trying to construct a world without war. We need to understand what it was that made the Maori fierce and their close cousins peaceful. Those who propound on economics need to realize - as they often fail to - that policies that ignore this central human propensity are pointless academic exercises.
Thursday, February 25, 2010
Socially Useful
Humans tend to organize themselves into a heterarchy of social units, which both compete and cooperate. Those entities compete with each other, but they also compete for our loyalty. Badges of prestige are a key tool in that competition as well. Individuals and societies place high value on these badges because they link them with their prospects for survival and reproduction.
When we as individuals calculate the usefulness of an activity, we start with our Darwinian selves. Our parents make a similar calculation. It's no secret that athletic accomplishment brings sexual opportunity - in high school as well as the NBA. Wilt Chamberlain claimed to have had sex with 20,000 women - take that, Tiger. All that sexual opportunity brings children, lots of them.
I think libertarians may find these dimensions of human nature a bit hard to appreciate. By rejecting the collective dimension of human nature, they render themselves blind to much of how the world works. I would be curious whether this tendency is innate rather than acquired - sort of another point on the autistic spectrum.
Wednesday, February 24, 2010
Annoying Ideas
Ideas can be annoying because they are wrong, trite, insulting, or just boring. Such are the mosquitoes of the idea world. There is a more interesting type of annoying idea though, and that’s the one that undermines some tenet (or even tenant!) of one’s world view but stubbornly resists easy refutation. These are the ideas worth struggling with, although unfortunately I have a lot of trouble getting along with those who propound them. A lot of these turn out to have a libertarian flavor, and Landsburg is a nearly ideal source.
Let me back up to a recent episode of SL which provoked the comment quoted above:
At first I thought that SL was being ironic here, but it seems not. The deep, and annoying, question is why a parent, for example, should place such a high value on some achievement like winning a gold medal but so little on providing a useful service, like driving a cab. More broadly, why should a society so arrange its values?
As in all such analyses, my instinct is to ask Mr. Darwin. For the parent, the answer is clear. Winning the gold medal (or hitting the little league home run, or being elected Prom Queen) is a big leg up in the battle for reproductive fitness – cab driver, not so much. Our society, whether family, town or nation, is modeled on the primitive band in which our ancestors evolved, so that badge of fitness is or seems like one for the band (society) as well as the parent. Something like 8% of the men across a large swath of Asia appear to be direct descendants of Genghis Khan and his close relatives. Our selfish genes don't give a damn about social usefulness.
So what’s wrong with the sort of economic reasoning that we see in L’s post? For me, it’s mostly just one thing – it doesn’t explain actual human behavior. One can imagine a sort of worker’s paradise in which people really thought that way but it wouldn’t be of this world. I think we can see Prof Landsburg's inner social democrat peering out here, wanting markets (and parents) to value the socially useful.
I don't mean by that that we are helpless prisoners of the Darwinian struggle, doomed to keep playing out this game to its grim Malthusian end, but I am sure that we can't pretend that these aspects of our nature don't exist and still come to any useful conclusions.
That's why a cynical liberal like me doesn't trust the markets, even though I recognize that they do some things very well and much better than any central planner could. And it's another reason that I suspect libertarians misjudge human nature.
Past and Future
The principle which permits prediction is causality, the notion that the past determines the future. In classical physics, it is held that if we knew the past in sufficient detail, and could do the arithmetic, we could compute the detailed future. In practice, of course, that could only be done in particularly simple physical situations, like the motion of planets around the Sun. More complex problems, like the motion of party guests around a punchbowl, were beyond our skill and knowledge. Quantum mechanics kicks a key causal leg out from under us. Even perfect knowledge and perfect computation are not enough to truly predict the future, since an irreducible probabilistic quantum uncertainty will remain.
A powerful kind of causality still exists in quantum mechanics however. The quantum state function evolves according to the Schrödinger equation, and that evolution is strictly deterministic. The irreducible quantum uncertainty only enters the picture when we actually try to measure that quantum state.
Semi-classical quantum gravity seems to offer a more serious challenge to predictivity, however. The problem occurs because matter entering a black hole carries information with it, but when a black hole evaporates, it seems that information can't come out with it. Wikipedia has what I consider an excellent discussion of the problem here:
The always useful John Baez has this:
More on proposed cures later.
Theological Politics
Tuesday, February 23, 2010
Conservation of Energy
The power of the idea stems from the fact that energy is conserved. Considerable refinement and redefinition was needed to conserve the notion of energy conservation, though. New forms of energy had to be named and noted: gravitational potential energy, thermal energy, chemical energy and electromagnetic energy. The accounting proved well worthwhile, of course, since the different forms of energy could be converted into each other, and energy represented the capability to do work.
Following the history of that energy allows us to unravel much of the workings of the world. Here on Earth, most energy comes from the Sun in the form of photons of electromagnetic energy, some of which is converted by photosynthesis into chemical energy stored in carbohydrates, which in turn gets converted into that ubiquitous gasoline of life, ATP, which powers muscle, brain, and the rest of metabolism.
It's the conservation of energy that proves the most important constraint on life and civilization. The World would be much different if we could build a perpetual motion machine of the first kind and get energy for free. That's why it's kind of shocking to the physicist to realize that there really does seem to be such a perpetuum mobile - the Universe, driven by dark energy. Sean Carroll has a brief account, but if you want more of the details you really want John Baez here. While Sean is pretty categorical, at least in his post title ("Energy is Not Conserved"), John has a bit more measured take:
Is Energy Conserved in General Relativity?
In special cases, yes. In general -- it depends on what you mean by "energy", and what you mean by "conserved".
Each explains why those two statements don't really contradict each other, but John has more details. Sean, though, explains more of the role of the dark energy.
Monday, February 22, 2010
Of course it's pretty silly for me to be defending Summers, but Landsburg irritates the hell out of me, and that was true even before he banned me for making fun of his brand of pseudo-economics (and criticizing his tendency to get confused while trying to explain relativity - a more forgivable sin in my book.)
Sunday, February 21, 2010
Area 53
Like the hunt for the Higgs, though, this quarry has so far remained hidden. All the investigators' passion and ingenuity has not yet proven capable of stopping that roulette wheel on cue, or popping the Ace-Jack out of the shoe in Area 53, also sometimes known as Las Vegas.
Wednesday, February 17, 2010
Economists in Denial
So why do I find it hard to believe that a professor at a prestigious school should get away with such egregious nonsense? I need to keep reminding myself that much of what passes for economics consists of "a smokescreen for justifying policies convenient to powerful economic interests."
Tuesday, February 16, 2010
Math Puzzle From Comments
Monday, February 15, 2010
Mathematics and Technique
You get the idea, or, if not, read at the link. He isn't being original here. Every decade or two, somewhat similarly motivated people discover that math isn't fun for most students because students aren't taught to be mathematicians. Sometimes these ideas turn into a movement, usually with the result that kids wind up doing things that their parents and teachers don't understand, kids find they can't solve math problems, and a counter-revolution produces a back to basics movement and we begin again.
I am not entirely unsympathetic to Mr. Lockhart's argument, but I am pretty unsympathetic. I am unsympathetic because I have seen the devastation wrought by previous similarly motivated "reforms" - new math, discovery math, etc. I also suspect that he has either forgotten or never knew how music is actually taught. I would also say that mathematicians should not and cannot be trusted to design a math curriculum for non-mathematicians. Not to be impolite, but a professional mathematician is a freak of nature - unnaturally inclined to live in a very narrow part of his/her head. He can't be trusted to guess what kids find interesting and can't be trusted to remember how he himself learned.
Put techniques for discovering the fun of mathematics in front of 100 school kids and 6 will say cool, and become interested, 34 will try to figure the thing to memorize to pass this new stupid class, and the remaining 60 will look blankly and uncomprehendingly on at yet another bit of tedium. For their teachers, the corresponding numbers will be 2, 10 and 88.
I also distrust this notion of mathematics as an art. Considered purely as an art, mathematics would rank in popular appeal somewhere between dog collar calligraphy and vegetable peeling collage - the audience would only include other mathematicians, and darn few of them. Mathematics is interesting to the wider world because it is useful, necessary and indispensable. We teach it in schools because so many people need to use it. That means that schools need to teach the kinds of skills and techniques that adults actually need to use. Nowhere in Mr. Lockhart's essay do I see any recognition of the way mathematics is used by 99.9% of the human race.
We should remember too, that much of art consists of learning technique. Musicians, or at any rate, most musicians, do have to learn how to read music, and all have to spend thousands of hours learning the technique of their instruments. If Mr. Lockhart wanted to argue that schools are still wasting far too much time on obsolete techniques of mathematics, he would find a ready listener in me.
Games, puzzles and the other fun activities he thinks should take the place of the current curriculum are fine, but is there any chance that any significant segment of the population would thereby learn enough mathematics to do their taxes, much less calculate statistics, solve differential equations or learn the much more difficult techniques needed to actually do mathematics professionally? I doubt it.
I have been a frequent critic of the math curriculum myself, but I regret to say that I see almost nothing valuable in Lockhart's suggestions. Anybody care to venture a contrary opinion?
Sunday, February 14, 2010
Please Shut Up Joe Biden
It was Bush-Cheney that ignored clear and repeated warnings, allowing the worst attack on the US in history. It was Bush-Cheney who allowed bin Laden to escape (and flew his relatives out of the US on a special plane after 9/11, before any Americans were allowed to fly), it was Bush-Cheney who got us into the Iraq war and got tens of thousands of Americans killed and wounded in a wild goose chase for WMDs that didn't exist, and it was Bush-Cheney who turned surpluses into deficits and wrecked the economy. What imaginable reason is there not to attack their record at every opportunity?
Saturday, February 13, 2010
Another Point The Tenure Committee Might Want to Consider
Some people take bad news poorly.
Friday, February 12, 2010
That's Entertainment
Thursday, February 11, 2010
Another Relativistic Train
UPDATE continued: The thing is, Fred sees only an electrostatic force because he doesn't bother to measure the magnetic field. A more honest way to express it is to say that part of the field we see as purely magnetic from the stationary frame appears to be an electrical field in Fred's frame.
The details are explained at the freshman physics level in Chapter 5 of Ed Purcell's "Electricity and Magnetism."
Let me use SL's example as a jumping off point for a slight puzzle. Imagine that the wire we are seeing is arranged in a circle like the train we considered once before. The numbers of electrons and protons in the wire are equal, making it electrically neutral. When the electrons are set in motion, Fred decides to amble along with them. He doesn't need to go very fast, since the average drift velocity of electrons in a wire is much less than the speed of light - less, in fact, than the speed of a snail - a few centimeters per hour. Now George, our stationary observer, measures the average distance between electrons in his frame, and it's still the same as it was at rest; call it l. It ought to be, since we have the same number of electrons and they are still in the same wire. (Electrons, unlike opposite ends of your typical rail car, don't mind being pushed a bit further apart.)
Fred has carried his own meter stick with him, and he decides to check some distances in his frame (co-moving with the local electrons). He finds that the nearby electrons have been pushed apart by the relativistic factor gamma = 1/sqrt(1-(v/c)^2). I will call it g. The protons, as seen from his frame, are closer together by the factor 1/g. The circle of wire, firmly attached to the protons, has similarly shrunk. So here is our first puzzle: The electrons are farther apart on a track that is smaller - so how do they all fit?
If Fred is carrying a charge, he will see a force due to the fact that the local protons are more densely concentrated than their negative counterparts by a factor that amounts to (v/c)^2 x charge concentration of carriers. The resulting force scales like charge x current x v/c as one would expect from the Lorentz Force law and Biot-Savart law of magnetism.
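For a sense of scale, here is a minimal Python sketch of the numbers involved; the 5 cm/hour drift speed is an illustrative value consistent with the "few centimeters per hour" quoted above. The length-contraction factor differs from 1 only at the 10^-27 level, which is why the force Fred sees is suppressed by (v/c)^2 relative to the force between the bare charge densities.

```python
# Rough numerical sketch of how small the relativistic effects in the wire are.
# The drift speed below (5 cm per hour) is an illustrative value.
v = 0.05 / 3600.0        # drift speed in m/s
c = 299_792_458.0        # speed of light in m/s

beta2 = (v / c) ** 2
# gamma = 1/sqrt(1 - beta^2) is so close to 1 that double precision returns exactly 1.0,
# so use the leading term of the expansion: gamma - 1 ~ beta^2 / 2.
gamma_minus_1 = beta2 / 2
print(f"(v/c)^2   = {beta2:.3e}")          # about 2e-27
print(f"gamma - 1 = {gamma_minus_1:.3e}")  # about 1e-27
```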
About that puzzle...
Wednesday, February 10, 2010
Voyages of Discovery
Einstein had a different opinion. He was unimpressed by those he saw as seeking to "drill through the thinnest part of the wood." Feynman too advised young physicists not to pay much attention to the fashionable trends in physics, but to follow their own ideas.
Dr. Woit expressed some disdain for physicists interested in vague entropic forces where "everything could be done with high school mathematics." I would recommend that he look at the key papers of Einstein on SR and the photoelectric effect, Bohr on the Hydrogen atom, and Bekenstein on black hole entropy. The mathematics is elementary but the implications are profound.
Woit and Motl each have a recent post decrying the kind of crack-pottery they see overtaking physics. Now physics has always attracted crackpots by the bushel and the peck, but Peter and Lubos aren't referring to the kind of garden variety crackpot who shows up on John Baez's index or 't Hooft's "bad theoretical physicist" standard. In particular, they have in mind guys like Erik Verlinde, though Motl has been known to add 't Hooft and Penrose to his list.
Your garden variety crackpot is a guy who flunked calculus but knows that his penetrating intuition grasps the universe as it really is. The Woit-Motl crackpots, by contrast, are guys who are, well, somewhat more accomplished in physics than Woit or Motl.
We can agree, of course, that guys like Verlinde (and Penrose and 't Hooft and of course Columbus) are crazy. The question is, to borrow a construction of Bohr's, are they crazy enough to be right? If I can push my analogy a bit further, here is a potential clue. Columbus and the Polynesian argonauts of the Pacific were superb sailors who had mastered all the technology available to them. Most of those crazy guys who set out into the unknown will never be heard from again, but maybe we should salute those who just might be crazy enough to be right.
Tuesday, February 09, 2010
Fixing Congress
Monday, February 08, 2010
Snow Days
One of the perks of being a government worker in Washington DC is the occasional day off for snow, inauguration, or other special event. The great Snowmaggedon event of 2010 has already gotten most Federal employees two and one half days off, and one can't be confident that the place will open up Wednesday either. Most employers can't afford to be so generous, though, so the days off for the Feds provoke considerable resentment. The rationale for giving the nonessential government workers the day off is that in a one industry town, having all of them scrambling to get to work through impassable streets with the Metro only partially functional would keep anybody from getting anywhere safely.
Sunday, February 07, 2010
I Beg To Differ
The string theorist Leonard Susskind has proposed something he calls black hole complementarity to resolve certain problems in quantum gravity. Consider two narrative versions of what happens to a spaceship falling into a very large black hole. If we apply the laws of general relativity and quantum mechanics (in the semi-classical approximation) to the spaceship, we may calculate that the passengers perceive falling through the horizon as uneventful. The outside observer, on the other hand, sees something much different - as the spaceship approaches the event horizon, it encounters a hellish blast of Hawking radiation which tears the ship to pieces before it reaches that horizon. So which narrative should we believe? Well, there is one narrative for those on the ship, and one for those on the outside, and never shall the twain meet, for their world lines have become causally disconnected.
In one sense, this is just another, slightly more pointed version of the famous double slit experiment.
[One reason this post looks half baked is because it is. It was a partial draft that got published by mistake. I will try to finish it when I get time.]
Saturday, February 06, 2010
So You Want to be a Genius?
Peyton Manning has narrowly focussed his life on football, seemingly from his earliest years. His fanatical devotion to study and practice is legendary, and so is his performance in the pinch. His signature accomplishment is the second half comeback. Teams throw their best and most complicated stuff at him in the first half, but by the second, he has figured it out and picks them apart. There are quarterbacks with more impressive records but they mostly seem to have played for teams so good that no comeback was required - the other team got blown out in the first half.
Still, if Manning had been 5' 6" and slow of foot rather than 6' 5" and quick, his genius likely would have confined itself to the high school arena. The element of talent is still crucial. Besides his physical skills, though, he needed his quick mind but above all his power of concentration and focus. The last is probably the most fundamental element of genius - patience carried to an extreme.
Friday, February 05, 2010
The Dysfunctional Senate
Other countries manage to work pretty well with much smaller numbers of appointees. Senior civil servants should occupy most of those three thousand jobs. The Cabinet, a couple of dozen key aides, and the president's own staff (who don't require confirmation) should be more than enough. Presidents should be expected to submit their nominees on their second day of office, and the appointees should be guaranteed an up-or-down vote by April.
A Child's Garden ...
First there is the feverish tone:
This may call up images of Mao marching hapless students out to the harvest, but the school gardens I have seen are typically about the same size as a suburban vegetable garden. Divide that by a few hundred or thousand students and each student winds up responsible for a couple of flower pots worth of garden. Is the garden curriculum dominating the school? Maybe, but Flanagan's story lacks the telling detail and specificity that would convince me that she has actually seen one of these in use.
Flanagan's writing style - call it Mo Dowd light - is heavy on the cranky sneer, light on relevant facts, and suffused with vaguely anti-feminist hostility. Here she is on Alice Waters, founding mother of the school garden movement:
Now California public schools do indeed suck, but there are a lot of other plausible reasons, starting with a vastly needy student population, heavy on non-native speakers of English, and decades of declining budgets driven by old Proposition 13. Unmentioned by Flanagan is the continuing mischief wrought by No Child Left Behind, which has turned schools nationally toward an almost total focus on tests.
On the few occasions where she mentions actual evidence, it's hardly germane. Her extensive perusal of the literature on school gardening has not revealed that it improved algebra scores. Imagine that. One charter school with a commitment to rigorous standards puts up much better scores than a long-time garden school.
Her writing is perfect fodder for that sort of conservative who sees every idea, good or bad, as a liberal plot. Unsurprisingly, Steve Landsburg thought the article was great. Four years ago, Ann Hulbert offered a more dyspeptic take on Flanagan and her works.
Thursday, February 04, 2010
Climate MSU
Is Microsoft The New GM?
I am skeptical. Exactly when did MS ever innovate? The earlier MS showed a certain talent for adopting and gaining control of new technologies developed elsewhere, but I can't recall a single major MS product that was homegrown. Amazon had a revolutionary idea. Google had a revolutionary product. Apple has both innovative products and a relentlessly perfectionist esthetic utterly at odds with Microsoft's "everything including the kitchen sink" philosophy.
Brass is right, though, in thinking that it would be bad to revel in Microsoft's troubles. He has his own diagnosis for why an army of the world's best and brightest have largely failed to bring great new products to the market - entrenched corporate politics, as well as some old scores to settle.
Is it a leadership problem? Or inevitable senescence?
Wednesday, February 03, 2010
Bourgeois Physics
Clifford has a friend who thinks physicists are weird,
…like Einstein, with crazy hair…
So You Want to be a Bank Director
Tuesday, February 02, 2010
David Brooks Has A Message for Old People
Or maybe it's just "So Long Suckers."
For four decades the Republican Party and the Club for Growth and all the other right-wing stink tanks have labored to bankrupt the country to the point where Social Security and Medicare will be destroyed. They have nearly succeeded.
Monday, February 01, 2010
Cheesecake ensues. |
cbaec10b0f4f0d6f | A8: ATOMS AND . . .
COROLLARY THEOREMS: "sooner or later, any valid theorem is going to be invalidated."
" . . . because we all know the English grammar is of German origin . . ."
The above words were written a few days ago (in 2005) by some N. American writer, somewhere on the Internet. The exact place where the incident happened is not important; what is important is that many (or all) of us "know that"! The truth is, yes, you could say English grammar is of German origin, provided that you consider that the German language itself has about 70% of its grammatical structure founded on vigorous Latin roots. For clarifications on this topic, and on very many others, we suggest Logically Structured English Grammar 4.
This is an unbelievable situation. Our modern Civilization was founded on such an extraordinarily advanced culture, the Hellenic (Latin) one, yet very many people today do not like this aspect--particularly in the English countries. The German migratory people were just hordes of primitive creatures (same as the French, the Helvets, the Belgs, the Scandinavians, the Huns, and all others) when they came into Europe. We are extremely fortunate that there was a Roman Empire during those dire early days to spread the Latin culture, otherwise we would continue today selling/killing slaves and raiding territories through "fire and sword". This "English xenophobia" is a particularly interesting social-psychology aspect, therefore it is going to be dissected thoroughly in one of our future Amazing Articles.
The lack of basic, general culture these days is stunning! Just an example: Adolf Hitler--the most ardent supporter of "German race superiority"--was very proud of, and greatly impressed with, the Latin roots the German people had from the extraordinary Roman Empire. His famous salute "from my heart to the sky" was copied from the Roman Imperial Guards' salute: "Heil Hitler" is identical to "Ave Caesar" in meaning and in gesture. Hitler's hottest dream was to build a one-thousand-year German Empire, just like the Roman one--read our Article 38.
Anyway, one hundred years ago people knew a lot more about the unmatched greatness of the Roman Empire than they do today. Our modern (national) societies, with their specific local customs, Laws, governmental structures, Democracy, science, literature, entertainment, and . . . whatever, are all tailored according to the Roman model (and, implicitly, to the Hellenic one). Modern education, you say? Well!
[Fragment from "Global Picture in News" December 1, 2006. © Corollary Theorems Ltd.]
Our Atomic Universe is built, naturally and obviously, out of atomic elements. Most of the time, atomic elements group themselves into complex substances: molecules. We have named 118 atomic elements up to now--or something close--but there is room left for many more elements in the Periodic Table of Elements. [The names "element" and "atom" mean exactly the same thing in certain contexts.] At the same time, we know and work with about two hundred thousand complex substances--this is just a rough approximation. [The names "substance" and "molecule" mean exactly the same thing in certain contexts.]
We say that we "know" and work with atomic elements and molecules, but things are far from clear to us regarding what they really are. For example, many scientists know that billions of dollars have been spent on water molecule research during the last 50 years, all over the World. Despite very small gains in knowledge about what this water molecule really is, it appears that lately both the funding and the research efforts have increased a lot!
So, after 50 years of intense research we do not know too much about the pure, basic H2O (hydro). Well, the plain truth is, the simple, fundamental water molecule is a true mystery to our science. Now, if you add to that mystery what is suggested in Peasant Ambassador, then the little water molecule is going to lead us straight to natural antigravity!
Sure, the water molecule is a substance, built out of two elements, therefore it could be too complex for our gentle, sweet mechanical science. Let's take a look at only one atomic element; for example, the very first and the most simple one: hydrogen. Many researchers today are very proud to announce that they have solved the Schrödinger equation for the hydrogen atom. However, the solved equation mentioned helps exactly nobody and nothing. The Schrödinger equation is part of the Quantum Mechanics mathematical model of the hydrogen atom, and solving it is just another mathematical exercise. That equation tells us precisely nothing about the real hydrogen atom.
It was mentioned in A7 that we do need mathematical models, and on certain atomic/subatomic levels we cannot have better instruments to continue with our research. That is perfectly true, only we have to clarify the concept a little bit. All mathematical models are just theoretical models; they are not the "real thing". This affirmation is valid today, tomorrow, one thousand years from this time, and even one million years from now on.
The mathematical models of the atomic elements we use today, say the Quantum Mechanics Theory, are developed more like a collection of mathematical exercises which are excellent for mathematicians, but they have little or nothing to do with the reality under study: the atom. The Quantum Mechanics Theory [QMT for short] is just a bunch of equations which are solved based on a few assumptions: for example, that "c", the speed of light, is the maximum possible speed in the Universe. In fact QMT is such a preposterous absurdity that our Universe--and we--shouldn't exist, according to its "equations"!
Because very few people know how vague, utopian and fictional QMT really is, huge amounts of social-money are pumped into building enormous installations for breaking and fusing the atoms. With little effort, the atom may be broken "free of charge" using the microwave oven in the kitchen. Those monstrous particle accelerators are used, mainly, to measure the energies of the atomic particles--well, let's take a look at those energies a little bit, this time without a multi-billion-dollar particle accelerator: we are going to use only our brains!
In interstellar space the temperature is very low: about 1.3 kelvin, and that is as far as we can reach in nature towards the absolute 0 K temperature. Suppose that one oxygen atom is part of a molecule [SiO2--this is sand--for example] inside a rock, somewhere in the open space. Because it is so cold, there is no energy coming into the oxygen atom to sustain the movement of its electrons--eight--around the nucleus. Any physics book tells us that at very low temperatures the atom "keeps a small quantity of energy" which allows the electrons to continue spinning around the nucleus. That explanation sounds reasonable, but only for the untrained mind.
Now, try to imagine that our piece of rock is lost in the open space for about 6 billion years--this is a reality. Next, try to calculate how much energy the electrons of one oxygen atom consume during that enormous period of time--it should be way more than 1 kW per oxygen atom. Of course, a decent piece of rock has about one billion oxygen atoms, and that means our little piece of rock (the size of a pebble) "consumes" 1 terawatt of power out of nowhere! At the scale of the entire matter existing in open space in the Universe the numbers defy imagination, regardless of how wild it could be--and this is just for our Universe to exist, not to evolve in any way or form.
The problem is, where is that enormous amount of energy coming from? Our sweet Quantum Mechanics Theory is dead silent about that but . . . You should read MERCY, dear friends.
In the book following MERCY, LATHAN-KHON-KOP, you could read about the structure of the electron--among very many other interesting topics. We are certain you cannot believe this, but our tiny electron named "particle" in QMT is not a particle at all. In fact, the little electron hides so many secrets that we will not be able to discover them all, even after 10,000 years of continuous, ascending development. As for the true nature and the structure of the protons, neutrons and . . . Aaah!
The Cold Fusion experiments using the Ball Lightning state of matter--which we have arbitrarily named the "electrons cloud"--show that the electrons are able to do a lot more than just holding an electrical charge. Amazingly, at the atom-level the electrons act like a shield which prevents the photons from reaching the nucleus. That is an interesting observation, and it is similar to the "plasma and laser shield" described in MERCY, Hurran Invasion, and Peasant Ambassador.
Well, we could go on and on with arguments, though we do not want to spoil your surprise when reading our books. Now, let's direct our attention towards the psychological aspects of the scientific research activity of our days--you could also find some details about this one in MERCY. Things are this way: most scientists today behave similarly to religious fanatics. That is, they "trust", and they "believe" in their "books", and they do not want to hear any heresies. Now, you shouldn't worry too much about that absurd, irrational manifestation, because that is, in fact, natural human behavior. However, we intend to dissect this social-psychology subject sometime, because it is very important.
You see, we could stuff these Internet pages with new incredible theories and strange phenomena about the atom, and beyond the atom, but this will help exactly nobody and nothing. Regardless of how advanced technologically we may become one day, for us, the Homo Sapiens species, the only thing important is to develop true, good human qualities: honesty, respect, love, trust . . . The truth is, too advanced technologies could easily lead us into total destruction, given our existing social-psychology state of mental development.
Any scientist, or future scientist, MUST BE first of all a compassionate and sympathetic person; in other words, an intellectual. That is far more important than understanding the Certitude Factors and the Fundamental Imbalance Principle that drives the Subatomic, for example. In fact, becoming intelligent people with true, good, human feelings is the great goal of our entire Human Species. If we manage to reach our goal, then we would be equal to any Civilization, anywhere, regardless of our--or their--technological level of development.
First published on May 27, 2005
© SC COMPLEMENT CONTROL SRL. All rights reserved.
Send your comments regarding this page using support@corollarytheorems.com
Page last updated on: December 25, 2016
© SC Complement Control SRL. All rights reserved. |
ef9a1b7ee56cfb0d |
I would like to know what quantization is. I mean, I would like some elementary examples, a soft nontechnical definition, and some explanation of what mathematicians quantize. Can we quantize a function? A set? A theorem? A definition? A theory?
Ugh, can someone rewrite this question? – Scott Morrison Nov 20 '09 at 2:42
I fear that the OP might be misinterpreting the meaning of the word "theory" in QFT. – José Figueroa-O'Farrill Nov 20 '09 at 17:46
I rewrote the question. – Kristal Cantwell May 26 '13 at 16:49
11 Answers
As I'm sure you'll see from the many answers you'll get, there are lots of notions of "quantization". Here's another perspective.
Recall the primary motivation of, say, algebraic geometry: a geometric space is determined by its algebra of functions. Well, actually, this isn't quite true --- a complex manifold, for example, tends to have very few global holomorphic functions (any bounded entire function on $\mathbb{C}$ is constant, and so there are no nonconstant holomorphic functions on a complex torus, say), so in algebraic geometry, they use "sheaves", which are a way of talking about local functions. In real geometry, though (e.g. topology, or differential geometry), there are partitions of unity, and it is more-or-less true that a space is determined by its algebra of globally defined functions. Some examples: two smooth manifolds are diffeomorphic if and only if the algebras of smooth real-valued functions on them are isomorphic. Two locally compact Hausdorff spaces are homeomorphic if and only if their algebras of continuous real-valued functions that vanish at infinity (i.e. for any epsilon there is a compact set outside of which the function is less than epsilon in absolute value) are isomorphic.
(From a physics point of view, it should be taken as a definition of "space" that it depends only on its algebra of functions. Said functions are the possible "observables" or "measurements" --- if you can't measure the difference between two systems, you have no right to treat them as different.)
So anyway, it can be useful to recast geometric ideas into algebraic language. Algebra is somehow more "finite" or "computable" than geometry.
But not every algebra arises as the algebra of functions on a geometric space. In particular, by definition the multiplication in the algebra is "pointwise multiplication", which is necessarily commutative (the functions are valued in R or C, usually).
So from this point of view, "quantum mathematics" is when you try to take geometric facts, written algebraically, and interpret them in a noncommutative algebra. For example, a space is locally compact Hausdorff iff its algebra of continuous functions is a commutative c-star algebra, and any commutative c-star algebra is the algebra of continuous functions on some space (in fact, on its spectrum). So a "quantum locally compact Hausdorff space" is a non-commutative c-star algebra. Similarly, a "quantum algebraic space" is a non-commutative polynomial algebra.
Anyway, I've explained "quantum", but not "quantization". That's because so far there's just geometry ("kinetics"), and no physics ("dynamics").
Well, a noncommutative algebra has, along with addition and multiplication, an important operation called the "commutator", defined by $[a,b]=ab-ba$. Noncommutativity says precisely that this operation is nontrivial. Let's pick a distinguished function H, and consider the operation $[H,-]$. This is necessarily a differential operator on the algebra, in the sense that it is linear and satisfies the Leibniz product rule. If the algebra were commutative, then differential operators would be the same as vector fields on the corresponding geometric space, and thus are the same as differential equations on the space. In fact, that's still true for noncommutative algebras: we define the "time evolution" by saying that for any function (=algebra element) f, it changes in time with differential [H,f]. (Using this rule on coordinate functions defines the geometric differential equation; in noncommutative land, there does not exist a complete set of coordinate functions, as any set of coordinate functions would define a commutative algebra.)
Ok, so it might happen that for the functions you care about, $[a,b]$ is very small. To make this mathematically precise, let's say that (for the subalgebra of functions that do not have very large values) there is some central algebra element $\hbar$, such that $[a,b]$ is always divisible by $\hbar$. Let $A$ be the algebra, and consider the quotient $A/\hbar A$. If $\hbar$ is supposed to be a "very small number", then taking this quotient should only throw away fine-grained information, but some sort of "classical" geometry should still survive (notice that since $[a,b]$ is divisible by $\hbar$, it goes to $0$ in the quotient, so the quotient is commutative and corresponds to a classical geometric space). We can make this precise by demanding that there is a vector-space lift $(A/\hbar A) \to A$, and that $A$ is generated by the image of this lift along with the element $\hbar$.
Anyway, so with this whole set up, the quotient $A/\hbar A$ actually has a little more structure than just being a commutative algebra. In particular, since $[a,b]$ is divisible by $\hbar$, let's consider the element $\{a,b\} = \hbar^{-1} [a,b]$. (Let's suppose that $\hbar$ is not a zero-divisor, so that this element is well-defined.) Probably, $\{a,b\}$ is not small, because we have divided a small thing by a small thing, so that it does have a nonzero image in the quotient.
This defines on the quotient the structure of a Poisson algebra. In particular, you can check that $\{H,-\}$ is a differential operator for any (distinguished) element $H$, and so still defines a "mechanics", now on a classical space.
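As a toy illustration of the divisibility claim, here is a minimal sympy sketch; the algebra, generated by multiplication by $x$ and the operator $\hbar\, d/dx$ acting on functions of $x$, is a simplified stand-in for the general $A$ above. The commutator of the two generators is $-\hbar$ times the identity, so it dies in the quotient $A/\hbar A$, while $\hbar^{-1}[a,b]$ survives and plays the role of the Poisson bracket.

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

# Two generators of a toy noncommutative algebra acting on functions of x:
X = lambda g: x * g                   # multiplication by x
D = lambda g: hbar * sp.diff(g, x)    # hbar * d/dx

comm = sp.simplify(X(D(f)) - D(X(f)))
print(comm)                        # -hbar*f(x): the commutator is divisible by hbar
print(sp.simplify(comm / hbar))    # -f(x): the bracket that survives the quotient
```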
Then quantization is the process of reversing the above quotient. In particular, lots of spaces that we care about come with canonical Poisson structures. For example, for any manifold, the algebra of functions on its cotangent bundle has a Poisson bracket. "Quantizing a manifold" normally means finding a noncommutative algebra so that some quotient (like the one above) gives the original algebra of functions on the cotangent bundle. The standard way to do this is to use Hilbert spaces and bounded operators, as I think another answerer described.
Concerning "...a space is locally compact Hausdorff iff its algebra of continuous functions is commutative c-star algebra": Unless I misunderstand the statement, it is not true: For any topological space $X$, the bounded functions $X\to \mathbb C$ form a commutative $C^*$-algebra. – Rasmus Bentmann Nov 11 '11 at 21:52
@Rasmus: hrm, it's now been a while since c-star-algebra class, and it's not my area. But my understanding is the following. First, when I say "algebra of functions", I never mean the algebra of bounded functions. In the real world, I usually want "all" functions, but when I am working c-star-algebraically, I mean "function that's less than $\epsilon$ outside a compact". Given $X$, the algebra of bounded functions is the algebra of functions on the Stone-Cech completion $\beta X$ of $X$, and it's not surprising for $\beta X$ to have better properties than $X$. – Theo Johnson-Freyd Nov 12 '11 at 6:41
But of course you're right, there's something wrong with the statement, because any indiscrete space has only the constant functions, which clearly form a commutative c-star algebra. Probably I should have added the word "Hausdorff" somewhere --- there's no chance of recovering non-Hausdorff structure from continuous $\mathbb C$-valued functions. – Theo Johnson-Freyd Nov 12 '11 at 6:43
I don't know what it means for a mathematician to quantize something, but I can give you a rough description, and a few specific examples, from a physicist's point of view.
Motivational fluff
When quantum mechanics was first discovered, people tended to think of it as a modified version of classical mechanics [1]. In those days, very few quantum systems were known, so people would create quantum systems by "quantizing" classical ones. To quantize a classical system is to come up with a quantum system that "behaves similarly" in some sense. For example, you generally want there to be an intuitive correspondence between the observables of a classical system and the observables of its quantization, and you generally want the expectation values of the quantized observables to obey the same equations of motion as their classical counterparts.
Because the goal of quantization is to find a quantum system that's "analogous" in some way to a given classical system, it's not a mathematically well-defined procedure, and there's no unique way of doing it. How you attempt to quantize a system, and how you decide whether or not you've succeeded, depends entirely on your motivation and goals.
The harder stuff
I've been using the phrase "quantum system" a lot---what do I really mean? In my opinion, one of the best ways to find out is to read Section 16.5 of Probability via Expectation, by Peter Whittle.
Roughly speaking, a quantum system has two basic parts:
• A complex inner product space $H$, called the state space [2]. Each ray of $H$ represents a possible "pure state" of the system. A pure state is somewhat analogous to a probability distribution, in that it tells you how to assign expectation values to "observables"; in particular, it tells you how to assign probabilities to propositions.
• A collection of self-adjoint linear maps from $H$ to itself, called observables. An observable is somewhat analogous to a random variable; it represents a property of the system that can be measured and found to have a certain value. The values that an observable can take are given by its eigenvalues (or, in the infinite-dimensional case, its spectrum). Say $A$ is an observable, $a$ is an eigenvalue of $A$, and $v_1, \ldots, v_n \in H$ form an orthonormal basis for the eigenspace of $a$. If the state of the system is the ray generated by the unit vector $\psi \in H$, the probability that the observable $A$ will be found to have the value $a$ is $|\langle v_1, \psi \rangle|^2 + \ldots + |\langle v_n, \psi \rangle|^2$, where $\langle \cdot, \cdot \rangle$ is the inner product. You can then easily show that the expectation value of the observable $A$ is $\langle \psi, A \psi \rangle$. Observables whose only eigenvalues are $1$ and $0$—that is, projection operators on $H$—play a special role, because they correspond to logical propositions about the system. The expectation value of a projection operator is just the probability of the proposition. (A small numerical sketch of these probabilities appears after this list.)
Most interesting quantum systems have another part, which is often very important:
• A set of unitary maps from $H$ to itself, which might be called transformations. These represent "automorphisms" of the system. In physics, many quantum systems have a one-parameter group of transformations, often denoted $U(t)$, that represent time evolution; the idea is that if the state of the system is currently (the ray generated by) $\psi$, the state will be $U(t)\psi$ after $t$ units of time have passed. Physical systems often have other transformation groups as well; for example, a quantum system that's supposed to have a "spatial orientation" will generally have a group of transformations that form a representation of $SO(3)$.
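Here is the small numerical sketch mentioned above, for the simplest possible case $H = \mathbb{C}^2$; the state vector and the observable are arbitrary illustrative choices.

```python
import numpy as np

# Toy quantum system: a single qubit (H = C^2). The numbers below are illustrative.
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)          # a unit vector generating the state (ray)

A = np.array([[1, 0], [0, -1]], dtype=complex)   # observable with eigenvalues +1 and -1

evals, evecs = np.linalg.eigh(A)
for a, v in zip(evals, evecs.T):
    p = abs(np.vdot(v, psi)) ** 2        # Born probability of measuring eigenvalue a
    print(f"P(A = {a:+.0f}) = {p:.2f}")

# Expectation value <psi, A psi> agrees with the probability-weighted sum of eigenvalues
print("E[A] =", np.vdot(psi, A @ psi).real)
```

The printed probabilities sum to 1, and the expectation value agrees with the weighted sum of the eigenvalues.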
A few examples
• Quantum random walks are, as the name suggests, quantized random walks. More generally, you can quantize the idea of a Markov chain. For a great introduction, see the paper "Quantum walks and their algorithmic applications", by Andris Ambainis.
• In Sections 2 and 3 of the notes "A Short Introduction to Noncommutative Geometry", Peter Bongaarts describes quantized versions of compact topological spaces and classical mechanical systems.
• In Section 4 of the book Noncommutative Geometry (caution---big PDF), Alain Connes introduces a quantized version of calculus. Here, the observables representing complex variables are non-self-adjoint because complex variables can take on complex values. An observable representing a complex variable must therefore be allowed to have complex eigenvalues.
I hope this helps!
[1] Today, in contrast, most physicists think of classical mechanics as an approximation to quantum mechanics.
[2] If $H$ is infinite-dimensional, it's typically a separable Hilbert space. You may even need $H$ to be something fancier, like a rigged Hilbert space.
Just to restate some facts already stated in other answers, quantization can mean a few different things. In deformation quantization, we start with a classical theory given by a Poisson manifold. Then (by definition) the algebra of functions forms a Poisson algebra. A quantization of this algebra is a noncommutative algebra with operators $X_f$ for $f$ a function. There is also a formal parameter $\hbar$. This algebra satisfies $$ X_f\ X_g = X_{fg} + \mathcal{O}(\hbar)\ . $$
The idea of quantization is that the Poisson bracket becomes a commutator, or
$$ [X_f,X_g] = \hbar X_{\lbrace f,g \rbrace} + \mathcal{O}(\hbar^2)\ . $$
Thus, we have a noncommutative version of classical mechanics. The existence of such an algebra is a theorem of Kontsevich (the case of a symplectic manifold was solved much earlier, but I forget by whom).
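To see the two displayed relations in the smallest possible setting, here is a sympy sketch using the Moyal star product truncated at first order. Treating $\hbar$ as a real formal parameter (with the usual factor of $i$ absorbed into it) is a simplifying assumption of the sketch, not part of the general statement.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def star(f, g):
    # Moyal star product truncated at first order in the formal parameter hbar
    return sp.expand(f * g + (hbar / 2) * poisson(f, g))

f, g = x**2 * p, x + p**3          # two sample "classical observables"
bracket = sp.expand(star(f, g) - star(g, f))
print(bracket)                                        # hbar * {f, g}
print(sp.simplify(bracket - hbar * poisson(f, g)))    # 0, to this order in hbar
```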
In mathematics, there are plenty of interesting analogous situations where you have a noncommutative thingie which is, in some sense, a formal deformation of a commutative thingie. You can see the other direction of the above as an example of the following general fact. Given a filtered algebra whose associated graded is commutative, there is a natural Poisson structure on the associated graded.
In physics, however, it's not enough to just deform the algebra of functions; we have to now represent things on a Hilbert space. This introduces a whole host of other problems. In geometric quantization, this is split into two steps. Let's say we have a symplectic manifold whose symplectic form is integral. Then we can construct a line bundle with connection whose curvature is that symplectic form. The Hilbert space is the space of $L^2$ sections of this bundle. This is much too large, however, so you have to cut it down (which is step 2). In various cases, well-defined procedures exist, but I don't believe this is well-understood in general. For example, I'm not sure it's possible to represent every function as an operator.
It's probably worth pointing out that, from the point of view of physics, quantization is backwards. It is the quantum theory that is fundamental, and the classical theory should arise as some limit of the quantum theory. There's some interesting mathematics there, and also a whole lot of philosophy too.
I believe that the symplectic case was solved independently by De Wilde-Lecomte, Omori-Maeda-Yoshioka and Fedosov. – José Figueroa-O'Farrill Nov 20 '09 at 17:43
The word has many meanings in mathematics, most of them quite vague.
One general way of describing what quantization is for a mathematician is the following: you have your favorite object $X$, and you find that there is a family of other objects $X_q$ parametrized by a parameter $q$ which varies in some set (or is only a 'formal parameter', in the way that the variable in a polynomial ring is 'formally' an element in an over-ring of the coefficient ring). If, for a special value $q_0$ of the parameter $q$ (or, in the 'formal' case, when the parameter degenerates in some specific way), you have that $X_{q_0}$ is your original favorite $X$, and if the objects $X_q$ are in some sense (more) non-commutative than $X$, one says that the family $X_q$ is a quantization of $X$.
Very vague, I know. And this is only interesting if both your $X$ is interesting, if the $X_q$ themselves are interesting, and if there is some connection between the two.
For example, integer numbers are undeniably interesting objects, and they have a 'quantization', given by the usual quantum integers (one of a couple of versions), where this is very visible.
The thing is, usually, starting from some interesting $X$, there are really not very many ways in which you can do this. For example, if you start with an enveloping algebra of a simple Lie algebra over $\mathbb C$, then there is just one way to do this (up to the appropriate way of ignoring that there are really many ways to do this).
I think you mean quantization is some kind of deformation theory. – Allen Sep 28 '12 at 11:23
There are some good long answers already, so I'm going to try to give as short an answer as possible.
A quantization of $X$ is some $X_\hbar$ depending on a parameter $\hbar$ (occasionally $q=e^\hbar$ instead) such that $X=X_0$ and $X_\hbar$ is generically "less commutative" than $X$. This is by analogy with quantum physics where $X_0$ is classical physics and $\hbar$ measures the failure of position and momentum to commute.
In mathematics, quantization often refers to some kind of deformation of a classical object. The Heisenberg Uncertainty Principle says that the position and momentum operators do not commute. In fact, $[X,P]=i\hbar$. In the limit as $\hbar\to 0$, these operators commute once again. Technically speaking, this is nonsense as $\hbar$ is a universal constant, but in mathematics, we are free to play with parameters. A couple of examples include:
• the noncommutative torus, the universal $C^\ast$-algebra generated by two unitaries satisfying $uv=e^{i\theta} vu$. As $\theta\to 0$, we get $C(\mathbb{T}^2)$, the continuous functions on the $2$-torus. We usually think of the deformed algebra as a quantization of the commutative one (a small finite-dimensional illustration appears after this list).
• some quantum groups are deformations of universal enveloping algebras, i.e., we get the universal enveloping algebra as $q\to 1$.
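For the finite-dimensional illustration promised above: at the rational angle $\theta = 2\pi/n$ the relation $uv = e^{i\theta}vu$ is realized by the $n \times n$ clock and shift matrices, a standard toy model of the noncommutative torus (the choice $n = 5$ below is arbitrary).

```python
import numpy as np

# Clock and shift matrices satisfying U V = exp(i*theta) V U with theta = 2*pi/n.
n = 5
omega = np.exp(2j * np.pi / n)

U = np.diag(omega ** np.arange(n))        # clock matrix
V = np.roll(np.eye(n), 1, axis=0)         # shift matrix (cyclic permutation)

print(np.allclose(U @ V, omega * (V @ U)))                     # True: U V = omega V U
print(np.allclose(U.conj().T @ U, np.eye(n)),
      np.allclose(V.conj().T @ V, np.eye(n)))                  # both unitary
```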
As a physicist who has taken a bunch of Quantum Mechanics and Solid State physics, when we say "quantize your system" it means:
You set up your classical Lagrangian $L$ (in terms of kinetic $K$ and potential $U$ energy), given generalized coordinates $q_i,p_i$ (usually position and momentum, but could also be angles and angular velocity). You then take the Hamiltonian $H$ of that system, which in most cases becomes $H=K+U$. This is all in terms of your generalized coordinates.
Once that is done, "quantizing" the system (or your variables) means to simply set $[q_i,p_j]=i\hbar \delta_{ij}$. The quantum mechanics is now in effect. This is known as $\textit{canonical quantization}$.
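A minimal worked instance of this recipe, with the one-dimensional harmonic oscillator and units $\hbar = m = \omega = 1$ (the grid size and spacing below are arbitrary numerical choices): quantize via $p \to -i\hbar\,\partial_x$ and diagonalize the resulting Hamiltonian on a grid; the lowest eigenvalues come out near $\hbar\omega(n + \tfrac12)$.

```python
import numpy as np

# Quantized harmonic oscillator H = p^2/2 + x^2/2, discretized on a grid.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy from a second-order finite-difference Laplacian
T = 0.5 * (np.diag(np.full(N, 2.0)) + np.diag(np.full(N - 1, -1.0), 1)
           + np.diag(np.full(N - 1, -1.0), -1)) / dx**2
V = np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(T + V)
print(E[:4])   # approximately 0.5, 1.5, 2.5, 3.5, i.e. hbar*omega*(n + 1/2)
```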
Quantum Field Theory is a perturbation to Quantum Mechanics, where you perform a second quantization. For instance, in using electrodynamics in quantum mechanics you simply quantize the atomic-motion (which interacts with the $\textbf{E}$-field); this is the "semiclassical approach". Second quantization further quantizes this electromagnetic field, so that now the light and the atom both have discrete structures.
Consider a theory described by an action $S(\phi)$ with field $\phi \in \mathcal{P}$, where $\mathcal{P}$ is usually the set of sections of a bundle over some manifold $M$. The action admits a set of gauge symmetries $\mathcal{G}$: transformations $\phi \rightarrow \phi'$ such that $S(\phi) = S(\phi')$.
One has quantized this theory when one has calculated, or has an algorithm that can calculate
$\int_{\mathcal{P} / \mathcal{G}} \mathcal{O}(\phi) e^{iS(\phi)/\hbar} \mathcal{D}\phi$
for any function $\mathcal{O}(\phi)$ on $\mathcal{P} / \mathcal{G}$.
In the case of quantum field theory $\mathcal{D}\phi$ is usually ill-defined and the integral usually diverges. However, for a certain class of theories, so-called renormalizable theories, one can, more-or-less, make sense of this integral.
An excellent treatment of perturbative renormalization, from a mathematical point-of-view, is found in Kevin Costello's soon to be published book, Renormalization and effective field theory.
A very basic answer: think about the classical Hamiltonian, $$ a(x,\xi)=\vert \xi\vert^2-\frac{\kappa}{\vert x\vert},\quad \text{$\kappa>0$ parameter}. $$ The classical motion is described by the integral curves of the Hamiltonian vector field of $a$, $$ \dot x=\frac{\partial a}{\partial\xi},\quad \dot \xi=-\frac{\partial a}{\partial x}. $$ The attempt of describing the motion of an electron around a proton by classical mechanics leads to the study of the previous integral curves and is extremely unstable since the function $a$ is unbounded from below. If classical mechanics were governing atomic motion, matter would not exist, or would be so unstable that could not sustain its observed structure for a long time, with electrons collapsing onto the nucleus.
Now, you change the perspective and you decide, quite arbitrarily, that atomic motion will be governed by the spectral theory of the quantization of $a$, i.e. by the selfadjoint operator $$ -\Delta-\frac{\kappa}{\vert x\vert}=A. $$ It turns out that the spectrum of that operator is bounded from below by some fixed negative constant, and this is a way to explain the stability of matter. Moreover the eigenvalues of $A$ describe with astonishing accuracy the energy levels of an electron around a proton (the hydrogen atom).
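A quick symbolic sketch of the boundedness claim: plugging the trial ground state $e^{-\kappa r/2}$ into the radial ($l=0$) part of $A$ returns the finite eigenvalue $-\kappa^2/4$, and the discrete spectrum of this operator sits at $-\kappa^2/(4n^2)$.

```python
import sympy as sp

r, kappa = sp.symbols('r kappa', positive=True)

# Trial ground state for A = -Laplacian - kappa/|x|, restricted to the l = 0 sector.
psi = sp.exp(-kappa * r / 2)
lap = sp.diff(psi, r, 2) + (2 / r) * sp.diff(psi, r)   # radial part of the Laplacian
A_psi = -lap - (kappa / r) * psi

print(sp.simplify(A_psi / psi))   # -kappa**2/4: a finite bottom of the spectrum
```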
My point is that, although quantization has many various mathematical interpretations, its success is linked to a striking physical phenomenon: matter exists with some stability, and no explanation of that fact has a classical mechanics interpretation. Atomic mechanics had to be revisited, and quantization, quite surprisingly, provides a rather satisfactory answer. For physicists, it remains an affront that such refined mathematical objects (unbounded operators acting on - necessarily - infinite dimensional spaces) have so many things to say about nature. It's not only Einstein's "God does not play dice", but also Feynman's "Nobody understands Quantum Mechanics" or Wigner's "Unreasonable effectiveness of Mathematics."
I vote for "Nobody understands Quantum Mechanics". You are not Joking, Mr. Feynman :-) – Patrick I-Z Nov 20 '13 at 1:10
I'm gonna be a bit more down to earth and cover the basics of Weyl quantization (in units where $\hbar = 1$)...
The Hamiltonian is typically introduced first: starting from the de Broglie relation $p = k$ and the Einstein-Planck relation $E = \omega$ we can regard the (Weyl) correspondence principle heuristically as arising by viewing Fourier analysis through the lens of spectral theory for self-adjoint operators: i.e., we have
$p \rightarrow -i\partial_x, \quad H \rightarrow i\partial_t$
which leads immediately to the Schrödinger equation, in which the energy levels are associated with eigenvalues of the Hamiltonian. The Euclidean version is obtained by a Wick rotation:
$t = -i\tau \Rightarrow \partial_t = \partial_{-i\tau} = i\partial_\tau \Rightarrow H \rightarrow -\partial_\tau.$
The time evolution operator encoding the dynamics is just $U(t) = e^{-iHt}$. The rest is details or field theory.
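A small numerical sketch of the last statement, with an arbitrarily chosen $2\times 2$ Hamiltonian and $\hbar = 1$: the matrix exponential $U(t) = e^{-iHt}$ is unitary, and the evolved state satisfies $i\,\partial_t\psi = H\psi$ to within the finite-difference error.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])   # an arbitrary Hermitian Hamiltonian
t, dt = 0.7, 1e-6

U = expm(-1j * H * t)
print(np.allclose(U.conj().T @ U, np.eye(2)))            # True: U(t) is unitary

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U @ psi0
dpsi_dt = (expm(-1j * H * (t + dt)) @ psi0 - psi_t) / dt  # forward difference in t
print(np.allclose(1j * dpsi_dt, H @ psi_t, atol=1e-4))    # True: i d(psi)/dt = H psi
```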
Here is a link to an article on quantization in physics:
The article contains links to other articles on quantization, including canonical quantization, geometric quantization, and Weyl quantization. Quantization involves converting classical fields to operators acting on quantum states of the field theory.
|
b42536941ac5027c | Sunday, December 01, 2013
Elizabeth Pennisi writes about Richard Lenski's long-term evolution experiment
Let's talk about two issues in that paragraph.
Gould was mistaken
Stephen Jay Gould wrote an excellent book on the Burgess shale back in 1989 (Wonderful Life).1 The title refers to a movie with a similar name (It's a Wonderful Life). In the movie, George Bailey, played by James Stewart, is taken back in time and shown how his life has changed so many other lives for the better.
In his book on the Burgess shale, Gould introduces the "tape of life" and defends the position that if we rewind the tape of life and replay it, the results will be entirely different. Gould is referring to evolution over the long term (macroevolution is his schtick) and he specifically mentions things like random extinctions, chance, and asteroid impacts. The history of life, like the history of George Bailey's life, is contingent on everything that went before and small changes can have huge impacts. (Think of the Back to the Future movies.)
Gould was not referring to stepping back just a few generations and seeing if the same one or two mutations could happen again. That's not at all what he meant.
So, Elizabeth Pennisi is not being fair when she says, "Gould was mistaken when he claimed that, given a second chance, evolution would likely take a completely different course." She probably never read Wonderful Life so she doesn't understand what Gould actually said. Evolution seems to be some sort of mysterious dark matter to Elizabeth Pennisi.
In Lenski's long-term experiment, one (and only one) of the cultures evolved the ability to utilize the small amount of citrate in the medium. This gave that culture an enormous advantage. The other eleven cultures failed to evolve in this direction. I'll post a description of what happened but the evidence is clear. In order to evolve the ability to utilize citrate a number of improbable mutations had to arise in the correct order. The end result was contingent on chance events. It's pretty good confirmation of Gould's point about the tape of life [see Lenski's long-term evolution experiment: the evolution of bacteria that can use citrate as a carbon source].
And that conclusion is the exact opposite of what Pennisi says. Isn't that strange?
Biologists thought that evolution could stop
Elizabeth Pennisi says that, "contrary to what many biologists thought, evolution never comes to a stop, even in an unchanging environment." She claims that biologists (some? many?) thought that evolution would come to a stop in an unchanging environment. If they thought that, then they would have to believe two things.
1. There's no such thing as random genetic drift, or else fixation of deleterious or nearly neutral alleles by random genetic drift doesn't count as evolution.
2. Most species are perfectly adapted to their present environment so that further adaptation is no longer possible. You have to believe this because new mutations are happening all the time and if evolution by natural selection has stopped then none of these new mutations can be beneficial.
I suspect that Pennisi is right and some biologists really are ignorant of random genetic drift and really do think that natural selection is so powerful that every species is at the top of an adaptive peak.
However, what she says is unfair because that's not what evolutionary biologists think. Surely there aren't many evolutionary biologists who just learned about random genetic drift from reading Lenski's papers? Surely there aren't many evolutionary biologists who thought that E. coli was perfectly adapted to growth in minimal medium?
Once again, Pennisi is conveying false and misleading information to her readers. Lenski is not stupid. When he started his experiment 25 years ago he fully expected to see evolution and when the first papers were published they did not come as a great shock to evolutionary biologists in spite of what Elizabeth Pennisi would have you believe.
Is it true that many biologists don't know enough about evolution to realize that it occurs in an unchanging environment?2 I suspect it might be true. I also suspect it's a common belief among members of the general public because that's the way evolution is usually taught. I suspect it's what Elizabeth Pennisi believed.
What does the future hold?
Richard ("Rich") Lenski has a blog. Here's what he recently wrote [Fifty-Thousand Squared],
If the USA can afford to spend enormous amounts of money on physics experiments and huge sums on NASA, then why not spend a few million on an endowment to make sure Lenski's long-term evolution experiment continues far into the future?
1. I'm not interested in quibbling about things that Gould got wrong in his book or about Conway Morris and his silly ideas about convergence. Save that for another day. The point here is whether Pennisi was fair.
2. Please, let's not get dragged into a lengthy discussion about stasis and punctuated equilibria. What Eldredge and Gould showed was that morphological change could be locked in by speciation (cladogenesis). In most cases, the speciation event occurred in the same environment and both species continued to exist side-by-side for millions of years. Neither Eldredge nor Gould ever believed that no evolution was occurring during periods of stasis.
1. Thank you especially for bringing up the clarification of what Gould actually said on whether evolution would follow the same course if history could be rewound and started again. On more than one occasion I had seen people declare Gould was wrong on this (with too much glee, IMO) while they were referring to microevolution, when actually Gould was writing about very long-term macroevolution.
2. Larry,
I know I have been a pest, (it's my nature; I'm asperger's if you can appreciate that) but this post made my day.
Larry, that is why I start my day by drinking coffee while reading your blog. Well that is not entirely true but your blog is up there.
BTW: Didn't I make it clear in the past??? I'm not a creationist....I CAN'T BE. WHO IN THE WORLD WOULD THINK THAT? NEG-ENTHROK? COME ON!!! GET A LIFE!!
3. This comment has been removed by the author.
4. Thanks for this - Wonderful Life is probably my favorite Gould book (as a layperson I've not yet decided whether to tackle the big evolutionary theory book), and Lenski's experiment is something I've admired tremendously for its elegance of design and care of execution. If he gets together an endowment fund (government funding for such a long term experiment seems way too politically contingent to me, unfortunately), I'd love to know where to sign up.
Regarding your two points:
- How can anyone do sufficient research to write about Lenski's experiment for Science and so misunderstand what it says about contingency? (The experiment sheds light on many other things besides the role of contingency, of course.)
- As a layperson, the concept that evolution would stop in stable environments is nonsensical to me. First, where exactly is the environment stable for lengths of time relevant to speciation for every form of life in a particular location? Second, how many forms of life are incapable of spreading into new environments, microscopic or macroscopic? Certainly E. coli have spread through many different locations and environments. (Highly recommend Carl Zimmer's Microcosm.) As you say, there are always new mutations, and if they are not seriously deleterious in the varied and changing environments in which they find themselves, there's a chance they'll catch on.
5. Neutral evolution by drift is verified mathematics. A biological science reporter not knowing that neutral evolution is unstoppable approaches incompetence. Natural selection may also proceed despite a constant environment, but I have not heard an airtight argument that it is inevitable. When you get lots of species together, even in an unchanging physical environment, coevolution seems to promote increasingly complex contrivances, with lags and unexpected disequilibria, like when a disease pops up, challenging all at irregular intervals. In principle, there should be [IMHO] a threshold when evolution, like fire, is self feeding. Pennisi should go back and meditate on her metaphysics.
Two asides: 1) although some of the pronouncements of Stephen J. Gould can be questioned, his view that evolution is not deterministic is unquestionable. Twin worlds will not be evolutionary photocopies. 2) I think Darwin pretty much accepted by 1859 that evolution was self-generating without needing outside environmental challenges.
Although Lenski’s bacteria continue to increase in fitness, and are not projected (do I remember rightly?) to reach a fitness plateau, only time will tell if the adaptive evolution of Lenski’s bacteria finally stalls. I bet they will stop evolving, even if a few adaptive mutations still arise along the way. If his bacteria were inserted into a more or less complex ecological microcosm (a few protozoans, some nematodes, a few additional bacteria), I would bet not only that adaptive evolution would be more brisk, but that replicates would differentiate more markedly. Now that tried and true protocols exist, someone might run an experiment to see how and if ecological complexity affects the tempo of adaptive evolution.
Gould's assertion "that if we rewind the tape of life and replay it, the results will be entirely different" is not only true for macroevolution but for genetic drift as well. It is a direct consequence of quantum mechanics. Which mutation appears where and when is totally stochastic as the genetic material is molecular; Schrödinger was able to predict this in the 1940s purely on the basis of quantum mechanics from the fact that X-rays generate mutations.
Whether a specific mutation in a specific genome is fixed in a specific population size is effectively deterministic. The change in a specific genome over time, however, is not deterministic. There is a vast number of branching paths for the neutral evolution of this genome to take. If we accept Everett's position that the wavefunction does not collapse then this evolution takes place in Hilbert space. The neutral evolution of a molecule of genomic DNA in an organism matches the evolution of its wavefunction and all the resulting "worlds" are equally real.
This presents a large number of possible genomes for natural selection to operate on in parallel. Consequently at the level of macroevolution "if we rewind the tape of life and replay it, the results will be entirely different." This is in part due to the quantum stochastic processes underlying neutral (and nearly neutral) genetic drift.
1. Yes, quantum mechanics is involved, since quantum mechanics is a fundamental feature of the universe, so it's involved in pretty literally everything everywhere. But you don't really need quantum mechanics concepts (e.g., the uncertainty principle) to capture the concept of contingency in evolution. After all, asteroids are massive enough that quantum effects are quite small, and DNA copying errors would still exist in the absence of mutations caused by radiation.
2. Copying errors occur at the molecular level and are quantum stochastic processes. Andy Albrecht argues that all probabilistic processes are fundamentally quantum mechanical. On this basis one needs to understand the implications of this for all stochastic processes in understanding evolution, both at the molecular level and for macroevolution. The quasi-classical world is fundamentally quantum mechanical and this has ontological implications.
3. No, no, this is not correct. It's easy to be misled because quite a lot of nonsense has been written on this topic by biologists and philosophers who apparently have not discussed the issue with their local quantum chemist. Just because something happens at the molecular level does not mean it is "microscopic" in terms of quantum indeterminacy. This is a common misconception, e.g., it's in Monod's _Chance and Necessity_. Most of biochemistry is "macroscopic" relative to quantum effects.
For instance, when computational chemists model the folding of a protein they use classical dynamics with force-fields-- think of taking a physical model of a molecule made out of springs and magnets and so on, and throwing this physical model in a clothes dryer for a few hours to see how it folds. That is the kind of process that they are modeling, which includes no quantum magic.
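To make the "springs and magnets in a clothes dryer" picture concrete, here is a minimal toy sketch of that kind of classical force-field dynamics: a single harmonic bond integrated with velocity Verlet. The spring constant, mass, and timestep are arbitrary illustrative numbers, not values from any real force field.

# Toy "ball and spring" dynamics: one harmonic bond, integrated classically.
# All numbers are arbitrary illustrative values, not from a real force field.
k, r0 = 500.0, 1.0       # spring constant and rest bond length
m, dt = 1.0, 0.001       # mass and timestep

r, v = 1.2, 0.0          # start with the bond stretched, at rest
f = -k * (r - r0)        # Hooke's-law force; nothing indeterministic anywhere
for step in range(5):
    # velocity Verlet integration step
    r += v * dt + 0.5 * (f / m) * dt ** 2
    f_new = -k * (r - r0)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new
    print(f"step {step}: bond length r = {r:.4f}")

The trajectory is completely determined by the starting coordinates and the force field, which is the sense in which this kind of folding simulation "includes no quantum magic."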
*Some* things that happen at the molecular level are subject to quantum indeterminacy. For instance, some enzymes appear to make use of quantum tunneling in their active sites. Anything involving electromagnetic radiation of course is subject to indeterminacy. But not all mutations involve radiation, and even the ones that involve radiation only involve it in an early step, i.e., a UVB or X-ray photon causes (indeterministically) the formation of a reactive species such as peroxide, which then diffuses (quasi-deterministically) and causes damage, which is repaired (quasi-deterministically) by enzymes, which is where the mutation comes in. Of course the causal chain leading to the mutation is indeterminate if any event in the chain is indeterminate.
chemicalscum, I think you left out "not" in the sentence "whether a specific mutation in a specific genome is fixed in a specific population size is effectively deterministic." The chance of fixation for a new mutation is 1/N for a neutral mutation (haploid case) and ~2s for a beneficial mutation.
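A minimal sketch of where those numbers come from, assuming a haploid Wright-Fisher model; the population size, selection coefficient, and trial count below are arbitrary illustrative choices:

import random

def fixation_probability(N, s=0.0, trials=10000):
    """Estimate the probability that a single new mutant allele fixes in a
    haploid Wright-Fisher population of size N (s = selective advantage)."""
    fixed = 0
    for _ in range(trials):
        count = 1                                   # one new mutant copy
        while 0 < count < N:
            p = count * (1 + s) / (count * (1 + s) + (N - count))
            count = sum(random.random() < p for _ in range(N))   # binomial sampling
        fixed += (count == N)
    return fixed / trials

N, s = 100, 0.05
print("neutral:    simulated %.4f  vs  1/N = %.4f" % (fixation_probability(N), 1 / N))
print("beneficial: simulated %.4f  vs  ~2s = %.4f" % (fixation_probability(N, s), 2 * s))

With N = 100 the neutral estimate should land near 0.01, and the s = 0.05 case near 0.1 (the exact diffusion result sits slightly below 2s).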
4. The copy of GAMESS (General Atomic and Molecular Electronic Structure System) that I have on the hard drive of this computer has routines for doing QM/MM calculations and using FMO (Fragment Molecular Orbital Theory) that enables calculations on very large systems. These routines can be used as part of protein folding studies.
However this is not important. Like the ab initio, semi-empirical and DFT QM calculations I perform as a pharmaceutical chemist on much smaller systems, the calculations use the time-independent form of the Schrödinger equation and treat the wavefunction of the molecule as a standing wave. In determining a molecular structure the QM programs seek out thermodynamic minima just the same as molecular mechanics (MM) ones do. This is static, not dynamic, and therefore not relevant to evolutionary processes, as here the wavefunction does not evolve.
What is interesting are quantum processes where forking into different states is possible, such as the Schrödinger cat thought experiment. Here we have the (sorry about the bad spacing) density matrix in Dirac notation:
|cat alive> |cat alive + cat dead>
|cat alive - cat dead> |cat dead>
Decoherence (interaction with the environment) destroys the off-diagonal elements of the matrix, leaving behind the diagonal terms, here two vectors in Hilbert space: |cat alive> and |cat dead>, which continue to evolve independently and in parallel.
The same picture can be applied to a specific mutation in DNA caused by a QM event, giving this density matrix:
|no mutation> |no mutation + mutation>
|no mutation - mutation> |mutation>
Again we are left with two vectors which continue to evolve independently and in parallel in Hilbert space. That is, there are two different genomes evolving independently and in parallel in Hilbert space.
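A toy numerical sketch of that dephasing step, assuming a simple exponential decay of the coherences (the basis labels and the rate are arbitrary):

import numpy as np

# Basis convention (illustrative): index 0 = |no mutation>, index 1 = |mutation>
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # (|no mutation> + |mutation>)/sqrt(2)
rho = np.outer(plus, plus)                 # pure superposition: every entry is 0.5

def dephase(rho, gamma, t):
    """Toy dephasing channel: off-diagonal (coherence) terms decay as exp(-gamma*t);
    the diagonal populations are left untouched."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma * t)
    out[1, 0] *= np.exp(-gamma * t)
    return out

for t in (0.0, 1.0, 5.0, 50.0):
    print(f"t = {t:>4}:\n{np.round(dephase(rho, gamma=1.0, t=t), 4)}")

For large t only the diagonal survives, i.e. an even classical mixture of the two outcomes, which is the sense in which the off-diagonal elements are destroyed.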
The Löwdin mechanism proposed for some DNA mutations uses quantum tunnelling and thus is definitely a quantum stochastic process.
I would draw attention to Andy Albrecht's arXiv paper I reference above where he argues "using simple models that all successful practical uses of probabilities originate in quantum fluctuations in the microscopic physical world around us, often propagated to macroscopic scales". In the above analyses the quantum event is clearly propagated to the macroscopic scale.
Arlin, you are right about the not. I didn't really think about the not, vaguely remembering lots of deterministic-looking equations from when I tried to do a bit of self-study of population genetics. Then again, it is dealing with probabilities, and as argued above all probabilities ultimately derive from the quantum level. Dealing with populations is a bit like statistical thermodynamics: at root there is quantum indeterminacy at the atomic and molecular level, which even Boltzmann didn't know about, but for large ensembles there is FAPP determinism. It's when you deal with interesting complex individual things like cats and individual DNA molecules that "worlds" or "histories" (use your favourite terminology here) split and evolve on separate parallel pathways.
There is no classical world as quantum indeterminacy is continually welling up to the macroscopic level. This is why we live in a stochastic quasi-classical world.
5. chemicalscum, thanks for writing this. You seem to know more about this issue than I do, though I'm quite skeptical of the claim in the arXiv paper that you cite. My experience is that I have seen many naive claims from biologists who don't know about physical biochemistry, physicists who don't know about the biology of mutation, and philosophers who don't know about either one, and I've discussed some of the issues with a colleague who is a quantum chemist (first author of, now retired). I'm currently writing a paper on mutation and randomness and I want to be sure to get this part right. Can you contact me to discuss this offline? I'm easy to find.
6. Oh, and by the way, I stand corrected-- I mis-spoke when I said "classical dynamics". This is the point in chemicalscum's first paragraph-- the calculations in GAMESS are based on molecular orbital theory, which is based on quantum mechanics. So, this is not "classical dynamics", although in practice this kind of calculation does not allow quantum uncertainty to percolate up and result in indeterminate folding outcomes (if I understand correctly, this is the point in chemicalscum's second paragraph).
7. What they call "living fossils" is a hit against Evolution going on constantly in unchanging niches.
Did this experiment create new species with new names in the lists?
Was it anything other than ordinary attrition in nature even with a new mutation helping out?
If it doesn't leave its kind, it's not important as evidence for evolution. Evolution means crossing thresholds of biology to advance complexity and diversity of note.
Hi, Robert. What I see from laypersons in my particular area of expertise (law) is that they take one little bit of knowledge and incorrectly apply it to the entire field - i.e., they over-generalize. And that's what you just did in the statement above. All that has to happen for so-called "living fossils" to exist is that some species relatively closely resemble their ancestors over extremely long periods of time. For animals and plants large enough to make fossils, that does happen, though my impression is that it is relatively rare. Nothing very surprising about that, and certainly no challenge to evolutionary theory.
2. There are no known examples of species that haven't evolved at a relatively constant rate over millions of years. There are a few examples of species where that evolution didn't result in big changes in gross morphology, but even there you can measure changes if you look closely enough. The idea of "living fossils" is a myth.
Robert, your other questions are irrelevant. Nobody ever claimed that this experiment would produce new species. Your attempt to redefine evolution is pathetic.
3. Personally I don't have much of a problem with the concept of "living fossils". I like Dan Graur's view on the issue:
"""The following paragraphs are taken from a draft of a new book on molecular evolution. I use “living fossils” to illustrate the disconnect between molecular and morphological evolution.
I would greatly appreciate all those objecting to the term “living fossil” to let me know (dgraur at uh dot edu) what’s wrong with my text, and how should it be changed (in addition to updating my references to include the new reports on the Latimeria genome).
“Living fossils are defined as taxa that have not changed morphologically for long periods of time (say, 100 million years). As far as living fossils are concerned, it is of interest to find out whether the morphological stasis is also accompanied by molecular stasis.
Quite early in the history of molecular evolution, it was noted that sharks, which have not changed to any conspicuous degree since the Devonian, evolved at about the same rate as other “nonfossil” organisms (Fisher and Thompson 1979; Kimura 1989). Furthermore, the mitochondrial DNA of the alligator (Alligator mississippiensis), which is also considered to be a living fossil, evolves much faster than that of birds, which presumably appeared on the evolutionary stage much more recently (Janke and Árnason 1997). Turtles, on the other hand, which have remained morphologically unchanged since Triassic times, seem to evolve at the molecular equivalent of a “turtle’s pace” (Avise et al. 1992). A similar case is observed with the coelacanth species belonging to Latimeria, which before their discovery in 1938 were believed to have been extinct since the end of the Cretaceous, about 65 million years ago (Inoue et al. 2005).
The most spectacular example of a lack of relationship between morphology and molecular evolution is most probably that of the horseshoe crab (class Merostomata), which despite their name are more closely related to scorpions, spiders, and mites than to the crustaceans. While the morphology of horseshoe crabs has changed little in the last 500 million years—indeed, one extant horseshoe crab, Limulus polyphemus is almost indistinguishable in its morphology from its extinct Jurassic relatives—their rates of molecular evolution are unexceptional (e.g., Nguyen et al. 1986; Tokugana et al. 1993). Thus, there seems to be no obvious relationship between morphological and molecular change.”""
4. I would point out that all of Graur's "living fossils" (except the horseshoe crabs) are diverse clades within which there is considerable extant morphological disparity. To claim that they haven't changed in two or three hundred million years is to ignore both extant and fossil disparity. It's like saying that mammals are living fossils because Morganucodon has all the features of extant mammals.
So yes, I do have an objection.
Horseshoe crabs are a bit different. There are only a few species, and to my eye they do all look quite similar to each other and their fossil relatives. The differences are there, but they're subtle. Not so for the others.
5. """So yes, I do have an objection."""
Fine by me, I'm a microbial ecologist, so I don't care about those pesky anatomical details anyway ;p You should email Graur, then. He's honestly interested in others' opinions on the subject for his book. And I will buy the book, so the more discussion of opinions the better.
So there's "living fossils" after all? ;)
Kidding aside, Graur does say in another post that:
"""On the other hand, living fossils do exist. For a species to be considered a living fossil, it must possess a great number of plesiomorphies (i.e., ancestral traits), and these ancestral traits must be shown to be of great antiquity. How many plesiomorphies and what constitutes “great antiquity” should be specified in each case under study. Opinions may vary on these two issues, and some researchers, like Mark Robinson-Rechavi, may find the term objectionable a priori as it tends to color subsequent inferences. Nonetheless, it is possible to define objectively a taxon as a living fossil."""
So it seems that there is some significant arbitrariness to what should constitute enough differences for something to be called a living fossil or not.
Anyway, keep the discussion going, I'm interested.
6. Creating a "living fossil" category isn't the issue. The classical neo-Darwinian view of the Modern Synthesis is that species generally are close to an adaptive optimum. If the environment shifts, they quickly adapt via available variation. If a species is not exhibiting changes, this would mean that the environment is not changing.
There has long been another line of argument that species may become burdened with a tangle of "constraints" that prevents effective change.
I think it is fair to say that there has been a widespread belief in stasis as an evolutionary pattern (I'm not sure if this is what Pennisi is saying). It hardly matters to the paleontological pattern of stasis that some molecules are changing invisibly! One still has to explain the pattern of stasis in those features for which there is stasis! In a nutshell, the neo-Darwinian view is that everything varies, and selection is powerful. The only way these 2 things can be true in the case of stasis is if the species is at an optimum.
7. There are a few examples of species where that evolution didn't result in big changes in gross morphology
That is what I meant by "some species relatively closely resemble their ancestors."
As a layperson I'm probably missing something relatively elementary here, but what if the vast majority or even all of the variation is with respect to alleles that are neutral wrt selection? (Say for example that any variation of non-neutral alleles is very tightly constrained.)
9. I don't think you are missing anything. It's just that your scenario conflicts implicitly with the premise that selection is powerful. If selection is ultra-powerful, then there can't be any neutral variation.
The way that neo-Darwinians have negotiated this implicit conflict is to suppose that there are important features directly impacted by selection, and non-important ones that are hidden. The old neo-Darwinian rules still apply to everything important, but they don't apply at the "molecular level".
This resolves the seeming conflict in which "living fossils" (horseshoe crab, coelacanth, platypus, snapping turtle, crocodile, chambered nautilus, etc.) are morphologically stable for 10s or 100s of MY, yet still change at the molecular level.
However, we still have to explain stasis in the "important" morphological features of all those "living fossils", and the neo-Darwinian explanation is, as always, selection. If important things change, it's selection. If they don't change, that's due to selection as well.
10. Thanks Arlin, got it. Just for clarity, would you care to provide a summary of a non-ultra-adaptationist alternative?
11. There are many alternatives to the Darwin-Fisher extreme view in which everything varies and selection keeps the population close to the current optimum.
Wright's alternative doesn't really challenge this presumption, but emphasizes the importance of interactions, such that there are better optima that are not locally accessible via continuous upward (in terms of fitness) shifts. An interconnected set of demes that are individually subject to drift can overcome this challenge via the shifting balance mechanism (in theory). This is sort of like turning up the heat in simulated annealing.
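As a cartoon of that analogy (not Wright's shifting balance model itself, just a fixed-temperature Metropolis walk on a made-up two-peak fitness landscape, with arbitrary numbers), drift has to be strong enough before the walk can cross a fitness valley:

import math, random

def fitness(x):
    # Made-up landscape: a local peak near x = 0 and a higher peak near x = 4
    return math.exp(-x ** 2) + 2.0 * math.exp(-(x - 4.0) ** 2)

def crosses_valley(drift, steps=20000):
    """Metropolis-style walk: uphill moves are always accepted; downhill moves
    are accepted with probability exp(delta/drift). Here 'drift' plays the role
    of the annealing temperature."""
    x, reached = 0.0, False
    for _ in range(steps):
        candidate = x + random.gauss(0.0, 0.3)
        delta = fitness(candidate) - fitness(x)
        if delta > 0 or random.random() < math.exp(delta / drift):
            x = candidate
        reached = reached or x > 3.0
    return reached

random.seed(1)
print("weak drift reaches the higher peak:  ", crosses_valley(drift=0.01))  # essentially never
print("strong drift reaches the higher peak:", crosses_valley(drift=0.5))   # usually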
There is a structuralist alternative (invoked for over a century) that focuses on the idea that the organism is not infinitely malleable. Only certain types of changes are genetically-developmentally possible. When these "constraints" are limits on the variation that the system is capable of generating (i.e., system A can't generate a variant B), then it is clearly an alternative to the standard view. When "constraints" are understood as effects of selection (we can't get from A to B to C because B has a lower fitness), then this is less of a departure. Often it's hard to tell. But the structuralist would argue that stasis is often due to "constraints". When constraints are due to selection this is only slightly different from the Darwin-Fisher view, except that structuralists tend to reject the assumption of natura non facit saltum.
IMHO, this ambiguity is a reason that "constraints" has faded away as an alternative paradigm. If the currently accepted paradigm is P, then it's pretty hard to sell I-just-can't-decide-between-P-or-Q as a revolutionary alternative.
There may be other views of stasis but nothing occurs to me at the moment.
12. And if anyone wants to read an article that (judging from the abstract) is going to present what I would call the conventional view linking stasis to stabilizing selection, see:
Estes, S. and S.J. Arnold, Resolving the paradox of stasis: models with stabilizing selection explain evolutionary divergence on all timescales. The American Naturalist, 2007. 169(2): p. 227-44.
8. It might be worth noting that an unchanging environment is impossible. All climatic variation, all other species, including microorganisms, everything must stop for there to be an unchanging environment. How is that even remotely possible.
1. """ How is that even remotely possible."""
You should tell that to the hippies and some people at Greenpeace.
2. The idea of an unchanging environment (in the wild) seems counter-reality to me. Something that should be considered when talking about unchanging environments or stoppage of evolution is scale or degree. On what time scale (or other scale) is an environment unchanging and on what time scale can or does evolution stop?
As anthrosciguy said:
Even if that could happen, for how long could it last? A nanosecond, a minute, an hour?
3. Larry said something that I think relates to what I said about scale/degree:
Yes, even in cases where changes aren't easily noticeable, if a close enough look is taken and enough time is allowed, changes will be apparent and measurable, and I would think that this applies to life forms and everything else in the universe.
9. It is a creationist point about how some creatures have not changed in looks for claimed ages in millions. Quite a lot of millions of years.
I have heard the present denial that there are "living fossils", a word coined by evolutionists previously.
Yet if morphology is so similar, after so long, then it suggests that constantly evolving is practically, I say practically, non-existent. Looking for differences is looking too hard. People have more differences than many creatures noted by posters here, and many more mere millions of years old. Horses for example are said to look exactly, almost, like they did 20 million years ago in the Miocene, etc.
In fact our own body parts could be said to be living fossils. Our eyes, hearing, immune system, liver, etc. are identical to claimed relatives, which must mean from that common descent there has been LITTLE or no evolution in great time periods.
I think evolution should cling to PE and not constantly evolving concepts. Just leaps and great stasis.
1. Byers said:
What difference does it make whether evolution occurs "constantly" or not when it comes to supporting your religious fairy tales? Even if ALL evolution were to take a million year break now and then that wouldn't add one bit of evidence to support your ridiculous YEC beliefs.
Whether evolution occurs "constantly" or in spurts or a mixture of the two, there's NO way, other than by deluding yourself, that the evidence of the multi-billion year history of the Earth and the evolution of its life forms can be crammed into a 6,000 year time span.
2. The difference was important relative to the discussion. You're changing the thread here.
It's a good point for creationism about claims that creatures lived soooo long but look so alike to modern relatives YET are said to have been evolving ever since the old fossil was found.
Very unlikely for a system saying evolution is constantly going on in all creatures.
The living fossil thing says otherwise at the least.
In fact there is no evidence any evolution is going on. Just evidence for diversity within types. Anyways only the geology is "evidence" for evolutionary change. The biology of data points is silent.
3. Robert, you're the one who tries to change a thread every time you post a comment in one. All you do is deny anything that pertains to actual science and you push your YEC religious beliefs. You obviously think that your assertions and questions are legitimate and sciency but it's abundantly clear that you just won't accept anything that challenges or refutes christian YEC fairy tales.
One of the points of mine that you are ignorantly dismissing is that no matter how much you try to assert that evolution is "Just leaps and great stasis" or that "there is no evidence any evolution is going on" or that "constantly evolving is practically, i say practically, non existent" or that "there has been LITTLE or no evolution in great time periods" (make up your mind), the universe, the Earth, and life on Earth are OLD. Very, very OLD. Billions of years OLD.
The other point is that none of your inconsistent or erroneous assertions about evolution and science provide any supportive, scientific evidence for your religious beliefs.
By the way, how can there be "great time periods" if the whole universe is only 6,000 years old?
10. Ludicrous that Lenski still has to compete in the regular NSF pool to keep this going. Perhaps the most remarkable aspect of the experiment is that it didn't get randomly defunded by a grant panel somewhere along the way.
11. Actually, the beat poet seems to have stumbled on something useful upthread. I think I'm going to adopt "ordinary attrition in nature with new mutations helping out" as my new favorite definition of evolution.
Q: How are new species created?
A: Ordinary attrition in nature with new mutations helping out.
What could be more apt than that?
1. Aaargh, I left the "u" out of "favourite"! Living in the USA is really screwing up my English :-(
12. I found this quote from a paper by Lenski et al. in a presentation that Graur put up a couple of days ago, and it might be relevant to the first point you raised, Larry:
"The evolution of a phenotype is contingent on the particular history of a population. Historical contingency is especially important when it facilitates the evolution of key innovations that are not easily evolved by gradual, cumulative selection."
|
d7c91921ba2dbb4e | Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 7 (2011), 113, 11 pages arXiv:1112.2333
Breaking Pseudo-Rotational Symmetry through H_+^2 Metric Deformation in the Eckart Potential Problem
Nehemias Leija-Martinez a, David Edwin Alvarez-Castillo b and Mariana Kirchbach a
a) Institute of Physics, Autonomous University of San Luis Potosi, Av. Manuel Nava 6, San Luis Potosi, S.L.P. 78290, Mexico
b) H. Niewodniczanski Institute of Nuclear Physics, Radzikowskiego 152, 31-342 Kraków, Poland
Received October 12, 2011, in final form December 08, 2011; Published online December 11, 2011; Misprints are corrected December 24, 2011
The peculiarity of the Eckart potential problem on H_+^2 (the upper sheet of the two-sheeted two-dimensional hyperboloid), namely that it preserves the (2l+1)-fold degeneracy of the states typical for the geodesic motion there, is usually explained by casting the respective Hamiltonian in terms of the Casimir invariant of an so(2,1) algebra, referred to as the potential algebra. In general, there are many possible similarity transformations of the symmetry algebras of the free motions on curved surfaces towards potential algebras, not all of which are necessarily unitary. In the literature, a transformation of the symmetry algebra of the geodesic motion on H_+^2 towards the potential algebra of Eckart's Hamiltonian has been constructed for the prime purpose of proving that the Eckart interaction belongs to the class of Natanzon potentials. We here take a different path and search for a transformation which connects the (2l+1)-dimensional representation space of the pseudo-rotational so(2,1) algebra, spanned by the rank-l pseudo-spherical harmonics, to the representation space of equal dimension of the potential algebra, and find a transformation of the scaling type. Our case is that in so doing one produces a deformed isometry copy of H_+^2 such that the free motion on the copy is equivalent to a motion on H_+^2 perturbed by a coth interaction. In this way, we link the so(2,1) potential algebra concept of the Eckart Hamiltonian to a subtle type of pseudo-rotational symmetry breaking through H_+^2 metric deformation. From a technical point of view, the results reported here are obtained by virtue of certain nonlinear finite expansions of Jacobi polynomials into pseudo-spherical harmonics. In due places, the pseudo-rotational case is paralleled by its so(3) compact analogue, the cotangent-perturbed motion on S^2. We expect awareness of different so(2,1)/so(3) isometry copies to benefit simulation studies on curved manifolds of many-body systems.
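Schematically (the coupling, signs, and normalization conventions are left open here, since the abstract does not fix them), the claim can be pictured as a coth-perturbed Hamiltonian that is still a function of an so(2,1) Casimir operator, so its rank-l levels retain the (2l+1)-fold degeneracy of the free geodesic motion:

\[
H \;=\; -\Delta_{H_+^2} + V(\chi), \qquad V(\chi)\ \propto\ \coth\chi, \qquad
H = f\big(\mathcal{C}_{so(2,1)}\big)\ \Rightarrow\ \big[\,H,\ \mathcal{C}_{so(2,1)}\,\big]=0,
\]
\[
H\,\psi_{l m}=E_l\,\psi_{l m}, \qquad m=-l,\dots,l \quad (2l+1\ \text{states per level}).
\]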
Key words: pseudo-rotational symmetry; Eckart potential; symmetry breaking through metric deformation.
1. Natanzon G.A., General properties of potentials for which the Schrödinger equation can be solved by means of hypergeometric functions, Theoret. and Math. Phys. 38 (1979), 146-153.
2. Alhassid Y., Gürsey F., Iachello F., Potential scattering, transfer matrix, and group theory, Phys. Rev. Lett. 50 (1983), 873-876.
3. Engelfield M.J., Quesne C., Dynamical potential algebras for Gendenshtein and Morse potentials, J. Phys. A: Math. Gen. 24 (1991), 3557-3574.
4. Manning M.F., Rosen N., Potential functions for vibration of diatomic molecules, Phys. Rev. 44 (1933), 951-954.
5. Wu J., Alhassid Y., The potential group approach and hypergeometric differential equations, J. Math. Phys. 31 (1990), 557-562.
Wu J., Alhassid Y., Gürsey F., Group theory approach to scattering. IV. Solvable potentials associated with SO(2,2), Ann. Physics 196 (1989), 163-181.
6. Levai G., Solvable potentials associated with su(1,1) algebras: a systematic study, J. Phys. A: Math. Gen. 27 (1994), 3809-3828.
7. Cordero P., Salamó S., Algebraic solution for the Natanzon hypergeometric potentials, J. Math. Phys. 35 (1994), 3301-3307.
8. Codriansky S., Cordero P., Salamó S., On the generalized Morse potential, J. Phys. A: Math. Gen. 32 (1999), 6287-6293.
9. Gangopadhyaya A., Mallow J.V., Sukhatme U.P., Translational shape invariance and inherent potential algebra, Phys. Rev. A 58 (1998), 4287-4292.
10. Rasinariu C., Mallow J.V., Gangopadhyaya A., Exactly solvable problems of quantum mechanics and their spectrum generating algebras: a review, Cent. Eur. J. Phys. 5 (2007), 111-134.
11. Kalnins E.G., Miller W. Jr., Pogosyan G., Superintegrability on the two-dimensional hyperboloid, J. Math. Phys. 38 (1997), 5416-5433.
Berntson B.K., Classical and quantum analogues of the Kepler problem in non-Euclidean geometries of constant curvature, B.Sc. Thesis, University of Minnesota, 2011.
12. Gazeau J.-P., Coherent states in quantum physics, Wiley-VCH, Weinheim, 2009.
13. Bogdanova I., Vandergheynst P., Gazeau J.-P., Continuous wavelet transformation on the hyperboloid, Appl. Comput. Harmon. Anal. 23 (2007), 286-306.
14. Kim Y.S., Noz M.E., Theory and application of the Poincaré group, D. Reidel Publishing Co., Dordrecht, 1986.
15. De R., Dutt R., Sukhatme U., Mapping of shape invariant potentials under point canonical transformations, J. Phys. A: Math. Gen. 25 (1992), L843-L850.
16. Alvarez-Castillo D. E., Compean C.B., Kirchbach M., Rotational symmetry and degeneracy: a cotangent perturbed rigid rotator of unperturbed level multiplicity, Mol. Phys. 109 (2011), 1477-1483, arXiv:1105.1354.
17. Raposo A., Weber H.-J., Alvarez-Castillo D.E., Kirchbach M., Romanovski polynomials in selected physics problems, Cent. Eur. J. Phys. 5 (2007), 253-284, arXiv:0706.3897.
18. Higgs P.W., Dynamical symmetries in a spherical geometry. I, J. Phys. A: Math. Gen. 12 (1979), 309-323.
|
2be2ef831aa54310 |
[–]sheriffjbunnell 60 points61 points (20 children)
I didn't teach it for years, but I once had a full-on debate with a 6-year-old I was teaching about whether a baby horse is a pony. I was pretty adamant that I was right; turns out it's a foal, whoops.
I was very apologetic and for the rest of the year if I ever insisted something was fact, I always checked with her which made her laugh, I would look over after a while and she would just nod if what I was saying was correct. I'm not the smartest
[–]fizgigtiznalkie 11 points12 points (0 children)
reminds me of the Simpsons, where Selma said "There are no lady goats: a lady goat is a sheep"
[–]Ragnrok 51 points52 points (7 children)
You must have felt
puts on sunglasses
Pretty foalish.
[–]luthiz[🍰] 15 points16 points (6 children)
I laughed myself horse.
[–]slimzimm 3 points4 points (5 children)
Anypony can see that this is ridonkeydiculous.
[–]closetedforeveralone 5 points6 points (3 children)
Good thing you didn't put money on it. You don't want to have to ponie up.
[–]infinite_despise 3 points4 points (0 children)
You guys are so filly.
[–]Indubitability 2 points3 points (1 child)
I'm glad to still see pun threads, despite all the naysayers.
[–]HeroOfTime1987 2 points3 points (0 children)
I disagree. All these puns are a night Mare
[–]mmmjr16 1 point2 points (0 children)
[–]adtaylor 3 points4 points (4 children)
Pony is under 14.2 handssss.
I had this debate with plenty of people.
[–]Ragnrok 9 points10 points (1 child)
Pony is under 14.2 handssss.
Just like your mother.
that's kind of cute. i like it.
[–]exjentric 3 points4 points (2 children)
Yeah, never go head to head with a six-year-old girl when it comes to horse facts. They know their shit. Source: I was one of those horse-loving six-year-old girls once.
[–]AMostOriginalUserNam 1 point2 points (0 children)
It could have been cute, but for some reason I'm having fun picturing it as bitter.
"Well London is the capital of England according to my text book, but maybe Emily has something to say about it. Huh? What was that? Nothing to say now huh? Didn't think so. Punk."
[–]infinite_despise 101 points102 points (29 children)
Happens a lot when analyzing novels, especially when dealing with symbolism. You can read one critic's point of view and it'll make perfect sense. Through each reading of the novel, you end up basing your opinions off of this one perspective, and sure enough, everything seems to make sense. You bring up these points in class; everyone seems to get it and everyone's content.
Then you stumble across a different critic's perspective, and the first one suddenly becomes a lie.
One of the most important things I've learned over the years is to analyze works for the sake of improving your general knowledge, critical thinking, and writing skills. Don't pretend you're the mind of the author. Don't think that the author is always trying to tell you something.
[–]HereForTheOpenBar 42 points43 points (5 children)
But what about the curtains? THE CURTAINS MAN
[–]infinite_despise 40 points41 points (4 children)
The curtains were fucking blue.
[–]Ragnrok 26 points27 points (0 children)
Oh shit, that must symbolize how the protagonist was raped by a cotton-candy vendor in his youth.
[–]ChristianTMI 6 points7 points (0 children)
Blue is the same as the sky. And curtains cant be so that's personification and a metaphor in one sentence!!!
[–]xnerdyxrealistx 16 points17 points (1 child)
The chapter in To Kill a Mockingbird where they have to shoot the dog is NOT an allegory for racism. My teacher kept comparing racism to rabies. I kept saying in my head no no no. It's not that rabies = racism. The point of that chapter is to show that sometimes someone (Atticus) has to fight for something and fix something that may not be his duty for the world to be a better place. My teacher kept describing the symptoms of rabies and comparing them to racism. I doubt Harper Lee was that dumb.
[–]Pjcrafty 22 points23 points (4 children)
I had always wondered if there are a bunch of dead author ghosts screaming "What? I didn't mean anything by that! It's just a blueberry muffin people!"
[–]troyanonymous1 11 points12 points (0 children)
The muffin symbolizes pregnancy.
-- Expert Katawa Shoujo analysis
[–]smileorwhatever 5 points6 points (2 children)
My teacher used to always say that it didn't matter what the author intended, it mattered what we could interpret. Even if the muffin was just a muffin to the author, if we could see it as a metaphor for the decay of society, then that's what it is.
Sigh. She may have been nuts though.
[–]dftba-ftw 1 point2 points (0 children)
I've heard authors themselves say that, so she wasn't wrong. For example, John Green is a big proponent of finding meaning even if it wasn't intended by the author.
I've always hated this as well. I had a literature teacher that thought Beowulf contained symbolism (which it might). But she explained the symbolism in Beowulf in a way that I refuse to believe people were even thinking about when Beowulf was written, or memorized.
[–]128px 7 points8 points (3 children)
In my opinion, the best way to teach literature is to present several interpretations of the story/poem/novel/etc. for the sake of comparison. If other interpretations aren't available the teacher can always present a different work by the same author or another one dealing with the same theme. This kind of synthesis makes analyzing literature more interesting because no discussion can arise from a narrow point of view.
[–]shinypony 13 points14 points (0 children)
cf. Roland Barthes, 'The Death of the Author'.
[–]isleepinahammock 10 points11 points (2 children)
[–]motherfuckingriot 8 points9 points (1 child)
just because he didn't think about them doesn't mean they aren't there.
[–]i-just-cant 2 points3 points (0 children)
Exactly. A book belongs to its readers. Death of the author and all that.
[–]ctwd 4 points5 points (0 children)
I remember studying Canterbury Tales in high school, and I'm pretty sure my teacher just had a bunch of stuff explained to her in a way she didn't really understand, which she then just tried to regurgitate. I think the conclusions she was reaching could have been correct (but it didn't seem like it to me!), but the way she explained things didn't add up to what she was telling us.
They always say on essay questions that it's not so much what your conclusion is, but how you explain it and back it up that counts. She would have failed.
[–]McBurger 2 points3 points (0 children)
I said this to my English teacher a few times. I always felt we really read over every story in way too much detail. Every chapter in the book, there was a quiz. And every sentence, there was symbolism and underlying allegories and yadda yadda.
"He stepped out his door, walked down the steps, unlocked his car..."
Suddenly, every English professor: "The steps are symbolic of past, present, and future! The American Dream is dead! He unlocks his car; this is because Biff and Happy are throwing a party at the Great Gatsby's mansion..."
Sometimes I would love to just read a damned book in class. I certainly do not read books like that nowadays when I read naturally. I just read the stories.
[–]11235throwaway 2 points3 points (0 children)
I think great photographs and paintings are sort of the same way.
I think occasionally you have an author or artist going out of his way to make a "big f___ing point" (James Joyce, anyone), but often the artist is just trying to make an image that looks good to him and the writer is just trying to write a good story. All of that critical stuff is usually only helpful in telling why the image or story is good, not what the creator was thinking.
[–]beeblez 1 point2 points (0 children)
corollary to this: don't think the author isn't saying something. Some people read Death of the Author and think that any text means whatever they want it to. Critical insight and analysis is still mandatory.
ie. Saying the billboard in The Great Gatsby is just a billboard is ignorant, debating what the billboard represents is legitimate (and fascinating).
[–]goood-dog 1 point2 points (0 children)
Pro tip:
Read the novel first. Then look at a number of often quoted critical accounts of said novel.
You will be able to develop your own informed reading of the novel by weighing up your impressions and background research against the critics' claims.
[–]infrared_blackbody 66 points67 points (2 children)
No. I'm open enough about what I'm going to teach in the Professional Learning Groups I take part in so any small errors I make are quickly corrected. If I teach something I don't have a solid understanding of, I'll just do more research. If the topic is more confusing than I can 100% comprehend, I'll tell the kids that. I teach physics, I don't need kids getting poor impressions.
[–]IAmNotAPerson6 36 points37 points (0 children)
Please. Do. Not. Stop.
[–]ololcopter 15 points16 points (0 children)
I can't think of any particular thing. I think the problem with a lot of teachers is that they're insecure. I'm pretty open with my class - and sometimes I really have no idea if the question is obscure. The worst thing you can do is BS your way through it and teach something not true (like my history teacher in 10th grade who, when asked why Hitler hated the Jews, just said it was because his mom died while being operated on by a Jewish doctor).
And if I catch myself making an error, I'm fine just bringing it up to my class the next day to set the record straight.. doesn't happen too often, but I feel like it's the right thing to do.
[–]jlwhaley48 13 points14 points (6 children)
Ooooh I got one! I just finished my first year teaching the third grade (yay for me!). Teaching parts of the human body, I always thought it was Tibula and Fibula! That fail made me blush, I'm not gonna lie.
[–]forever_anoob 29 points30 points (29 children)
I worked with an ESL writing teacher who misspelled grammar (spelled it grammer), and made a few other really obvious mistakes in class. The worst part was that she took any mention of a correction as me undermining her authority. I guess she would rather teach the wrong thing than lose face. Edit: stupid autocorrect!
[–]mosnas88 28 points29 points (11 children)
It takes a real teacher to stand up in front of a class of people sometimes 20 years younger and admit "You are right, I was wrong."
Unfortunately good teachers are hard to come by.
[–]forever_anoob 13 points14 points (0 children)
A true teacher knows that when this happens it can be a great teaching moment, and that the classroom should be about what the students know/don't know!
[–]DotReality 8 points9 points (2 children)
I graduate tomorrow, and every teacher I have had up to this point has at least once gone "oh wait, that's wrong, sorry guys, don't copy this/erase that last bit", and several have said they could not figure out what they were doing wrong and would get back to us later on it.
I graduate tomorrow as well, congrats and good luck.
[–]DotReality 2 points3 points (0 children)
Thanks, same to you :)
[–]Neuran 1 point2 points (0 children)
Yeah - had some bad teachers who are adamant that something is totally right or ignore the fact I was correcting them :(.
The good teachers would correct it or pull out the "um... I swear I was testing you".
[–]jcpuf -1 points0 points (0 children)
It also takes a great class. Speaking as a teacher: when I'm in front of these respectful, hard-working private-school kids that I teach these days, and I make an error, it's easy to sigh and admit it and explain it. The one time I did that with the shitheads I taught in public school, they attacked me and punished me for it.
[–]Geminii27 4 points5 points (0 children)
I've seen a lot of stuff from people purporting to teach English that is frankly terrifying. It honestly makes me a bit suspicious of foreign-language teachers - are they genuinely knowledgeable about the language and culture, or are they just random people who lived in another country for a while? I certainly wouldn't trust the average English speaker to teach the language with any degree of completeness...
[–]thpiper10 9 points10 points (4 children)
random side note: I had a korean friend in college taking ESL. They got notes back from the teacher on their oral presentations- the teacher wrote all the notes in the messiest cursive I had ever seen. I was shocked and a bit angry that they would think that's acceptable with foreign students, some used to a completely different alphabet, who had never heard of cursive writing.
[–]guerarenegada 1 point2 points (7 children)
I'm an ESL teacher and I can't even tell you how often I pull out a dictionary or grammar resource to double check stuff. Especially spelling. I never trust anything that includes two vowels in a row.
[–]MouseWithTheOverbite 3 points4 points (1 child)
made a few other obvious mistakes in class.
[–]Elvis_Fluffy_Butt 10 points11 points (0 children)
It was only overnight, but at my first ever teaching job, we did a unit on the weather and we started talking about clouds. When asked why they were dark, I replied (because for some reason this was my understanding!) that once water formed clouds, dust particles got trapped in them which made them grey! The kids all accepted this, I went home and googled weather only to find I was completely wrong! I made sure that I told them the right information the next day.
Now, when asked a question, and if I'm not sure of the answer I'll ask the student what they think, give them my thoughts then we look it up! I'm happy to admit to them I don't know everything, and they appreciate it!
[–][deleted] 88 points89 points (5 children)
In before everybody that replies is a student, not a teacher.
I'll just upvote comments who are actually from a teacher's perspective. I'm curious to know how they deal with correcting themselves or being corrected. Those are the stories I'd like to hear.
[–]Vicinus 1 point2 points (0 children)
It's the same with questions for parents, and then the replies go "My parents...."
The question is pointed to (in this thread) the teachers for a reason.
[–]sterlingarcher0069 2 points3 points (0 children)
After reading five, "My teacher..." I decided to put my rarely used downvotes to work since their lack of reading comprehension is adding nothing to the OP's topic. If you're not a teacher, I don't want to hear your side of the story. We already heard it.
[–]Talks_With_Himself 130 points131 points (59 children)
Cursive writing will be used all the time in high school and college; print will be unacceptable.
EDIT: Wow, my first comment ever and I'm at the top :D
EDIT2: Also, in middle school, high school, and college, if you write in cursive on an assignment you will most likely get counted down.
[–]mosnas88 66 points67 points (2 children)
Well the bonus is that my signature looks like I am handicapped so people treat me better.
[–]Mobidad 26 points27 points (0 children)
So does this happen often when you're at a store register?
Cashier: Your total is $xx.xx, you stupid idiot.
mosnas88: It's credit, I'll just go ahead and sign now.
Cashier: (looks at signature) Oh, I'm sorry. It's good to see people like you being so independent, have a good day!
[–]stimbus 9 points10 points (0 children)
I've never had good handwriting. When I was 9, I closed my hand up in a sliding van door. My handwriting went from bad to unreadable by almost all.
A few years after that I had a teacher pull me up in front of a class and force me to write on a blackboard so she could make fun of me to the other students. After a couple of words I drew a dog. I circled the crotch and said, "Noticed there's no wiener here? That's because she's a bitch just like you." I was sent to the principal's office after the laughter of the other students died down.
[–]Doopz479 14 points15 points (43 children)
Is cursive less common in America or something? This is always said on reddit, but where I am I've never seen someone not write in cursive. It would just seem weird to me to print something.
[–]PerogiXW 20 points21 points (8 children)
It's rare to see it used in America. When you do see a handwritten note it's probably printed about 70% of the time, 28% some amalgam of printed and joined up letters, and the rare 2% of the time you'll see fully blown D'Nealian cursive and it's always incredibly annoying to read because no one has used cursive since they were 10.
The worst part is that the SAT (the standard college admissions test) requires you to copy out the honor statement in cursive on the back of the test. It's universally regarded as the hardest part of the test.
Signatures are still always written in cursive.
[–]ctwd 1 point2 points (1 child)
I kept doing cursive for years, but since college, I make a concerted effort to just print.
It rarely matters anymore, since just about the only handwriting I ever do anymore is writing notes to myself. Everything else is on the computer.
[–]Arthropody 1 point2 points (0 children)
Traditionally cursive is taught starting in the third grade. It has been removed from the curriculum and is no longer taught in my area. The thought is that students are moving more towards using technology for written expression and it is more important to teach technological skills and practice typing.
[–]BLATANT_HAPPINESS 1 point2 points (1 child)
I agree with this post. My upvote hath been bequeathed. Post Script: I miss people writing in cursive, everyone at my Canadian private school did it and in the public school system no-one is capable.
Joined up writing is the norm and is expected in England. No one I know writes completely in print.
In Australia, we write on a fucking computer because it's the 21st century.
[–]Edibleface 44 points45 points (0 children)
You're supposed to use the keyboard, not write on the screen you damn barbarian.
Not if you're in maths, give me a pacer and paper most days.
[–]Ezterhazy 5 points6 points (19 children)
They seem to get taught a funny kind of joined-up writing in America that's hard to use. Look at these crazy letters. The capital G and the Z are just bonkers.
[–]briilar 3 points4 points (0 children)
I always hated that capital Q looks like a 2. Confused the hell outta me.
[–]Neuran 3 points4 points (0 children)
Lowercase letters are how we do it in the UK... but we don't join capital letters.
What's wrong with the loopy z? I do that :P. My mum got annoyed at my primary school who didn't teach joining up the "dangly" letters and z.
[–]hybbprqag 1 point2 points (5 children)
Now I'm curious, how do you write your capital G and Z?
[–]Ezterhazy 2 points3 points (4 children)
Like you just typed them. G and Z.
[–]hybbprqag 1 point2 points (3 children)
I meant, in joined up writing. Do the capital letters not join up?
[–]Ezterhazy 2 points3 points (0 children)
For me, sometimes. It depends on what the letter after is. I never force it. Hang on, I'll get a pen, write something and take a photo.
[–]Ezterhazy 1 point2 points (1 child)
[–]hybbprqag 2 points3 points (0 children)
Ahh, some of my print letters end up joining up like that when I write quickly.
[–]Neuran 1 point2 points (1 child)
I've known a couple of people who do, but that's because they don't have the dexterity to form very good letters. Having the letters separated at least gives you the chance of deciphering it.
But I'd say the vast majority of people in the UK write cursive, if they're not typing or filling in a form (that requires block caps).
[–]We_Are_The_Romans 1 point2 points (0 children)
Ireland too. When I get a form that says "BLOCK CAPS ONLY" I take a mental shit while my hands try to remember how to separate letters.
[–]TheBSReport 25 points26 points (5 children)
Every time this question is asked this is the top response, can we at least have some variety in our answers since we can't seem to do it with our questions.
[–]Frensoa 13 points14 points (4 children)
You are spending too much time on reddit and especially on r/askreddit.
Go outside for a walk.
[–]TheBSReport 4 points5 points (0 children)
You don't need to spend 24/7 on this website to see this, especially with how often this question hits the front page.
[–]Dj_Nu12 3 points4 points (2 children)
There is no such thing as too much time
I read your comment as "there's no such thing as outside" for several long seconds.. I need a break.
[–]soggy_cereal 4 points5 points (0 children)
You don't get a break ಠ_ಠ
[–]blindcricket 6 points7 points (2 children)
My friend's Scottish mom is a grade 1 teacher (we live in Canada) and had been teaching rhyming wrong for years because of her accent. She didn't realize words that rhyme with a Scottish accent don't necessarily rhyme with a Canadian accent. She learned this after she had a student teacher in her class during the rhyming unit. Now when the rhyming unit comes up, she and the other grade 1 teacher switch classes for it. She has also had parents complain that their kids say some words with a Scottish accent.
Best comment so far.
[–]PSNDonutDude 59 points60 points (24 children)
My grade 9 teacher for science taught that the picture she was showing was a picture taken by a satellite that left the galaxy and took a picture of the Milky Way. I had to explain how that was impossible.
[–]mosnas88 62 points63 points (23 children)
My science teacher was feeling pretty cocky one day and said there were few questions about space that he couldn't answer. I had a pretty easy one for him and asked him to name the distance to the second closest star. He said it was impossible to know this. It took me 10 seconds to prove him wrong. We won the contest and got to watch October Sky for the class.
[–]Geminii27 16 points17 points (2 children)
Four light years, IIRC. Who forgot that the Sun is a star?
(Also, to be dickish, the third and fourth stars are also at roughly the same distance, given that the Alpha Centauri system is a binary with a third star doing the gravitational watusi with them.)
[–]jackass706 1 point2 points (0 children)
the gravitational watusi
I like it!
[–]PSNDonutDude 8 points9 points (0 children)
Sometimes the simplest questions are the hardest to some.
Happy CakeDay
[–]NoMoreNicksLeft 4 points5 points (0 children)
It's unknown. The second closest known star is still under 5ly though (I think). But there could possibly be several brown dwarfs out there as close as just 1ly away. They're difficult to detect unless you already know where to look. They may not even be stars, depending on how you want to define those.
[–]lurkerlurkerohmy 16 points17 points (2 children)
My buddy showed me this a few weeks ago. SO funny.
[–]Rhesonance 18 points19 points (4 children)
I wouldn't say I taught this for years, but in the 11th grade I was the top student in my AP Calculus class and would serve as a teacher's assistant during the period before the exam helping other students with problems, etc.
Well, there was this one math problem that the entire class could not for the life of them get the answer the teacher and I had gotten through our independent methods. The teacher called me up to the board with him and told me to show them how it was done as the class drew to an end. He did his method, I did mine. We both boxed our final answer at about the same time. Almost immediately the class was in an uproar. He had done 1 x 3 = 1 and I had done 1^3 = 3 and through our different methods, we somehow got the same (wrong) answer.
TL;DR: Confused 30 of the smartest kids in my high school for almost the entire class. It was hilariously embarrassing.
[–]kablunk 23 points24 points (12 children)
My grade 4 science teacher taught us that "water froze at 0 degrees C" and that "ice melted at 0 degrees C" - both of which are correct, but when I tried to explain to her that this would lead to a loop of some kind, finally settling at an equilibrium point (yep, still grade 4), she told the entire class that that was why it was impossible to maintain a temperature of 0 degrees Celsius.
[–]deadcom 13 points14 points (9 children)
My brain hurts
[–]evilbrent 18 points19 points (8 children)
It's called the 'latent heat' of water.
When you raise ice to 0 degrees from, say -10, it'll steadily rise in temperature for every joule of energy you put in. (There's some formula, it's like one degree per joule per litre, but probably not that) So there's X amount of energy added to raise from -9 to -8, same amount to go from -5 to -4 etc.
Then when you get to 0 you keep adding energy and the temperature stays the same. Instead of using that energy to raise the temperature of 0degrees ice, you're making 0degrees water. The latent heat required is different for every material.
edit: got rid of my mistake.
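Plugging in rough, rounded textbook values for 1 kg of ice (exact figures vary a little with conditions), the arithmetic looks like this:

# Rounded textbook values for water/ice (approximate):
c_ice = 2100.0       # J per kg per deg C, specific heat of ice
L_fusion = 334000.0  # J per kg, latent heat of fusion

mass = 1.0                           # kg
warm_up = mass * c_ice * 10          # raise the ice from -10 C to 0 C
melt = mass * L_fusion               # turn 0 C ice into 0 C water, temperature unchanged

print(f"warming -10 C -> 0 C: {warm_up / 1000:.0f} kJ")
print(f"melting at 0 C:       {melt / 1000:.0f} kJ  (~{melt / warm_up:.0f}x as much, with no temperature change)")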
[–]Magnarmalok 9 points10 points (2 children)
Pic showing this, basically what happens is, the energy isn't used to raise the temp, but it's now being used to alter the state of the water from solid into fluid.
[–]evilbrent 9 points10 points (1 child)
I hate it when people are just as correct as me but in 1/5 the words ;-)
[–]Magnarmalok 2 points3 points (0 children)
I think more of it as, we're working together, you gave the full tutorial, I gave the gist of it :P
[–]urfaselol 3 points4 points (0 children)
[–]ZeroNihilist 10 points11 points (0 children)
That is so wrong I can't quite express the sheer magnitude of incorrectness. Not only is it quite possible to maintain a temperature of 0 degrees Celsius, it's incredibly easy. If the environment is hotter than 0 degrees, any present ice will maintain a temperature of 0 until it melts. Similarly, if it is cooler than 0 degrees, any present liquid water will maintain a temperature of 0 until it freezes. An ice-water mix will maintain 0 degrees until it melts/freezes entirely (though of course there may be local disturbances if you're dealing with a large quantity, these will be rapidly rectified).
So yeah, they weren't just wrong, they said the exact opposite of the truth.
EDIT: Any explanation for the downvotes? What I said was expressed by evilbrent as well, yet he was rightly upvoted. Just curious, since downvoting a fact-based post implies that I was incorrect in some fashion.
That our tongue is like a map with different regions to detect taste. But I was wrong
[–][deleted] 55 points56 points (66 children)
10th Grade Writing course last year (yes I'm young). Teacher tried her best to convince us that oxford commas were unnecessary and that she simply "didn't like them". She'd even fucking circle any of them on all of our papers.
When we confronted her about this, she told us admittedly that people in her grad school even gave her shit for not using oxford commas, yet she still "didn't like them".
To not use them is one thing (you can risk having a silly double-meaning in your sentence), but to force kids that you're teaching to use them is fucking ridiculous.
[–]assesundermonocles 10 points11 points (3 children)
Balls for her for forcing you to go along with her shit. Oxford commas are really more of a preference thing, though it does help dyslexics with long-ass sentences.
In conclusion, your 10th grade teacher is discriminating against dyslexics.
[–]Eurydemus 2 points3 points (0 children)
My teacher constantly made this mistake. I had to tell him numerous times that it was correct. To top it off, I had to take this issue to the school-board. He said it wasn't correct, there was also another issue with spelling mistakes ._. .
[–]busstopboxer 12 points13 points (2 children)
Not every style guide requires Oxford commas. They are unnecessary clutter in the majority of situations and it is perfectly acceptable to only include them when required for clarity.
[–]kevstev 8 points9 points (1 child)
This was my whole problem with "English" class- there are multiple interpretations, different styles, etc. It always felt to me that I was trying to figure out what the teacher wanted me to put on the piece of paper, not actually doing literary analysis.
And grammar rules. The great thing about grammar rules is that there are so many to choose from.
Seriously, thinking about my high school english classes makes my blood pressure rise.
[–]justabitmoresonic 2 points3 points (6 children)
I have plenty of friends who don't like to use the Oxford Comma and they always say it is unnecessary because that's how they were taught. They were taught that you CAN use it, but you don't have to. They always use examples of lists that say "I want blue, red, pink, yellow, green and white" which sort of makes sense but they never listen when I give them the pizza example. "I want 6 pizzas: hawaiian, ham and cheese, pepperoni and olives, margherita, and egg and onion". They always ask why I didn't just put something without 'and' at the end. THAT'S NOT THE POINT. Damn
[–]skullturf 4 points5 points (5 children)
I realize that when it comes down to it, it's not much more than an aesthetic preference.
BUT, having said that, I have to say that I much prefer the Oxford comma.
Consider your example with the colors: "I want blue, red, pink, yellow, green and white." If you don't use the Oxford comma, it makes it look like green and white are "stuck together" or form a "team". By contrast, if you use the Oxford comma, it's more clear that green and white are two things in the list just like everything else.
[–]justabitmoresonic 2 points3 points (2 children)
Yea that's what I think... but they still think I am an idiot.
[–]Megatron_McLargeHuge 1 point2 points (0 children)
"This essay is dedicated to my parents, Ayn Rand and God."
[–]Patrickfoster 5 points6 points (42 children)
What's an Oxford comma? I thought I was a song
[–][deleted] 48 points49 points (22 children)
[–]bunglejerry 27 points28 points (12 children)
The problem with the JFK sentence is that it's easy for the Oxford Comma to create ambiguity too. Let's make the stripper singular:
• We invited the stripper, JFK and Stalin.
• We invited the stripper, JFK, and Stalin.
The first sentence has no ambiguity. But in the second one, it might be that JFK is the stripper (but Stalin surely isn't).
[–]bryce1012 5 points6 points (3 children)
I personally tend to use the Oxford comma, but generally the best way to avoid the ambiguity is to rearrange it like so:
• We invited JFK, Stalin and the stripper.
• We invited JFK, Stalin, and the stripper.
• We invited JFK, Stalin and the strippers.
• We invited JFK, Stalin, and the strippers.
By moving the unclear reference to the end of the list it becomes pretty clearly explicit in every case, regardless of plurality or Oxford comma.
[–]Jelliphish 3 points4 points (0 children)
Dibs on "Stalin and the Strippers" as a band name
[–]ComebackShane 1 point2 points (0 children)
Stalin and the strippers.
That's a great band name right there.
[–]helloimcallum 4 points5 points (0 children)
[–]NoMoreNicksLeft 1 point2 points (0 children)
Normally, context alone solves the ambiguity, since if we see a name like Candy Apple or Electra... those are stripper names. However, in this particular example JFK was himself known to be a promiscuous sex worker who liked to dance naked.
[–]Patrickfoster 11 points12 points (0 children)
Thanks. I think I get it now.
[–]IAmNotAPerson6 6 points7 points (5 children)
It seems like a colon would be better punctuation for the second sentence.
We invited the strippers, J.F.K. and Stalin.
We invited the strippers: J.F.K. and Stalin.
It might just be me.
[–]tinyhorse 8 points9 points (1 child)
I concur, but can't find that rule written down anywhere. The Elements of Style does suggest using a colon in this situation, but doesn't actually forbid the usage of a comma.
I don't know why I looked it up, but it took me a really long time, so everyone is going to know the results, goddamnit.
[–]DrArmstrong 6 points7 points (12 children)
A, B, and C.
A, B and C.
The bolded comma is the Oxford comma.
[–]Patrickfoster 1 point2 points (8 children)
Why is it different, and do we use them or not?
[–]vannucker 8 points9 points (5 children)
"I saw Jesus, a farmer and a prostitute." could make it seem like you are calling Jesus a gambler and a prostitute
If you write "I saw Jesus, a farmer, and a prostitute," you know that Jesus is hanging with gamblers and prostitutes.
Farmers are gamblers?
[–]Hyro0o0 12 points13 points (1 child)
"I bet all my money for the next year that these crops won't be destroyed by a storm."
[–]MyNameCouldntBeAsLon 1 point2 points (0 children)
"by a biblical plague"
[–]IRBMe 2 points3 points (0 children)
The second one is also ambiguous. It could mean that you saw three people, one of which was a farmer, one of which was a prostitute and one of which was Jesus, or it could mean that you saw two people: a farmer called Jesus, and a prostitute.
[–]Jackal_6 1 point2 points (0 children)
I don't get it. Is it something to do with the weather?
[–]sedMagisAmicaVeritas 1 point2 points (2 children)
I think it's a British influence vs American influence thing. For example in India the first statement is seen as incorrect but the second one is seen as correct. In America the first one is seen as correct but second is incorrect. Like color vs colour and other subtleties.
[–]HydraCarbon 7 points8 points (1 child)
DON'T FUCKING STEAL MY IDEA REDDIT, but I am in a band and someday, if we were to make it, I am going to name an album "Gray is a Color, Grey is a Colour."
[–]bridgasaurus 9 points10 points (1 child)
Who gives a fuck?
[–]a_lot_of_fish 17 points18 points (0 children)
For the record, bridgasaurus is referencing the popular song lyric "Who gives a fuck about an oxford comma?", not simply being rude.
[–]Frankfusion 12 points13 points (13 children)
That Pluto is a planet.
[–]dave_baksh 3 points4 points (0 children)
My first year teaching, I hadn't prepped my lesson properly and was just rolling with some worksheet assessment for Science. I'm a physicist. There was a bird on the sheet and the title was something about 'Galapagos'. I pronounced it Gal-a-pay-go's and told the class it was the name of the bird. Nobody corrected me, they just got on with the work. Shit I don't know about birds and islands and stuff, Biology is confusing.
[–]theyellowleaf[🍰] 2 points3 points (0 children)
If I'm wrong about something, then I just say that I was wrong. The same goes if I just don't know the answer to a question. I say, "I don't know, but I can find out." It happens to everyone, and I think one important lesson that teachers can show students is how to receive correction with grace and appreciation. I teach eighth grade.
[–]slicktricky 17 points18 points (8 children)
This thread has turned into a circle jerk of kids feeling smart for 'showing them authoritarian assholes.' Seems childish.
Anyways, yes. I taught for a few years, a student called me out on something (I believe the subject was chemistry). He got pissed that I was 'wrong.' I was like 95% sure I was right, but I dropped the subject. Went home, found out that I was mistaken through some quick google searches.
Went back to school, corrected myself, kid tries to act all smug. Annoyed the shit out of me. Told him life is not about being right or wrong, but about correcting yourself.
He went on to fail multiple tests, generally suck at life. And yet, that one victory was his excuse to hold on to the idea that he was a smart kid. Yeah, sorry boys, but being right doesn't make you smart.
[–]stentuff 18 points19 points (5 children)
Seems like you're taking quite a lot of pleasure in his failures though.. Might not make him less of a twat but it hardly paints you in a ray of sunshine either..
[–]jcpuf 10 points11 points (2 children)
Teachers are people, just like everyone else. And all people take pleasure in the failures of our antagonists.
[–]thiazzi 13 points14 points (1 child)
Plus kids are fucking cunts.
I reread the post, and it really doesn't seem like the chemistry teacher took pleasure in it. Just a bit of annoyance. He or she was doing nothing wrong.
[–]slicktricky 1 point2 points (0 children)
Not pleasure. Bitter disappointment. Do you know how much it sucks to try and help a student only to watch them pat themselves on the back as they slip further and further behind?
[–]BigDaddyFo 9 points10 points (16 children)
I'm not a teacher, but this is a related story.
My 8th grade History teacher taught me that romance languages were called romance languages because they were romantic. Even as 8th graders, we knew that was wrong. My friends and I still laugh about that.
[–]Throwawaychica 6 points7 points (14 children)
So why are they called romance languages?
[–]Zebra1200 23 points24 points (8 children)
Languages descended from the Romans.
It's a derivative of the word 'Roman', because the Romance languages are based on Latin, the language of the Romans.
[–]Nizzo 6 points7 points (2 children)
Technically right, but "based on" isn't really the right term. French, Italian, etc. aren't their own language where some person or group purposely said "we're gonna take this 'ere Latin and morph it over multiple centuries, and then have some new language", "based on" gives an element of purpose, of effort, while Latin's transformation into the Romances was entirely passive, other than Rome's invasion into the places where Romances are spoken today. French is technically Latin, but with a bunch of Gallic words and some German words put into it, and about 2 millennia of change. It IS Latin, just some heavily changed Latin. Sorry for writing a paragraph out just to bitch about your diction though, it isn't the most important thing.
Ah, the story of Reddit. "I'm not a teacher BUT..."
[–]ambystoma 1 point2 points (0 children)
Taught undergraduate medics and vets. Realised that I had taught them wrong, so emailed them the correct teaching.
[–]rdmqwerty 1 point2 points (0 children)
I am a peer tutor for my school. Sometimes I go into lessons thinking I know exactly what to say and do, but I end up teaching something wrong. Other times they ask questions that I don't know the answers to. I just play it off like I'm correct and they usually buy it.
[–]TheMasterCommander 4 points5 points (0 children)
as a student i would love to know this
[–]HariEdo 4 points5 points (5 children)
she was adding an extra syllable on accident.
Common mistake, but it's "by accident." If you can't remember whether it's ON or BY, just say "accidentally."
[–]Mugiwara04 5 points6 points (2 children)
"On accident" is just the way some people say it in some regions. It's a dialect and possibly age-related thing, apparently.
I think it sounds funny too, but it's not wrong.
[–]Geminii27 6 points7 points (0 children)
It may be regionally correct, but it gives me the creeps. :/
[–]red_dakini 2 points3 points (0 children)
I've taught that you should major in something you're passionate about in college. After the recession, I've realized that is a humongous mistake. Major in something that will make you money; no two ways about it.
[–]chosetec 11 points12 points (1 child)
Why not both?
I dunno. Teach for 10 years and maybe we can come up with an answer.
[–]Trapped_in_Reddit 5 points6 points (16 children)
Airplanes don't fly due to Bernoulli's law and pressure differences caused by the relative speed of air over the top and bottom of the wing.
They fly because the wing deflects air downwards.
[–]natty_dread 10 points11 points (9 children)
That's wrong. The only part that's wrong is the "equal transit time" explanation: that the air over the wing is faster because its path is longer and it has to reach the end of the wing at the same time as the air going under the wing.
The Navier-Stokes equation explains the phenomenon.
[–]Homomorphism 2 points3 points (1 child)
Isn't saying "the Navier-Stokes equation explains the phenomenon" a bit like saying, in response to a question about colors, "the Schrödinger equation explains the phenomenon"? It's technically true, but it's not helpful at all.
[–]evilbrent 1 point2 points (5 children)
Is that why helicopters can fly upside down?
Actually they do, it's just that all of those different explanations result in the same thing, which yes, means that the wing is altering the airflow so that Newton's Third Law holds the plane up.
[–]toadkiller 3 points4 points (2 children)
You're wrong. Newton's Third Law does generate a small amount of lift, but the vast majority of the lift which holds the plane up is from Bernoulli's Principle. If you were to remove all the lift generated by Newton's Third Law, most airplanes would still fly.
[–]selfification 13 points14 points (1 child)
Oh boy here we go... They are both part and parcel of the same thing! Bernoulli's Principle is a statement of the conservation of energy in fluids. Newton's law is a statement of the conservation of momentum. Seriously - you can't argue that one of these things causes lift while the other doesn't. That's like arguing whether it's the electric field or the magnetic field that causes light. They are both part of a larger set of rather harder to solve equations known as the Navier-Stokes equation which describes the properties of fluid flow in general. The general principle behind "lift" has also been described in terms of "circulation", which is one of the phenomena that arises from the Navier Stokes equation. Yet another way of explaining lift comes through extending and reasoning about a wing using the Coanda effect.
I wrote up a blog post about this a while back when a similar argument came up amongst folks I knew:
[–]kablunk 2 points3 points (0 children)
Possibly relevant comic
[–]chuzuki 1 point2 points (0 children)
Bernoulli's doesn't account for ground effect. Clearly something is increasing pressure below the wing instead of decreasing it above. I'd imagine deflecting air downwards fits the bill.
[–]soothfast 4 points5 points (4 children)
On accident? Is this a new thing? Isn't it "by accident"?
[–]flappymcflappypants 5 points6 points (1 child)
Indeed it is "by accident". Glad I'm not the only one who noticed that.
"you have to learn your times tables and long division. You can't just carry a calculator around in your pocket everywhere!"
Suck it bitch, I'm writing this message from a calculator that also makes phone calls which I carry in my pocket everywhere.
Edit: I can do maths. I have an Economics degree with honours, majoring in Econometrics. I'm actually pretty damn good at maths. I just think it's funny that everyone really does carry a calculator everywhere with them.
[–]discworldian 12 points13 points (2 children)
Everyone I know who needs a calculator trusts that thing blindly and makes stupid mistakes. Learning your times tables/long division will teach you the principles and help you make good estimates.
Her argument was just shit.
[–]Skookah 2 points3 points (1 child)
As someone without a fully firm grasp on his times tables or long division, I agree. Should've learned that shit; now I can't even pull simple arithmetic without a goddamn calculator.
[–]willscy 1 point2 points (0 children)
you know if you stopped propping yourself up with a calculator you could probably fix that.
[–]jcpuf 1 point2 points (1 child)
You should be able to do your times tables because you shouldn't have to look up 9*12 on your calculator, and you should do long division because it'll review your times tables, and because it'll help you internalize the process until you can do complex division in your head.
Of course you don't have to do this, if you're perfectly happy being a subhuman with a vestigial brain who can't prime-factor 54 in their head. You sure showed that teacher!
I'm actually not bad at Maths. I have an economics degree with honours, majoring in Econometrics. So actually I'm quite good at maths.
I just think it's funny that I really do carry a calculator around with me everywhere.
[–]tptbrg95 1 point2 points (2 children)
I'm not a teacher but I had several teachers in elementary school that taught misinformation. Also my 10th grade government teacher taught very untrue things regarding double jeopardy laws.
[–]miidgi 1 point2 points (0 children)
We apparently started out trying to keep this to teachers only, but that fell apart quickly. Now it just looks like we picked on you for no reason. Plus, it's your cake day.
[–]Cheehu 1 point2 points (0 children)
Enjoy your cake.
[–]TheNev 1 point2 points (0 children)
It appears that many teachers are teaching a sex ed practical and regretting it later.
[–]ctwd 1 point2 points (0 children)
I had an 8th grade history teacher who got SO MANY WORDS wrong, especially at the very beginning of the year. The textbook started out around the time Europeans were settling the Americas, so there were some Spanish words and other exotic-sounding words.
One time a kid was reading out loud from the book, and mispronounced a word in a totally different way than the teacher did, and the teacher corrected him with his own incorrect version. (Mestizo is not pronounced MEZZ -E'-TOE!).
Pretty sad when a 50 year old man who's been teaching his whole adult life (in a SCHOOL, where people should be learning to read well) mispronounces half a dozen words from the book in one day.
[–]TheYellowBastard 1 point2 points (0 children)
I'm quite surprised this hasn't turned into some sort of battle about teaching creationism in schools, granted I haven't scrolled down that much...but I'm impressed!
[–]emmyshangalang 1 point2 points (3 children)
I am not a teacher, but I had an English teacher who couldn't spell. Without a word of a lie, she couldn't even spell "sentence" or "Grammar". Instead, she spelt it "Sentance" and "Grammer"... And that is only the tip of the iceberg...
[–]CookieFace 6 points7 points (2 children)
you mean "iceburg" |
087cfc22698bdee5 |
First, let me state that I'm a lot less experienced with physics than most people here. Quantum mechanics was as far as I got and that was about 9 years ago, with no use in the meantime.
A lot of people seem to think that by the act of observing the universe we are changing the universe. This is the standard woo hand waving produced by people who don't understand the physics -- thank you Fritjof Capra.
In reading about quantum physics out there on the Internet, I find this idea is propagated far too often. Often enough that I question whether my understanding is accurate.
My questions:
1. Does observing a particle always mean hitting it with something to collapse the wave function?
2. Is there any other type of observation?
3. What do physicists mean when they say "observe"?
Related, but doesn't get to the question of what observation is: What is the difference between a measurement and any other interaction in quantum mechanics?
share|cite|improve this question
up vote 10 down vote accepted
Assuming that the incoming "first" particle is prepared in a pure state, interaction with another particle does seem necessary. Such an interaction might simply be the spontaneous emission of a photon or other particle by the original incoming particle, however.
Most importantly, such an interaction is not itself sufficient. For a measurement event to occur (wave function collapse in the Von Neumann formalism) we must also "physically lose track" of some of the information of the interacting particle after the interaction has taken place, so that we must replace the entangled state description of the second particle after the interaction with a probabilistic mixture of such states, forcing a description of the first particle after the interaction in terms of a real valued probability density matrix rather than as the complex valued pure state amplitude we started with. This change of description automatically includes an increase in entropy, which also occurs physically.
Unless the second, interacting, particle either escapes the apparatus or interacts with a third particle which so escapes, i.e. "interacts with the environment", no measurement has yet occurred, the entire interaction is in principle reversible, and the complex amplitude description remains appropriate. Measurement requires "loss" (via decoherence) of the entangling information by further entangling with the environment and dissipation.
The escaping third particle is often an emitted photon or phonon. See the reference in the linked answer What is the difference between a measurement and any other interaction in quantum mechanics?, particularly the 1939 article by London and Bauer (but avoid their metaphysics) for details. More recently, see this book on quantum measurement theory, particularly page 102 referring to the view of Zeh.
You may have noticed that some ambiguity remains in this description. This has been analyzed in great detail and resolved by Zurek, but it gets a little tricky. See e.g. and references therein.
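A small numerical sketch of the "losing track of the second system" step described above, using only NumPy; the two-level environment and its overlap parameter are made-up stand-ins chosen for illustration, not anything specific from the formalism cited:

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

def reduced_density_matrix(overlap):
    """Qubit entangled with an environment as (|0>|e0> + |1>|e1>)/sqrt(2),
    then the environment is traced out. `overlap` = <e0|e1>."""
    e0 = np.array([1.0, 0.0], dtype=complex)
    e1 = np.array([overlap, np.sqrt(1.0 - overlap**2)], dtype=complex)
    psi = (np.kron(ket0, e0) + np.kron(ket1, e1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices: s, e, s', e'
    return np.trace(rho, axis1=1, axis2=3)               # partial trace over e

# Environment carries no which-path information: still a coherent pure state.
print(np.round(reduced_density_matrix(1.0), 3))
# Environment states orthogonal: off-diagonals gone, a classical 50/50 mixture.
print(np.round(reduced_density_matrix(0.0), 3))
```

With overlap 1 the reduced state keeps its off-diagonal (coherence) terms; with overlap 0 they vanish and only a classical probability mixture remains, which is exactly the pure-state-to-mixture replacement the answer describes.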
share|cite|improve this answer
"Collapse the waveform" is a loaded term, that would not be agreed to by all physicists. There are a great many "no-collapse" interpretations out there in which there is no special role for measurement that directly alters the wavefunction. There are also collapse-type interpretations in which the collapse happens more or less spontaneously, as in Roger Penrose's theory whereby gravitational effects cause any superposition above a certain mass threshold to collapse incredibly quickly.
As a practical matter, it's hard to think of a measurement technique that couldn't be described as hitting one particle with another. Most quantum optical experiments rely on scattering light off an atom in order to detect the state of the atom, many charged-particle experiments involve running the particles into a surface or a wire in order to detect it, and so on. I think that the solid-state qubit experiment done by people like Rob Schoelkopf at Yale would probably count as an exception, because I believe they use a SQUID to detect the state of their artificial atoms via magnetic fields. If you want to get really picky, though, you could probably consider that a particle interaction as well, in some QED sense.
Even there, though, the act of measurement does not leave the initial system unchanged. While there would not be general agreement with the specific phrasing "observation changes the universe," the idea that quantum systems behave differently after a measurement is central to the theory, and can't be avoided.
share|cite|improve this answer
The thing is, there are a variety of different opinions that, since they cannot be distinguished by experiment, remain in circulation and are used by different people to interpret experiments.
The conventional view of quantum mechanics, although it has eroded over time, is that a sharp distinction has to be made between the classical and the quantum. The apparatus has to be described classically, while the quantum describes the measurement results of the experiment. Von Neumann then tried to show that the distinction need not be sharp and that you can include the apparatus in the quantum description, but it then has to be observed itself by another apparatus which has to be described classically. Wigner argued that this regression of the quantum/classical divide can be carried up to the mind, which is why there is a lot of woo latching onto these ideas, because they seem to justify the importance of the human mind over anything else in the world.
Other approaches have argued that there is no distinction between classical and quantum at all. One is the Many Worlds Interpretation, which states that a superposition of states represents actually realized states, but in different universes. Another is the Bohmian or pilot-wave interpretation, which states that the wave equation describes a wave that guides particles. An extra equation is then added to the Schrödinger equation to show how this guiding happens. In both theories, there is no need to speak about measurement, at least not in any deeper sense than in classical physics.
Here's a non-exhaustive list of interpretations of quantum mechanics
So, within the context of the Copenhagen interpretation and the von Neumann/Wigner paradigm, the answer to the title question would be yes, there is a difference between measurement and hitting with another particle. Within the context of Bohmian or MWI interpretations, the answer would be no.
share|cite|improve this answer
How do you know the output of an experiment? You "observe" it.
How do you actually do that? Well, with your eyes.
How does that work? Photons emitted or reflected by a surface hit special molecules in your eyes. This leads to a signal which is transmitted into the rest of your brain.
So the actual observation eventually happens by exchanging photons. Now you want to observe really small particles. Particles that are so small that interaction with a photon actually changes them.
Which raises the question of whether you can examine something like an electron without hitting it with a photon or anything that might change its state. This is really hard. So in most cases, observing something does influence it.
There are a couple of tricks like observing something that is by itself influenced by the particle you want to measure (say the electric field of a moving electron could influence another molecule and you could do your measurement on the molecule).
But in the end, all these processes are based on resonance. If you want to measure anything, you must create a resonance of some kind and that always means two-way interaction at some level.
share|cite|improve this answer
This question reminds me of the Zen koan:
What is the sound of one hand?
According to this site (which also elaborates on the koan) if:
while walking, standing, sitting, and reclining, you proceed straightforwardly without interruption in the study of this koan, you will suddenly pluck out the karmic root of birth and death and break down the cave of ignorance.
... and/or understand quantum mechanics :-) Ok ... maybe not quite well enough for a degree, but you get my point ;)
share|cite|improve this answer
All particles are the sums or products of other particle interactions; however, higher-energy collisions are required to reach the threshold at which observations can be made.
share|cite|improve this answer
6aaf7e5d36f1a15b |
Special relativity was well established by the time the Schrödinger equation came out. Using the correspondence of classical energy with frequency and momentum with wave number, this is the equation that comes out, and it looks sensible because it has the form of a wave equation, like the one for sound etc., except with an extra mass term
$$\nabla^2 \psi - \frac{1}{c^2} \partial_{tt} \psi = \left(\frac{mc}{\hbar}\right)^2\psi$$
Instead, we have Schrödinger's equation, which is reminiscent of the heat equation, as it is first order in time.
EDIT Some thoughts after seeing the first posted answer
What is wrong with negative energy solutions? When we do freshman physics and solve some equation quadratic in, say, time... we reject the negative time solution as unphysical for our case. We do have a reason why we get them: for the same initial conditions and acceleration, this equation tells us that, given this initial velocity and acceleration, the particle would have been at that place before we started our clock; since we're interested only in what happens afterwards, the negative times don't concern us. Another example is if we have a reflected wave on a rope: we get these solutions of plane progressive waves travelling at opposite velocities, unbounded by our wall and rope extent. We say there is a wall, an infinite energy barrier, and whatever progressive wave is travelling beyond that is unphysical, not to be considered, etc.
Now the same thing could be said about $E=\pm\sqrt{p^2c^2+(mc^2)^2}$ that negative solution can't be true because a particle can't have an energy lower than its rest mass energy $(mc^2)$ and reject that.
If we have a divergent series, and because answers must be finite, we say.. ahh! These are not real numbers in the ordinary sense, they are p-adics! And in this interpretation we get rid of divergence. IIRC the Casimir effect was the relevant phenomenon here.
My question boils down to this. I guess the general perception is that mathematics is only as convenient as long as it gives us the answers for physical phenomena. I feel this is sensible because nature can possibly be more absolute than any formal framework we can construct to analyze it. How and when is it OK to sidestep maths and not miss out on a crucial mystery in physics?
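For concreteness, the two roots being discussed can be evaluated directly; the electron rest energy below is the textbook 0.511 MeV, while the momentum value is arbitrary:

```python
import math

mc2 = 0.511   # electron rest energy in MeV (textbook value)
pc = 1.0      # an arbitrary momentum, expressed as pc in MeV

E = math.sqrt(pc**2 + mc2**2)
print(+E)     # ~ +1.123 MeV, the root we usually keep
print(-E)     # the equally valid negative root the question worries about
```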
share|cite|improve this question
Maths can never be sidestepped in physics, unless one only cares about approximate guesses, sloppy arguments, psychological methods to remember some patterns, or presentation. Whenever we need accurate analyses or descriptions of anything in physics, maths is totally essential. ... I find your particular physics questions that are hiding behind your question confusing, and it could be exactly because you don't really take maths seriously. All those things have very sharp and clear answers but accurate maths is needed for them. – Luboš Motl Mar 17 '11 at 7:51
And by the way, a wave function $\psi$ obeying a second-order, rather than first-order, equation could never be interpreted as a probability amplitude because $\int |\psi|^2$ could never be conserved. But it must be conserved for the total probability of all possible answers to remain at 100 percent. It follows that the objects entering second-order equations are not wave functions but fields - either classical fields or quantum fields. Another way to see it is to notice that the 2nd order equations universally possess unphysical negative-energy solutions. Schr's equation has to be 1st order. – Luboš Motl Mar 17 '11 at 7:52
In your EDIT, you refer to the negative frequency solutions of the KG equation as "negative energy". In classical Physics, negative frequency components contribute positively to the energy; quantum field theory uses algebraic methods to ensure that what could be called negative frequency components also in the QFT context contribute positively to the energy. The language that is used fails to do proper justice to the mathematics, IMO. – Peter Morgan Mar 17 '11 at 12:50
@Peter Morgan, I've never really understood that argument that negative energy solutions happen to be positive energy solutions with opposite whatever; it feels like side-stepping the fact that the math gives you negative energy solutions by introducing a magical factor of -1 for them. The argument that negative energy solutions are not observed because we don't see electrons decaying to those assumes that negative solutions would be more stable, but how are we sure negative solutions aren't actually less stable or equivalent? Is it a thermodynamical argument? – lurscher Mar 17 '11 at 15:44
@lurscher Sorry to say that I think it's a long, unconventional story. My Answer below points to a small part of what I think about this. The stability of a system could be ensured by other conserved quantities, in which case a lower bound for eigenvalues of the infinitesimal generator of translations would not be necessary. Stability against what is also a question. Anyway, it's not obvious enough (for it to be a universally maintained axiom) that energy bounded below is either necessary or sufficient to ensure stability, whatever that might be in obvious axiomatic terms. – Peter Morgan Mar 17 '11 at 16:38
up vote 7 down vote accepted
That's the Klein-Gordon equation, which applies to scalar fields. For fermionic fields, the appropriate relativistic equation is the Dirac equation, but that was only discovered by Dirac years after Schrödinger discovered his nonrelativistic equation. The nonrelativistic Schrödinger equation is a lot easier to solve too.
The relativistic equations admit negative energy solutions. For fermions, that was only resolved by Dirac much later with his theory of the Dirac sea. For bosons, the issue was resolved by "second quantization".
The problem with negative energy solutions is the lack of stability. A positive energy electron can radiate photons, and decay into a negative energy state, if negative energy states do exist.
share|cite|improve this answer
Just to be sure, the Klein-Gordon equation was discovered before the non-relativistic Schrödinger equation - and it was discovered by Schrödinger himself who was efficient enough and realized that the KG equation disagreed with the Hydrogen atom. – Luboš Motl Mar 17 '11 at 7:48
"A positive energy electron can radiate photons, and decay into a negative energy state, if negative energy states do exist" only if all other conserved discrete and continuous quantities are conserved. – Peter Morgan Mar 17 '11 at 12:47
Your question just took a huge leap sideways. Probably into Soft-question. You don't "sidestep maths" in Physics. You introduce "different math" and look carefully to see whether it fits the experimental data better. People generally pick a particular mathematical structure and try to characterize its differences from the existing best theories, which for the best alternatives takes people decades. There are other issues, such as whether your different maths is more tractable, whether people think it's beautiful, whether it gives a better explanation, whether it suggests other interesting maths, whether it suggests interesting engineering, whether it's simple enough for it to be used for engineering. The wish-list for a successful theory in Physics is quite long and not very articulated. Philosophy of Physics tries to say something about the process and the requirements for theory acceptance, which I've found it helpful to read but ultimately rather unsatisfying. "miss[ing] out a crucial mystery in physics" would be bad, but it's arguably the case that if it's not Mathematics it's not Physics, which in the end of hard practical use will be because if it's not mathematics you'll be hard pressed to do serious quantitative engineering.
For your original question, I've been pursuing why negative frequency components are so bad for about 5 years. If you feel like wasting your time, by all means look at my published papers (you can find them through my SE links) and at my Questions on SE, all of which revolve around negative frequency components (although I think you won't see very clearly why they do in many cases, even if you know more about QFT than your Question reveals). I don't recommend it. I can't at this stage of my research give you a concise Answer to your original Question.
share|cite|improve this answer
It seems to me that the questioner is asking about Quantum Mechanics, not Quantum Field Theory. So what one can or cannot do in QFT is evading the question.
The short answer to the question as posed is « Yes, it is the wave equation.»
As prof. Motl pointed out, it was discovered by Schroedinger first, following exactly the reasoning the OP presents. It does not describe the electron, but it does describe one kind of meson, and so it has to be said that it agrees with experiment. I emphasise that this is qua relativistic one-particle QM equation, not second-quantised. Both Pauli's lectures in wave mechanics and Greiner's Relativistic Quantum Mechanics treat the K.-G. equation at length as a one-particle relativistic wave equation. Furthermore, the negative energy states can be eliminated by taking the positive square root: $$\sqrt{-\nabla ^2 + m^2} \psi = i{\partial \over\partial t}\psi.$$ Every solution of this equation is a solution of the original K.-G. equation, so if the latter is physical, so is this one.
Now we have an equation that agrees with experiment, does not need any fiddling with the maths, but does not allow the Born-rule probability interpretation of the non-relativistic Schroedinger equation. What one says about this depends on whether one thinks philosophy, the Born interpretation, should trump experimental data, namely the « mesonic hydrogen » energy levels, or the other way round....
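As a side note on how such a square-root operator can be handled at all: in momentum space it is just multiplication by $\sqrt{k^2+m^2}$, which also makes its non-locality easy to see numerically. A toy sketch (arbitrary units, made-up wave packet):

```python
import numpy as np

# In momentum space the operator is just multiplication by sqrt(k^2 + m^2).
n, L, m = 512, 40.0, 1.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])

psi = np.exp(-x**2)                                      # localized wave packet
Hpsi = np.fft.ifft(np.sqrt(k**2 + m**2) * np.fft.fft(psi)).real

# A few widths away from the packet, psi is essentially zero, yet H psi is not:
# the square-root operator has an exponentially decaying but non-zero tail there.
i = np.argmin(np.abs(x - 5.0))
print(psi[i], Hpsi[i])
```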
share|cite|improve this answer
The square root equation you wrote down admits a Born rule interpretation -- it is just the one-particle Fock space evolution equation. – Ron Maimon Dec 25 '11 at 7:32
The $\sqrt{-\nabla^2+m^2}$ operator is non-local, which is usually taken as something of a strike against it, right? I don't think this necessarily kills this approach, because the ways in which different aspects of QM might be local or non-local is a very long running saga, but it does give a certain pause. In contrast, usually taken to be better, the Dirac operator is local. I imagine you have a definite response to this particular cavil, Joseph, and I'm curious what it is. – Peter Morgan Dec 25 '11 at 16:21
Hi, Peter. Well, every solution of this non-local eq. is also a solution of the original K.G. eq., and it is only the solutions which are physical, the linguistic form of the equation has no physical significance. So one has to criticise the original eq. too, if there is something wrong with the solutions of the « positive energy K.G. eq.» My question is, what does « non-locality » mean as far as a given, concrete, solution is concerned? And then, after all, there is the experimental support for the eq., ... – joseph f. johnson Dec 25 '11 at 18:10
You can throw away the negative energy solution if you're just doing relativity and not QM, just like in the examples you mentioned. But if you add QM you can no longer do that: remember, the energy spectrum consists of the eigenvalues of the Hamiltonian (operator), so you just can't throw away some eigenvalues without also throwing away the linear algebra the theory is based upon.
So no, it is never ok to sidestep math. When you eliminate negative roots of quadratic equations in classical physics you're not sidestepping math; after all, you need to apply the rules of algebra to get the solutions. You're just applying physical considerations to the solutions to get rid of (physically) spurious ones that can be ignored without altering the mathematical structure of the theory. In the RQM case the negative energy solutions are unphysical but you can't just ignore them, you have to deal with them.
share|cite|improve this answer
If I'm not mistaken, people have shown that in a scattering problem, even if we start with a wave purely made of positive energies, after the scattering there will be negative components popping up, so it does not make sense to simply throw away the negative energy states. But I can't find the reference now.
share|cite|improve this answer
This is true for local coupling only, you can make up artificial coupling for the square-root equation which allows scattering. – Ron Maimon Sep 3 '12 at 2:57
454a072db0e92bad |
Vector spaces
1. Jul 30, 2005 #1
What is the reason behind choosing linear vector spaces to represent the state of a system? Why is it convenient? And why do we actually need linearity?
3. Jul 30, 2005 #2
At the beginning there were the natural numbers (surely to make money), thereafter someone introduced addition (surely to win more money). From addition one deduced multiplication for efficiency (3+3+...+3 takes more time to calculate than 100*3: productivity). Linearity and business were born :biggrin:
4. Jul 30, 2005 #3
The Schrödinger equation is a linear differential equation, so the sum of two solutions is still a solution. Therefore, quantum states that are solutions of the SE form a vector space (actually a Hilbert space).
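A tiny numerical illustration of that point, with a random made-up Hamiltonian (nothing physical is intended): evolving a superposition gives the same result as superposing the evolved solutions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" and the unitary propagator for a small step.
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (H + H.conj().T) / 2
U = expm(-1j * 0.1 * H)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
a, b = 0.3 + 0.2j, -0.7 + 0.1j

evolved_superposition = U @ (a * psi + b * phi)
superposed_evolutions = a * (U @ psi) + b * (U @ phi)
print(np.allclose(evolved_superposition, superposed_evolutions))  # True
```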
5. Jul 30, 2005 #4
We exist in a multi-dimensional world/universe, and vector analysis provides a convenient tool for dealing with n-dimensional models/state spaces.
6. Jul 30, 2005 #5
Good Lord how I love this forum! So many good questions and answers here.
The best answer to your question, imho, is this - We chose math to describe Nature. Nature chose to be linear. Why? Nobody really knows. It's simply rooted in the axioms of quantum mechanics.
7. Jul 30, 2005 #6
It's true.
It's true
If you give me permission, I think nature also shows nonlinear phenomena and furthermore chaotic phenomena. I think it is not nature but human laziness, or a human principle of lowest cost, that explains the historic introduction of linear ways of thinking.
One more time I will try, just for fun, the contradiction. If you consider that the principle of uncertainty (Heisenberg) is also rooted in the axioms of quantum mechanics, and if you consider any trinomial in a variable (a.x² + b.x + c = 0), you know that the latter has 0, 1 or 2 solutions depending on the delta. The 2 solutions [delta non zero; x = (-b/2a) + or - ...] are in some way centred on the double one [delta = 0; x = (-b/2a)], and if the coefficient c is not constant in nature, then the 2 different solutions can be more or less distant from the double one; there is a kind of amplitude, of imprecision, in the solutions... Don't you think that this is a mathematical argument to introduce at least bilinearity into quantum mechanics?
As said, just :rolleyes: for fun. Blackforest.
8. Jul 30, 2005 #7
This argument assumes the solutions of the quadratic must be real. Quantum Mechanics solutions are all complex numbers (until the state vector is acted on by a Hermitian operator with real eigenvalues). So the quadratic always has two solutions, perhaps a repeated one counted twice. Fundamental Theorem of Algebra: Over the complex field every polynomial of degree n has n roots, counting repeated ones as many times as they are repeated.
9. Jul 30, 2005 #8
I disagree with that. WE choose linear models because they are far easier than non-linear models. In fact, one can argue that "modern" physics, quantum physics and relativity consist in replacing linear models with more accurate but harder, slightly non-linear, models.
10. Jul 30, 2005 #9
I'm not certain whether I agree/disagree with that ... yet. Please give an example of how you'd choose the axioms on QM such that they are stated in terms of a non-linear model.
11. Jul 30, 2005 #10
I was speaking only of QM.
If you mean that they can be derived from the axioms then I agree.
"the delta"? What do you mean by this? A measure of the associated parabola perhaps?
Can you give me an example of what this parabola represents?
Sometimes you have to restrict the mathematical solutions to those which describe the physical phenomena. Therefore, rather than there being some sort of "bilinearity" that you suspect, it may be that nature does not allow something you assumed to be true before you got to that point. But I'm not all that clear on what you're talking about with this parabola thing above.
If you're refering to linearity then that equation you gave is irrelevant since it is the operator x which is supposed to be linear and not any equation such as the one you gave.
Last edited: Jul 30, 2005
12. Jul 30, 2005 #11
His "delta" is the discriminant [tex]\sqrt{b^2 - 4ab}[/tex], often denoted [tex]\Delta[/tex].
13. Jul 31, 2005 #12
Yes, but Sorry sir : [tex]\sqrt{b^2 - 4ac}[/tex]
14. Jul 31, 2005 #13
Oh, I was only trying to explain how one could, perhaps, introduce something other than linearity, for example bilinearity, into our way of thinking. "x" was only a variable without any condition, not necessarily an operator. It was effectively just a kind of "parable". The counter-argument of selfAdjoint is a bad point for me, except if one could find some real situations, as you yourself suggest for the reality of things, where the n solutions are automatically centred on one of them ... I repeat: it was a try just for fun and to show that one can perhaps develop other ways of thinking. Best regards
15. Jul 31, 2005 #14
... Said in other words, I was trying to connect the dispersion of the (n) solutions with the uncertainty concerning the value of the variable x (which is a central question in QM). For the bilinearity (x is a solution of a trinomial as given above) the idea appears much more clearly than for the case where x is a solution of a polynomial of degree n.
16. Jul 31, 2005 #15
You were discussing the HUP right? The terms which appear in that expression must have an associated operator to even be physically meaningful though. If you're speaking about a physical observable then it is an axiom that the operators corresponding to all physical observables are Hermitian operators. These operators are linear. There are non-linear equations in all fields of physics that I know of. But the one that you're speaking of has no meaning to me as far as introducing non-linearity into QM since that quite literally means to me that you're introducing non-linear operators. Otherwise your comments have no meaning to me and you'll have to "dumb it down" for me. :tongue:
Again, you're speaking of the "reality of things." But this can only mean that you're referring to something which has an operator. All physically observable quantities in QM must have a corresponding Hermitian operator - That's an axiom.
17. Jul 31, 2005 #16
The reasons why I was introducing:
a) this discussion
b) the equation of the parabola
are easy to understand.
a) to develop the initial question of preeto283;
b) because
f(t) = (1/2)·acceleration·(time)² + speed·time + initial position (at t = 0)
is a meaningful non-linear equation in physics depending on the time.
And now I am the pupil and if you agree I ask: Is there an operator associated to the time in QM? Which one? How does it work?
Concerning Heisenberg's principle, you are totally right; it is not really an axiom because it can be mathematically demonstrated starting with considerations depending on the dispersion of variables (another delta of course).
18. Jul 31, 2005 #17
Maybe the ideal description of nature must be non-linear, but non-linear equations are hard to solve. Is that because the states in space-time with random variables or random parameters with hidden indexes are translated into the abstract linear space of wave-functions?
19. Jul 31, 2005 #18
Firstly, let's remember that QM is a model. So we choose how it works, and we choose linearity because it works reasonably well enough and makes our lives much easier.
But if we look at the fields that make up nature, they couple to each other in highly non-linear ways; just looking at the Lagrangians for interactions between fields will tell you that.
Furthermore, even the classical gravitational field is non-linear. Einstein's field equations are non-linear; again just looking at the equations (and understanding what the terms are) will tell you the same thing.
So nature, as far as we can tell is non-linear. QM is not nature -- it is a convenient mathematical model, invented by humans, that describes nature remarkably well.
But there is also added confusion here. The original poster is speaking of the linearity of the space of states. Well what is a state? What we define it to be. So of course we can say that the states form a linear complex vector space if we want to (equipped with an inner product). But fundamental equations in QM can still be non-linear - we can easily have non-linear terms in the Schrodinger equation, for example, depending on how we choose to describe the energy of the system.
20. Jul 31, 2005 #19
The drunk who loses his wallet at night always starts looking for it under the lamppost where the light is good.
383de0c0904d6015 |
I am having trouble recognizing all the approximations that are used in computational chemistry. I would like to start an answer list (similar to the list of resources for learning chemistry) that addresses this question.
I will try to answer this question myself. I am hoping that anyone who is knowledgeable in the topic will correct and/or contribute. I am planning to start from general (e.g. Born Oppenheimer, LCAO) to specific (e.g. pseudopotential, functional). I am also planning to include why this approximation is necessary.
Asking for a comprehensive list makes this question impossible to answer within reasonable limits, as it effectively asks for a list that includes all the tricks used, from integration and basis set approximations, through diagonalization, to whatever is included in every single QChem software that is meant for production and not just simple demonstrations. – Greg, Sep 25 '16 at 17:34
@Greg I will try to avoid the math. I know that it is important and that it is sometimes difficult to separate the physics/chemistry from math. Otherwise it is too large, as you say. For example, I will not delve into how Frank Boys figured out that STOs could be represented with GTOs. Nor will I explain the different types of diagonalization (conjugate gradient, Davidson, etc.). I will try not to go into basis sets either unless there is a physical/chemical reasoning. Additionally, the reason why some basis sets are better or worse can be illustrated by first showing what has been sheared away. – Sep 26 '16 at 2:26
Overlooked in answers here is the reality of classical simulation, encompassing the huge fields of Monte Carlo and molecular dynamics, which seeks to define mathematical fits to highly accurate quantum phenomena, and thereby runs a lot faster than solving the Schrodinger equation. There is no fundamental theory that proves this can work in all cases, but it has doubtless been useful: github.com/khavernathy/mcmd – khaverim, Oct 17 '17 at 18:46
@khaverim I suspect that this bias is primarily due to the site's users; most of us use or develop only the electronic structure theory side of things, so we have far fewer experts in MM/MD/MC, which means many related questions go completely unanswered. – Oct 17 '17 at 20:40
@khaverim if you have anything to add, you can always post it as a new answer. Even if you don't think you have a full answer, you could start a community wiki answer that could help this post become a more comprehensive reference for other users. – Tyberius, Oct 17 '17 at 23:27
The goal of computational chemistry is to obtain the properties of a system. In principle, this is done by solving the Dirac equation.
Treating particles as point particles with mass
In most computational software, particles are treated as points with some mass. Neutrons and protons may be lumped into a nucleus. However, this is not true in all cases as pointed out in this question: Does computational chemistry include neutrons in ground state calculations? .
Electrons, protons, and neutrons are not simply point particles with some mass and charge. The theory and forces can get quite complex. I haven't been able to find journals that address the effect of this approximation. But I will try to post something if I come across it.
Neglecting Relativistic Effects
Wien2k has a nice summary of relativistic effects that need to be considered [8]. The relativistic effects that need to be included are:
1. Mass-velocity correction
2. The Darwin term
3. Spin-orbit coupling
4. Indirect relativistic effect
However, relativistic effects usually become important for elements further down the fifth row in the periodic table. This is because relativistic effects are dependent on nuclear charge. The velocity of the electron as well as spin orbit coupling increases as the nuclear charge increases.
This does not mean that relativistic effects do not affect "light" atoms. A good example that shows this is the sodium doublet which is a result of spin-orbit coupling. Nevertheless, solving Schrodinger's equation is enough in many cases.
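A crude way to see the scaling with nuclear charge: in a hydrogen-like picture (ignoring screening entirely), a 1s electron moves at roughly v ≈ Zαc, so the relativistic factor grows quickly down the periodic table; gold is the usual poster child. A quick estimate:

```python
alpha = 1 / 137.036      # fine-structure constant

for Z, symbol in [(6, "C"), (26, "Fe"), (79, "Au")]:
    beta = Z * alpha                   # v/c for a 1s electron in this crude model
    gamma = 1 / (1 - beta**2) ** 0.5   # corresponding relativistic mass increase
    print(f"{symbol}: v/c ~ {beta:.2f}, gamma ~ {gamma:.2f}")
```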
Most computational chemistry software have the option of scalar relativistic calculations. The scalar relativistic technique was developed by Koelling and Harmon. These are calculations that include the Darwin term and the mass velocity correction. According to Koelling and Harmon's paper:
We present a technique for the reduction of the Dirac equation which initially omits the spin-orbit interaction (thus keeping spin as a good quantum number), but retains all other relativistic kinematic effects such as mass-velocity, Darwin, and higher order terms.[9]
Some offer fully relativistic calculations, which include all (or most) relativistic effects. But these are rarer to find, and are only available for certain cases.
So why are relativistic effects neglected? Well, relativistic effects are small for light molecules and relativistic calculations are expensive:
This is so because relativistic calculations need self-consistent solutions of about twice as many states as non-relativistic ones. [10]
Thus, scalar relativistic calculations offer a nice middle ground between efficiency and accuracy.
The Born-Oppenheimer Approximation
I am not sure what motivated Born and Oppenheimer to use this approximation. The seminal paper is in German.[1] It appears that the motivation was to simplify the Hamiltonian. According to the introductory course at MIT,[2]
it allows one to compute the electronic structure of a molecule without saying anything about the quantum mechanics of the nuclei
And according to Wikipedia,[3] it reduces the amount of computations:
Like many approximations, it does not hold true in all scenarios. Cases where the Born-Oppenheimer approximation fails are:[4]
ion-molecule, charge transfer, and other reactions involving obvious electronic curve crossings
Qualitatively, the Born-Oppenheimer approximation says that the nuclei are so slow moving that we can assume them to be fixed when describing the behavior of electrons. Mathematically(?), the Born-Oppenheimer approximation allows us to treat the electrons and nuclei separately. This does not imply that the nuclei and electrons are independent of each other. In other words, it does not mean that the nuclei are not influenced by the motion of electrons. The nuclei still feel the motion of the electrons. In addition, the Born-Oppenheimer approximation does not say that the nuclei do not move. It only means that when describing the motion of electrons, we assume that the nuclei are fixed.
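As a sketch of what this buys in practice: freeze the nuclei, solve the electronic problem at that fixed geometry, and repeat over a grid of geometries to map out a potential energy surface. In the toy code below the function `electronic_energy` is a hypothetical stand-in (a Morse potential with invented parameters) for what would really be a full electronic-structure calculation at each geometry:

```python
import numpy as np

def electronic_energy(R):
    """Stand-in for solving the electronic problem at fixed internuclear distance R.
    In a real code this would be an HF/DFT/post-HF solve; here a Morse potential
    with invented parameters (atomic-style units) just plays that role."""
    De, a, Re = 0.17, 1.0, 1.4
    return De * (1.0 - np.exp(-a * (R - Re)))**2 - De

# Born-Oppenheimer workflow: one electronic solve per frozen nuclear geometry.
grid = np.linspace(0.8, 4.0, 200)
pes = np.array([electronic_energy(R) for R in grid])
print("approximate equilibrium bond length:", grid[pes.argmin()])
```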
No Analytical Solution to Dirac/Schrodinger Equation
Unfortunately there is no analytic solution to the Dirac equation for any atom that has more than one electron, even after the Born-Oppenheimer approximation (a list of quantum-mechanical problems that have an analytical solution is available on Wikipedia[5]). Many texts state that the Schrödinger equation is not exactly solvable for more than one electron because of the Coulomb repulsion between electrons.[6]
However, this is not entirely true. A counterargument is Hooke's atom. The Hamiltonian for Hooke's atom has a Coulomb electron-electron repulsion term. However, it has an exact solution for more than one electron under certain circumstances.[7]
The true reason as to why the Schrödinger equation is not solvable for multi-electron atoms is due to the fact that the motion of electrons cannot be decoupled from each other. In other words, the Hamiltonian is not separable for a multi electron system. If we were to get rid of the electron-electron Coulomb repulsion, the motion of the electrons can be decoupled. This may be the reason as to why the electron-electron Coulomb repulsion (a.k.a. electron correlation) is used as the reason why the Schrödinger equation is not exactly solvable.
From non-interacting to the real thing
Since the Dirac equation cannot be solved analytically, we must make models that are solvable and add approximations to it. These approximations are further refined to get more accurate results.
The most simple model (and the foundation for computational chemistry) is the system of non-interacting electrons. As the name suggests, the electrons do not interact with other electrons. This allows us to write the Hamiltonian for all electrons as the sum of one-electron Hamiltonians.
Each one-electron Hamiltonian consists of a kinetic energy term and a potential energy term.
$$\mathcal{H}=\sum^N_i \left(-\frac{\hbar^2}{2m_i}\nabla_i^2+V_i\right)$$ The solutions for the non-interacting system of electrons are analytical, and they give a starting point for other calculations. Hartree-Fock, DFT, and solid state physics use this simplified model.
Note: it appears that several terms say something similar to this, e.g. the independent electron approximation and the central field approximation. I'll dig into the literature to see what the differences between these terms are.
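As a toy illustration of this additivity (a minimal sketch of my own, not taken from any of the references), consider non-interacting electrons in a one-dimensional box: each one-electron level has energy $E_n = n^2\pi^2\hbar^2/(2mL^2)$, and the total energy is simply the sum over occupied levels, with two electrons per level.

```python
import math

# Toy model: non-interacting electrons in a 1D box of length L (atomic units, hbar = m = 1).
# One-electron energies: E_n = n^2 * pi^2 / (2 L^2), n = 1, 2, 3, ...
def total_energy(n_electrons, L=1.0):
    energy = 0.0
    level = 1
    remaining = n_electrons
    while remaining > 0:
        occ = min(2, remaining)        # Pauli principle: at most 2 electrons per level
        energy += occ * (level**2 * math.pi**2) / (2.0 * L**2)
        remaining -= occ
        level += 1
    return energy

print(total_energy(6))   # ground-state energy of 6 non-interacting electrons in the box
```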
Variations of the non-interacting system of electrons
There are several variations of non-interacting electron systems. In all these, the electrons do not interact with each other. What sets them apart from each other is the potential energy term of the Hamiltonian $V_i$. This potential energy term is often called the effective potential.
Free electron model
This model is used as the starting point in solid state physics. The free electron model describes the behavior of valence electrons in a metal or semiconductor. Here the potential $V_i$ is equal to 0 for all electrons: an electron does not feel the effect of the other electrons or of the nuclei. The wavefunctions of this model are plane waves.
Nearly free electron model
The nearly free electron model adds a weak periodic potential. In other words, $$V_i(\mathbf{r})=V_i(\mathbf{r}+\mathbf{R})$$ for any lattice vector $\mathbf{R}$. It is used to describe periodic systems such as ideal crystals. The wavefunctions of the nearly free electron model are Bloch waves: plane waves multiplied by a periodic function.
Hartree method
In the Hartree method, the potential accounts for repulsion between electrons and attraction to the nuclei. In this model,
Hartree assumed that each electron moves in the averaged potential of the electrostatic interactions with surrounding electrons [11]
Hartree replaced the electron-electron interaction with an effective potential that depends only on the coordinates of the $i$th electron. This effective potential describes an electron interacting with an electron cloud.
From one-electron orbitals to the multielectron orbital
As mentioned above, the total Hamiltonian can be approximated as the sum of one-electron Hamiltonians for non-interacting electrons. However, the wavefunction has to satisfy antisymmetry.
Exact exchange
Electron correlation
Some define electron correlation as everything that the Hartree-Fock method leaves out.
Electron correlation: DFT edition
Electron correlation: Post-Hartree-Fock edition
Periodic systems and pseudopotentials
1. M. Born and R. Oppenheimer, Ann. Phys. 1927, 389, 457–484.
doi: 10.1002/andp.19273892002
2. Born Oppenheimer Approximation. Open Courseware MIT:Introductory Quantum Mechanics. Fall 2005. Section 12 Lecture. (https://ocw.mit.edu/courses/chemistry/5-73-introductory-quantum-mechanics-i-fall-2005/lecture-notes/sec12.pdf) (https://ocw.mit.edu/courses/chemistry/5-73-introductory-quantum-mechanics-i-fall-2005/)
3. Born-Oppenheimer approximation. Wikipedia (https://en.wikipedia.org/wiki/Born%E2%80%93Oppenheimer_approximation)
4. L. J. Butler, Annu. Rev. Phys. Chem. 1998, 49, 125-71.
PMID: 15012427 doi: 10.1146/annurev.physchem.49.1.125
5. List of quantum-mechanical systems with analytical solutions. Wikipedia
6. LibreTexts: 9.1: The Schrödinger Equation For Multi-Electron Atoms
7. Hooke's atom. Wikipedia
8. Summary of relativistic effects. Wien2k (http://www.wien2k.at/reg_user/textbooks/WIEN2k_lecture-notes_2013/Relativity-NCM.pdf)
9. A technique for relativistic spin-polarised calculations. Journal of Physics C: Solid State Physics, Volume 10, Number 16 (http://iopscience.iop.org/article/10.1088/0022-3719/10/16/019/meta)
10. The Scalar Relativistic Approximation. Takeda, T. Z Physik B (1978) 32: 43. doi:10.1007/BF01322185 (http://link.springer.com/article/10.1007/BF01322185)
11. Density Functional Theory in Quantum Chemistry. Tsuneda, T. 2014. ISBN: 978-4-431-54824-9. Page 36.
• 2
@Rodriguez I will look into that. Also, this is an ongoing answer. It is still not complete! Nevertheless I would appreciate any feedback. – Sep 25 '16 at 16:52
• 3
1. The Schrödinger equation is an approximation; better go with Dirac to have relativity (and spin). 2. Born-Oppenheimer is a two-step process. First is separation of the total wavefunction into a product of nuclear and electronic parts, second is setting the nuclear kinetic energy to zero. (see en.wikipedia.org/wiki/Born%E2%80%93Oppenheimer_approximation)
– ssavec
Sep 25 '16 at 18:55
• 2
Please use manual markup for links, i.e. [link](http://...). It looks better, is easier to read, and will not lead into nirvana if interpreted wrong by the page. Just for fun, you can check what I mean by clicking the links in revision 8. – Sep 26 '16 at 8:34
• 3
1. As @ssavec already mentioned, the first usual approximation is neglecting relativistic effects. 2. Big mistake in the very first sentence: the goal is to obtain properties of a molecular system. Then, depending on the formalism used (MM, WFT, DFT, ...) you are looking for a mathematical entity that can somehow spit out the values of the properties. And this entity is not always the wave function. 3. Don't get me wrong, but if you don't quite understand the purpose of arguably the central approximation in present-day QC (the BO one), it might be a bit too early for you to answer the question.
– Wildcat
Sep 26 '16 at 10:55
• 2
@QuantumAMERICCINO There is some stuff I would explain differently. I think I will write something about the Variational Principle and (Post-)Hartree-Fock as a new answer. Maybe even some DFT. Then we can see how to connect it best to your content.
– Feodoran
May 5 '17 at 8:44
Variational Principle
The Variational Principle states that any approximate wave function will have a total energy at least as high as that of the true wave function $\Psi$. The total energy is calculated as the expectation value of the Hamiltonian, $E=\langle\Psi|\hat H|\Psi\rangle$. Thus we can set up a trial wave function $\tilde\Psi$ and vary its parameters until its corresponding energy $\tilde E=\langle\tilde\Psi|\hat H|\tilde\Psi\rangle$ reaches a minimum. This minimum provides an upper bound for the true (exact) energy ($\tilde E \ge E$) as well as the best possible approximation to the wave function within the chosen parametrization.
When comparing variational methods we can directly judge which result is better by comparing their total energies. Examples of variational methods are Hartree-Fock and Configuration Interaction. Non-variational methods are Coupled Cluster, Density Functional Theory and Perturbation Theory.
Note that the total energy by itself has little physical meaning and depends on various numerical parameters. Of practical interest are always differences between total energies, for example binding energies.
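A quick numerical illustration of the variational bound (a minimal sketch of my own; the random symmetric matrix simply stands in for a Hamiltonian represented in some finite basis):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20
A = rng.standard_normal((n, n))
H = 0.5 * (A + A.T)                     # a random symmetric "Hamiltonian" matrix

E_exact = np.linalg.eigvalsh(H)[0]      # exact ground-state energy in this basis

# Any normalized trial vector gives an energy expectation value >= E_exact.
for _ in range(5):
    psi = rng.standard_normal(n)
    psi /= np.linalg.norm(psi)
    E_trial = psi @ H @ psi
    print(f"E_trial = {E_trial: .4f}  >=  E_exact = {E_exact: .4f}")
```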
The main difficulty in solving the Schrödinger equation is the Coulomb interaction between electrons: there is no general analytic solution for more than one electron. However, for the non-interacting system, which ignores the electron-electron interaction completely, one can employ separation of variables for the electronic coordinates. This allows us to express the total electronic wave function as a product of one-electron wave functions (orbitals). This is called the Hartree product, and when one additionally accounts for the antisymmetry required by the Pauli principle, it leads to the Slater determinant.
In the Hartree-Fock method the Slater determinant, which is exact only for the non-interacting system, is combined with the Hamiltonian including the electron-electron interaction term. Applying the Variational Principle then leads to the Hartree-Fock equations, which may be solved numerically. The physical interpretation of this approximation is that the electrons feel each other only in an averaged way, hence the name mean-field approximation.
For the numerical solution one needs to diagonalize the Fock matrix, obtaining its eigenvalues and eigenvectors. Since the Fock matrix depends on its own eigenvectors, this is solved iteratively: after choosing an initial guess, the result is used to update the Fock matrix, which is then diagonalized again. Each iteration yields an improved result. This is known as the self-consistent field (SCF) method.
The SCF algorithm scales with $M^3$, where $M$ is the number of basis functions. However, the preceding calculation of all required one- and two-electron integrals is usually more time consuming than the actual HF calculation.
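The following is a bare-bones sketch of such an SCF loop (my own illustration; the integral arrays `hcore`, `S` and `eri` are assumed to have been computed elsewhere, and production codes add convergence acceleration such as DIIS):

```python
import numpy as np
from scipy.linalg import eigh

def rhf_scf(hcore, S, eri, nocc, max_iter=50, tol=1e-8):
    """Minimal closed-shell SCF loop.

    hcore : (M, M) one-electron integrals (kinetic + nuclear attraction)
    S     : (M, M) overlap matrix
    eri   : (M, M, M, M) two-electron integrals in Mulliken order (mu nu|kappa lambda)
    nocc  : number of doubly occupied orbitals
    """
    D = np.zeros_like(hcore)                        # initial guess: zero density (core guess)
    E_old = 0.0
    for _ in range(max_iter):
        J = np.einsum('mnkl,kl->mn', eri, D)        # Coulomb matrix
        K = np.einsum('mknl,kl->mn', eri, D)        # exchange matrix
        F = hcore + 2.0 * J - K                     # closed-shell Fock matrix
        eps, C = eigh(F, S)                         # generalized eigenproblem F C = S C eps
        C_occ = C[:, :nocc]
        D = C_occ @ C_occ.T                         # density (per spin) from occupied MOs
        E = np.einsum('mn,mn->', D, hcore + F)      # electronic energy
        if abs(E - E_old) < tol:                    # self-consistency reached
            break
        E_old = E
    return E, eps, C
```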
Electron Correlation & Post-Hartree-Fock
The correlation energy is defined as the difference between the Hartree-Fock energy and the exact energy. Electron Correlation is the error made in the Hartree-Fock approximation, since the Ansatz made for the electronic wave function does not reflect its true form. The numerically exact solution can be obtained with the Full Configuration Interaction method.
Configuration Interaction
The electronic wave function, like any other wave function in quantum mechanics, can be expanded in an arbitrary basis set. In Configuration Interaction it is recognized that the electron configurations, which can be created based on the molecular orbitals obtained from a Hartree-Fock calculation, can be used as such a basis.
Note that this basis of electron configurations is a different one than the one-electron basis set which is used to represent the molecular orbitals.
A Configuration Interaction calculation is thus preceded by a Hartree-Fock calculation. The Slater determinant used in the Hartree-Fock calculation is the first, and usually most important, configuration. Further excited determinants (configurations) are generated by promoting electrons from occupied orbitals to virtual ones (orbitals unoccupied in the HF configuration). Applying the Variational Principle again, one can calculate the linear combination of such configurations that minimizes the electronic energy.
Computationally one needs to set up the matrix representation of the Hamiltonian (in the chosen basis of configurations) and diagonalize it. In contrast to the Fock matrix it does not depend on its own solution, so no iterative procedure is required here. However, this is commonly sped up by iterative eigensolvers, which are much less time consuming than an exact diagonalization.
In total $\binom{M}{N}$ configurations can be generated, where $M$ is the number of orbitals and $N$ is the number of electrons. Since this number increases exponentially, only the smallest molecules can be calculated even on the fastest supercomputers (of course this also depends on various other numerical parameters); see the short numerical illustration after the list below. Therefore different truncation schemes exist that include only some of the configurations.
• Full CI includes all possible configurations and is the numerically exact limit.
• Hartree-Fock, from the perspective of CI, is known as the one-determinant approximation.
• CIS includes only configurations where 1 electron is excited with respect to the HF configuration (Single excitations). Due to Brillouin's theorem, no improvement of the HF ground state energy is made, but rough approximations for excited states can be obtained.
• CID includes only configurations where 2 electrons are excited with respect to the HF configuration (Double excitations). First improvements to the ground state energy.
• CISD: since Single excitations couple with Doubles, and those in turn with the HF ground state, this is an improvement over CID. Furthermore, there are far fewer Single excitations than Double excitations, so computationally CISD carries no considerable additional effort over CID.
• CISDT... The higher the excitation degree the smaller the correction to the electronic energy, but the higher the computational demand.
• CASCI stands for Complete Active Space CI. The active space is a chosen set of orbitals within which all possible configurations are considered. Orbitals with energy below the active space orbitals are (doubly) occupied for all configurations, orbitals above in energy are always left empty.
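To get a feel for the combinatorial growth of the Full CI expansion mentioned above, here is a tiny numerical illustration (counting configurations as $\binom{M}{N}$, with the spin structure ignored for simplicity):

```python
from math import comb

# Number of configurations binom(M, N) for N electrons distributed over M orbitals
for n_el, n_orb in [(2, 8), (10, 40), (20, 80), (40, 160)]:
    print(f"{n_el:3d} electrons in {n_orb:4d} orbitals: {comb(n_orb, n_el):.3e} configurations")
```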
Multi-configurational self-consistent field
The MCSCF approach combines a CI calculation with the HF method in the sense that the coefficients of the chosen configurations (CI optimization) and the molecular orbital coefficients (SCF optimization) are optimized simultaneously. A common example is the CASSCF method.
Coupled Cluster
Coupled Cluster is a reparametrization of the electronic wave function. In the CI approach the wave function can be written as an excitation operator $\hat T$ acting directly on the HF determinant and creating all possible excited configurations. The CC wave function is obtained by having the exponential operator $\exp(\hat T)$ acting on the HF determinant.
The effect is that, for example, CCSD will include not only the Single and Double excitations but also certain Triple and Quadruple excitations, making the method size consistent.
Unfortunately the resulting Coupled Cluster equations are not solved via the Variational Principle, hence the computed total energy may even lie below the exact FCI energy. A truncated CC calculation therefore does not provide an upper bound to the FCI energy; untruncated CC, with excitations up to the number of electrons, is equivalent to FCI. CC is computationally more expensive than a CI calculation with the same excitation operator, but the results are more accurate.
CCSD(T) is considered the gold standard of quantum chemistry; it includes perturbative estimates of the Triple excitations. Its overall scaling is on the order of $N^7$.
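As a practical illustration, a CCSD(T) calculation on a small molecule can be run in a few lines. The sketch below assumes the PySCF package and uses its interface as I recall it, so treat the exact function names as indicative rather than authoritative:

```python
from pyscf import gto, scf, cc

# Water molecule in a small correlation-consistent basis
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="cc-pvdz")

mf = scf.RHF(mol).run()        # Hartree-Fock reference (SCF)
mycc = cc.CCSD(mf).run()       # CCSD correlation treatment on top of HF
e_t = mycc.ccsd_t()            # perturbative (T) triples correction

print("E(HF)      =", mf.e_tot)
print("E(CCSD)    =", mycc.e_tot)
print("E(CCSD(T)) =", mycc.e_tot + e_t)
```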
Multi Reference Methods
Instead of creating excited configurations based only on the HF configuration, one can also use multiple reference configurations. Commonly one starts with an MCSCF wave function and its configurations as reference. In a second step one can then, for example, do an MRCI or MRCC calculation.
This approach is required for strongly correlated systems, where other methods (DFT, CCSD(T)) are failing.
• A. Szabo and N. S. Ostlund. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (Dover Books on Chemistry). Dover Publications, 1996.
• T. Helgaker, P. Jorgensen, and J. Olsen. Molecular Electronic-Structure Theory. 1st ed. New York: Wiley, 2000.
Speeding up two-electron integral evaluation through approximate methods
Remarks on Notation (using Mulliken Notation)
• $\mu,\nu,\kappa,\lambda:$ atomic orbital (AO) basis functions
• $P,Q,R,S:$ auxiliary basis functions
• $(\mu\nu|\kappa\lambda) = \int\int\phi_\mu^*(r_1)\phi_\nu(r_1)\frac{1}{r_{12}} \phi_\kappa^*(r_2) \phi_\lambda(r_2)d\tau_1d\tau_2 = \left<\mu\kappa|\nu\lambda \right>$
• $(\mu\nu|1|P) = (\mu\nu P)$
• $(\mu\nu|g_{12}|P) = (\mu\nu|P)$ where $g_{12}$ is the two-electron interaction, almost always $= \frac{1}{r_{12}}$
• $S_{PQ} = (PQ)$
• $V_{PQ} = (P|Q)$
Resolution of the Identity approximations
Computation of 2-electron 4-centre Integrals $(\mu\nu|\lambda\sigma)$ can be a significant bottleneck in electronic structure calculations.
$\rightarrow$ Idea of RI is to avoid such integrals
Proposed solution: Reexpansion of pair products $|\mu\nu)$ with
$$ |\mu\nu) \approx |\widetilde{\mu\nu}) = \sum_P C_{\mu\nu}^P |P) $$
Where we choose the auxiliary basis set $\{P\}$ such that $\sum_P |P)(P| \approx 1$ (hence the name Resolution of the Identity).
If we approximate the unit operator $\hat{1}$ in the Coulomb metric, $\hat{1} \approx \sum_{PQ} |\chi_P)(P|Q)^{-1} (\chi_Q|$,
the integral $(\mu\nu|\kappa\lambda)$ can then be approximated as $(\mu\nu|\kappa\lambda) \approx \sum_{PQ}(\mu\nu|P)(P|Q)^{-1}(Q|\kappa\lambda)$.
The same formulation is obtained if the integral error $\Delta = (\mu\nu - \widetilde{\mu\nu}\,|\,\kappa\lambda - \widetilde{\kappa\lambda})$ is minimized with respect to the coefficients $C_{\mu\nu}^P$.
This leads to a reduction of computational cost since we only need to compute three-index integrals. These are much faster to compute than four-index integrals, and far fewer three-index integrals have to be computed than four-index integrals.
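The bookkeeping is easy to express with three-index arrays. The sketch below is my own toy illustration, with random placeholder data standing in for the actual integrals $(\mu\nu|P)$ and the Coulomb metric $(P|Q)$:

```python
import numpy as np

M, Naux = 10, 30                                  # AO and auxiliary basis sizes (toy values)
rng = np.random.default_rng(0)

# Placeholder "integrals": three_center[P, m, n] ~ (mn|P), metric[P, Q] ~ (P|Q)
three_center = rng.standard_normal((Naux, M, M))
three_center = 0.5 * (three_center + three_center.transpose(0, 2, 1))   # symmetric in m, n
A = rng.standard_normal((Naux, Naux))
metric = A @ A.T + Naux * np.eye(Naux)            # symmetric positive definite stand-in

# (mn|kl) ~ sum_PQ (mn|P) [(P|Q)^-1]_PQ (Q|kl), built from three-index data only
B = np.einsum('PQ,Qkl->Pkl', np.linalg.inv(metric), three_center)
eri_ri = np.einsum('Pmn,Pkl->mnkl', three_center, B)
print(eri_ri.shape)                               # (M, M, M, M)
```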
The RI approximation is used to accelerate the calculation of the Coulomb operator $\hat{J}$ in the RI-J approximation. Here the electron density is expanded in the auxiliary basis set ($D_{\kappa\lambda}$ is the density matrix):
$$ J_{\mu\nu} = (\mu\nu|\rho) \approx \sum_{PQ} \sum_{\kappa\lambda} (\mu\nu|P)(P|Q)^{-1}(Q|\kappa\lambda)D_{\kappa\lambda} $$
If you implement this in your program in such a way that you evaluate the contractions strictly from right to left, the scaling is brought down from $N^4$ to $N^3$.
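Continuing the toy arrays from the previous sketch (and adding a hypothetical density matrix `D`), the right-to-left contraction order looks like this; each step involves only two- and three-index quantities:

```python
D = rng.standard_normal((M, M))
D = 0.5 * (D + D.T)                               # placeholder density matrix

gamma = np.einsum('Qkl,kl->Q', three_center, D)   # (Q|rho): ~ Naux * M^2 work
c = np.linalg.solve(metric, gamma)                # solve (P|Q) c = gamma instead of inverting
J = np.einsum('Pmn,P->mn', three_center, c)       # J_mn = sum_P (mn|P) c_P: ~ Naux * M^2 work
```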
It can also be used to approximate the Exchange operator $\hat{K}$. Here no reduction of the scaling is possible, but the prefactor is reduced.
RI approximations are essential also for MP2/CC2-R12/F12 methods. There you need the robust expression $(\mu\nu|\kappa\lambda) \approx (\widetilde{\mu\nu|\kappa\lambda}) = (\widetilde{\mu\nu}|\kappa\lambda) + (\mu\nu|\widetilde{\kappa\lambda}) - (\widetilde{\mu\nu}|\widetilde{\kappa\lambda})$ for example for the operators $r_{12}$ and $[\hat{T}_{12},r_{12}]$.
The auxiliary basis sets can in principle be chosen freely, but in practice they have to be fitted for their purpose, meaning there are individual auxiliary basis sets for every operator, often denoted $jbas$ (for pair products $(ij|$) and $cbas$ (for pair products $(ia|$), with $i,j$ being occupied and $a$ being virtual orbitals. I know of no auxiliary basis sets for doubly virtual pair products $(ab|$, so there are certain integrals where RI/DF can't be used, as far as I know.
Cholesky Decomposition
The two-electron integrals $(\mu\nu|\kappa\lambda)$ can be collected into a positive definite Hermitian matrix $V_{\mu\nu,\kappa\lambda}$.
Since this matrix is positive definite it can be decomposed via a Cholesky decomposition, $\mathbf{V} = \mathbf{LL}^{\dagger}$. For a detailed explanation read the original publication by Beebe and Linderberg.
In this approach the effective auxiliary basis $\{P\}$ is generated from the orbital basis itself, so no separate auxiliary basis set is needed.
While CD is computationally more demanding than RI, it is numerically robust and can be used for more precise results. Since the only controlling parameter is the threshold at which the expansion is stopped, the precision of the CD can in principle be chosen arbitrarily and systematically.
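A threshold-controlled (pivoted) Cholesky decomposition is simple to sketch. The toy code below works on any symmetric positive semi-definite matrix and stops once the largest remaining diagonal element falls below the threshold (my own illustration, not the production algorithm of any particular code):

```python
import numpy as np

def pivoted_cholesky(V, tol=1e-8):
    """Return L with V ~ L @ L.T, using as few columns as the threshold allows."""
    R = V.copy().astype(float)                 # residual matrix
    cols = []
    d = np.diag(R).copy()
    while d.max() > tol:
        p = int(np.argmax(d))                  # pivot: largest remaining diagonal element
        l = R[:, p] / np.sqrt(d[p])
        cols.append(l)
        R -= np.outer(l, l)                    # subtract the rank-1 contribution
        d = np.diag(R).copy()
    return np.column_stack(cols)

# Quick check on a random positive semi-definite matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
V = A @ A.T                                    # rank-4, 6x6, positive semi-definite
L = pivoted_cholesky(V)
print(L.shape, np.max(np.abs(V - L @ L.T)))    # (6, 4) and an error near the threshold
```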
• P. Merlot, T. Kjaergaard, T. Helgaker, R. Lindh, F. Aquilante, S. Reine, T. B. Pedersen, J. Comp. Chem. 2013, 34, 1486 - 1496
• O. Vahtras, J. Almloef. Chem. Phys. Letters 1993, 213, 514-518
• F. Weigend, M. Kattanek, R. Ahlrichs, J. Chem. Phys., 2009, 130, 164106-1
• N. H. F. Beebe, J. Linderberg, Int. J. Q. Chem., 1977, 12, 683-705
|
7dba99e281b67646 | Excited-state processes from plasma to bioimaging: Theory and applications
Excited-state processes. The two main relaxation channels of an electronically excited molecule are fluorescence and radiationless relaxation, a process in which the system relaxes to the ground state by dissipating electronic energy into heat. Other competing processes, such as transition to a triplet state via inter-system crossing (not shown), excited-state chemistry, and electron transfer, alter the chemical identity of the molecule.
What do plasma, solar panels, qubits, and glow-in-the-dark pigs have in common? The fundamental physics of these phenomena is governed by excited-state processes initiated by light. When a photon is absorbed by a molecule, it promotes an electron to a higher energy level, leading to a new electron distribution that often features an open-shell pattern. This event initiates a variety of processes: radiative and radiationless relaxation, photochemical transformations, electron ejection or attachment. The competition between these processes determines the fate of an electronically excited system—some systems emit light back, some effectively convert excess electronic energy into heat, some produce charge carriers, some change their chemical identity. From the quantum-mechanical point of view, these processes entail coupled electronic and nuclear dynamics. Understanding how these quantum processes unfold in systems with many degrees of freedom, usually coupled to the environment, is of great fundamental importance. Ultimately, we want to know how the chemical structure of the molecule and the environment affect branching ratios and timescales of various excited-state processes. From the practical point of view, the ability to control these processes is the key to the successful design of new photovoltaic materials, bioimaging probes, photodynamic therapies, and materials for high-energy applications (e.g., fusion reactors). Moreover, precise understanding of light-matter interactions and the ability to describe them quantitatively allows us to utilize radiation as a tool for interrogating properties of molecules and materials. Spectroscopy is indeed the most-common and most-powerful tool for deciphering molecular structure. The techniques vary from classic UV-VIS and photoelectron spectroscopies to novel non-linear approaches and high-energy X-ray attosecond pulses.
All these phenomena are governed by the same law: the Schrödinger equation. Tantalizingly simple, it is notoriously difficult to solve. Although quantum chemistry has been very successful in developing practical approaches for the electronic structure problem, challenges still abound. The research of our lab has been driven by fundamental challenges posed by excited-state processes. Specifically, we are pursuing the following directions.
Methods for open-shell and electronically excited species, and approaches for strong correlation. Open-shell character and electronic degeneracies result in multi-configurational wave functions that are not amenable to treatment by the standard single-reference hierarchy of methods. Our group is developing approaches based on a robust and powerful formalism: equation-of-motion coupled-cluster (EOM-CC). We develop new theoretical models for electron correlation, novel algorithms for solving many-body problems, and implement these ideas in practical and efficient computer codes. We are proud to be a part of the Q-Chem open-teamware software project. Quantum chemical methods developed in our group are included in the Q-Chem electronic structure package and are broadly used for calculations of excited states, spectroscopy modeling, magnetic and optical properties of molecular materials and biomolecules, and more. We are also involved in community-wide software development efforts through our partnership with MolSSI.
Relevant publications: EOM-CC methodology, Spin-flip developments
Extending many-body methods to new domains: Resonances and core-level states. Metastable states (such as highly excited autoionizing states, transient anions, and core-ionized states) belong to the continuum part of the spectrum and are thus notoriously difficult to describe theoretically. Our group is developing non-hermitian extensions of EOM-CC theory to treat electronic structure in the continuum.
Relevant publications: Resonances, X-ray
Connection between quantum chemistry and experiment: Spectroscopy modeling. To make the connection between theory and experiment, one needs to be able to go beyond energies and wave functions and model observable properties. We develop tools for computational spectroscopy, ranging from simple models for calculating vibrational progressions to computing total and differential cross-sections in photoelectron/photodetachment experiments by means of Dyson orbitals. Among recent developments are extensions of the theory to model spectroscopy in non-linear and high-energy regimes. We also develop theoretical tools for modeling dynamical processes, such as non-adiabatic relaxation, intersystem crossing, and photo-induced electron transfer.
Relevant publications: Molecular orbital concepts and observables, Spectroscopy and excited-state processes
Multi-scale methods for extended systems. Multi-scale methods, such as the QM/MM approach, enable rigorous quantum-mechanical treatment of a subsystem (e.g., a chromophore) embedded in an environment (protein, solvent, molecular solid). We are developing multi-scale approaches for modeling condensed-phase processes, such as spectroscopy in solution, photo-induced processes in photo-active proteins, and solar energy harvesting.
Relevant publications: QM/MM, EFP, and beyond; Condensed-phase simulations
Applications and collaborations. Parallel to method-development work, we are actively involved in numerous collaborative studies. Our application work can be roughly divided into the following domains:
1. Bioimaging: Characterizing excited-state properties in fluorescent proteins from the Green Fluorescent Protein family.
Relevant publications: GFP and beyond
2. Solar energy, photovoltaics, batteries, fuel cells.
Relevant publications: Sustainable energy
3. Electron transfer in biological systems.
Relevant publications: Charge transfer
4. Spectroscopy of all shades and flavors for bound and metastable states.
Relevant publications: Dyson, Spectra
5. Novel materials and quantum information science.
Relevant publications: QIS
6. Chemistry and spectroscopy of open-shell species.
Relevant publications: Orbital concepts for analysis and quantitative calculations, Spectroscopy modeling, SMMs |
d79d2f7aa95480b3 | QM 1
Before completing this post, I need to acknowledge that my goal in writing about modern physics was to create a milieu for more talking about Western Zen. However, as I’ve proceeded, the goal has somewhat changed. I want you, as a reader, to become, if you aren’t already, a physics buff, much in the way I became a history buff after finding history incredibly boring and hateful throughout high school and college. The apotheosis of my history disenchantment came at Stanford in a course taught by a highly regarded historian. The course was entitled “The High Middle Ages” and I actually took it as an elective thinking that it was likely to be fascinating. It was only gradually over the years that I realized that history at its best although based on factual evidence, consists of stories full of meaning, significance and human interest. Turning back to physics, I note that even after more than a hundred years of revolution, physics still suffers a hangover from 300 years of its classical period in which it was characterized by a supposedly passionless objectivity and a mundane view of reality. In fact, modern physics can be imagined as a scientific fantasy, a far-flung poetic construction from which equations can be deduced and the fantasy brought back to earth in experiments and in the devices of our age. When I use the word “fantasy” I do not mean to suggest any lack of rigorous or critical thinking in science. I do want to imply a new expansion of what science is about, a new awareness, hinting at a “reality” deeper than what we have ever imagined in the past. However, to me even more significant than a new reality is the fact that the Quantum Revolution showed that physics can never be considered absolute. The latest and greatest theories are always subject to a revolution which undermines the metaphysics underlying the theory. Who knows what the next revolution will bring? Judging from our understanding of the physics of our age, a new revolution will not change the feeling that we are living in a universe which is an unimaginable miracle.
In what follows I've included formulas and mathematics whose significance can easily be talked about without going into the gory details. The hope is that these will be helpful in clarifying the excitement of physics and the metaphysical ideas lying behind it. Of course, the condensed treatment here can be further explicated in the books I mention and in Wikipedia.
My last post, about the massive revolution in physics of the early 20th century, ended by describing the situation in early 1925 when it became abundantly clear in the words of Max Jammer (Jammer, p 196) that physics of the atom was “a lamentable hodgepodge of hypotheses, principles, theorems, and computational recipes rather than a logical consistent theory.” Metaphysically, physicists clung to classical ideas such as particles whose motion consisted of trajectories governed by differential equations and waves as material substances spread out in space and governed by partial differential equations. Clearly these ideas were logically inconsistent with experimental results, but the deep classical metaphysics, refined over 300 years could not be abandoned until there was a consistent theory which allowed something new and different.
Werner Heisenberg, born Dec 5, 1901, was 23 years old in the summer of 1925. He had been a brilliant student at Munich studying with Arnold Sommerfeld, had recently moved to Göttingen, a citadel of math and physics, and had made the acquaintance of Bohr in Copenhagen where he became totally enthralled with doing something about the quantum mess. He noted that the electron orbits of the current theory were purely theoretical constructs and could not be directly observed. Experiments could measure the wavelengths and intensity of the light atoms gave off, so following the Zeitgeist of the times as expounded by Mach and Einstein, Heisenberg decided to try to make a direct theory of atomic radiation. One of the ideas of the old quantum theory that Heisenberg used was Bohr's "Correspondence" principle which notes that as electron orbits become large along with their quantum numbers, quantum results should merge with the classical. Classical physics failed only when things became small enough that Planck's constant h became significant. Bohr had used this idea in obtaining his formula for the hydrogen atom's energy levels. In various "old quantum" results the Correspondence Principle was always used, but in different, creative ways for each situation. Heisenberg managed to incorporate it into his ultimate vector-matrix construction once and for all. Heisenberg's first paper in the Fall of 1925 was jumped on by him and many others and developed into a coherent theory. The new results eliminated many slight discrepancies between theory and experiment, but more important, showed great promise during the last half of 1925 of becoming an actual logical theory.
In January, 1926, Erwin Schrödinger published his first great paper on wave mechanics. Schrödinger, working from classical mechanics, but following de Broglie’s idea of “matter waves”, and using the Correspondence Principle, came up with a wave theory of particle motion, a partial differential equation which could be solved for many systems such as the hydrogen atom, and which soon duplicated Heisenberg’s new results. Within a couple of months Schrödinger closed down a developing controversy by showing that his and Heisenberg’s approaches, though based on seemingly radically opposed ideas, were, in fact, mathematically isomorphic. Meanwhile starting in early 1926, PAM Dirac introduced an abstract algebraic operator approach that went deeper than either Heisenberg or Schrödinger. A significant aspect of Dirac’s genius was his ability to cut through mathematical clutter to a simpler expression of things. I will dare here to be specific about what I’ll call THE fundamental quantum result, hoping that the simplicity of Dirac’s notation will enable those of you without a background in advanced undergraduate mathematics to get some of the feel and flavor of QM.
In ordinary algebra a new level of mathematical abstraction is reached by using letters such as x,y,z or a,b,c to stand for specific numbers, numbers such as 1,2,3 or 3.1416. Numbers, if you think about it, are already somewhat abstract entities. If one has two apples and one orange, one has 3 objects and the “3” doesn’t care that you’re mixing apples and oranges. With algebra, If I use x to stand for a number, the “x” doesn’t care that I don’t know the number it stands for. In Dirac’s abstract scheme what he calls c-numbers are simply symbols of the ordinary algebra that one studies in high school. Along with the c-numbers (classic numbers) Dirac introduces q-numbers (quantum numbers) which are algebraic symbols that behave somewhat differently than those of ordinary algebra. Two of the most important q-numbers are p and s, where p stands for the momentum of a moving particle, mv, mass times velocity in classical physics, and s stands for the position of the particle in space. (I’ve used s instead of the usual q for position to try avoid a confusion with the q of q-number.) Taken as q-numbers, p and s satisfy
ps – sp = h/(2πi)
which I’ll call the Fundamental Quantum Result in which h is Planck’s constant and i the square root of -1. Actually, Dirac, observing that in most formulas or equations involving h, it occurs as h/2π, defined what is now called h bar or h slash using the symbol ħ = h/2π for the “reduced” Planck constant. If one reads about QM elsewhere (perhaps in Wikipedia) one will see ħ almost universally used. Rather than the way I’ve written the FQR above, it will appear as something like
pq – qp = ħ/i
where I've restored the usual q for position. What this expression is saying is that in the new QM if one multiplies something first by position q and then by momentum p, the result is different from the multiplications done in the opposite order. We say these q-numbers are non-commutative: the order of multiplication matters. Boldface type is used because position and momentum are vectors and the equation actually applies to each of their 3 components. Furthermore, the FQR tells us the exact size of the non-commutativity. In usual human-sized physical units ħ is .00…001054… where there are 33 zeros before the 1054. If we can ignore the size of ħ and set it to zero, p and q then commute, can be considered c-numbers, and we're back to classical physics. Incidentally, Heisenberg, Born and Jordan obtained the FQR using p and q as infinite matrices and it can be derived also using Schrödinger's differential operators. It is interesting to note that by using his new abstract algebra, Dirac not only obtained the FQR but could calculate the energy levels of the hydrogen atom. Only later did physicists obtain that result using Heisenberg's matrices. Sometimes the deep abstract leads to surprisingly concrete results.
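If you like to tinker, the FQR is easy to see numerically. The sketch below (my own illustration, in Python, with ħ set to 1) builds truncated harmonic-oscillator matrices for position and momentum; the diagonal of qp − pq comes out as iħ (equivalently, pq − qp = ħ/i) everywhere except the last entry, an artifact of cutting the infinite matrices off at a finite size.

```python
import numpy as np

N = 8                                        # keep only the lowest 8 oscillator states
hbar = 1.0
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # truncated annihilation operator
q = np.sqrt(hbar / 2) * (a + a.T)            # position matrix
p = 1j * np.sqrt(hbar / 2) * (a.T - a)       # momentum matrix

comm = q @ p - p @ q                         # qp - pq
print(np.round(np.diag(comm).imag, 6))
# [ 1.  1.  1.  1.  1.  1.  1. -7.] : i*hbar on the diagonal except the last entry,
# which comes from chopping the infinite matrices down to 8x8.
```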
For most physicists in 1926, the big excitement was Schrödinger’s equation. Partial differential equations were a familiar tool, while matrices were at that time known mainly to mathematicians. The “old quantum theory” had made a few forays into one or another area leaving the fundamentals of atomic physics and chemistry pretty much in the dark. With Schrödinger’s equation, light was thrown everywhere. One could calculate how two hydrogen atoms were bound in the hydrogen molecule. Then using that binding as a model one could understand various bindings of different molecules. All of chemistry became open to theoretic treatment. The helium atom with its two electrons couldn’t be dealt with at all by the old quantum theory. Using various approximation methods, the new theory could understand in detail the helium atom and other multielectron atoms. Electrons in metals could be modeled with the Schrödinger’s equation, and soon the discovery of the neutron opened up the study of the atomic nucleus. The old quantum theory was helpless in dealing with particle scattering where there were no closed orbits. Such scattering was easily accommodated by the Schrödinger equation though the detailed calculations were far from trivial. Over the years quantum theory revealed more and more practical knowledge and most physicists concentrated on experiments and theoretic calculations that led to such knowledge with little concern about what the new theory meant in terms of physical reality.
However, back in the first few years after 1925 there was a great deal of concern about what the theory meant and the question of how it should be interpreted. For example, under Schrödinger's theory an electron was represented by a "cloud" of numbers which could travel through space or surround an atom's nucleus. These numbers, called the wave function and typically named ψ, were complex, of the form a + ib, where i is the square root of -1. By multiplying such a number by its conjugate a – ib, one gets a positive (strictly speaking, non-negative) number which can perhaps be physically interpreted. Schrödinger himself tried to interpret this "real" cloud as a negative electric charge density, a blob of negative charge. For a free electron, outside an atom, Schrödinger imagined that the electron wave could form what is called a "wave packet", a combination of different frequencies that would appear as a small moving blob which could be interpreted as a particle. This idea definitely did not fly. There were too many situations where the waves were spread out in space, before an electron suddenly made its appearance as a particle. The question of what ψ meant was resolved by Max Born (see Wikipedia), starting with a paper in June, 1926. Born interpreted the non-negative numbers ψ*ψ (ψ* being the complex conjugate of the ψ numbers) as a probability distribution for where the electron might appear under suitable physical circumstances. What these physical circumstances are and the physical process of the appearance are still not completely resolved. Later in this or another blog post I will go into this matter in some detail. In 1926 Born's idea made sense of experiment and resolved the wave-particle duality of the old quantum theory, but at the cost of destroying classical concepts of what a particle or wave really was. Let me try to explain.
A simple example of a classical probability distribution is that of tossing a coin and seeing if it lands heads or tails. The probability distribution in this case is the two numbers, ½ and ½, the first being the probability of heads, the second the probability of tails. The two probabilities add up to 1 which represents certainty, in probability theory. (Unlike the college students who are trying to decide whether to go drinking, go to the movies or to study, I ignore the possibility that the coin lands on its edge without falling over.) With the wave function product ψ*ψ, calculus gives us a way of adding up all the probabilities, and if they don’t add up to 1, we simply define a new ψ by dividing by the sum we obtained. (This is called “normalizing” the wave function.) Besides the complexity of the math, however, there is a profound difference between the coin and the electron. With the coin, classical mechanics tells us in theory, and perhaps in practice, precisely what the position and orientation of the coin is during every instant of its flight; and knowing about the surface the coin lands on, allows us to predict the result of the toss in advance. The classical analogy for the electron would be to imagine it is like a bb moving around inside the non-zero area of the wave function, ready to show up when conditions are propitious. With QM this analogy is false. There is no trajectory for the electron, there is no concept of it having a position, before it shows up. Actually, it is only fairly recently that the “bb in a tin can model” has been shown definitively to be false. I will discuss this matter later talking briefly about Bell’s theorem and “hidden” variable ideas. However, whether or not an electron’s position exists prior to its materialization, it was simply the concept of probability that Einstein and Schrödinger, among others, found unacceptable. As Einstein famously put it, “I can’t believe God plays dice with the universe.”
Max Born, who introduced probability into fundamental physics, was a distinguished physics professor in Göttingen and Heisenberg’s mentor after the latter first came to Göttingen from Munich in 1922. Heisenberg got the breakthrough for his theory while escaping from hay fever in the spring of 1925 walking the beaches of the bleak island of Helgoland in the North Sea off Germany. Returning to Göttingen, Heisenberg showed his work to Born who recognized the calculations as being matrix multiplication and who saw to it that Heisenberg’s first paper was immediately published. Born then recruited Pascual Jordan from the math department at Göttingen and the three wrote a famous follow-up paper, Zur Quantenmechanik II, Nov, 1925, which gave a complete treatment of the new theory from a matrix mechanics point of view. Thus, Born was well posed to come up with his idea of the nature of the wave function.
Quantum Mechanics came into being during the amazingly short interval between mid-1925 and the end of 1926. As far as the theory went, only "mopping up" operations were left. As far as the applications were concerned there was a plethora of "low hanging fruit" that could be gathered over the years with Schrödinger's equation and Born's interpretation. However, as 1927 dawned, Heisenberg and many others were concerned with what the theory meant, with fears that it was so revolutionary that it might render ambiguous the meaning of all the fundamental quantities on which both the new QM and old classical physics depended. In 1925 Heisenberg began his work on what became the matrix mechanics because he was skeptical about the existence of Bohr orbits in atoms, but his skepticism did not include the very concept of "space" itself. As QM developed, however, Heisenberg realized that it depended on classical variables such as position and momentum which appeared not only in the pq commutation relation but as basic variables of the Schrödinger equation. Had the meaning of "position" itself changed? Heisenberg realized that earlier, with Einstein's Special Relativity, the meaning of both position and time had indeed changed. (Newton assumed that coordinates in space and the value of time were absolutes, forming an invariable lattice in space and an absolute time which marched at an unvarying pace. Einstein's theory was called Relativity because space and time were no longer absolutes. Space and time lost their "ideal" nature and became simply what one measured in carefully done experiments.) (Curiously enough, though Einstein showed that results of measuring space and time depended on the relative motion of different observers, these quantities changed in such an odd way that measurements of the speed c of light in vacuum came out precisely the same for all observers. There was a new absolute. A simple exposition of special relativity is N. David Mermin's Space and Time in Special Relativity.)
The result of Heisenberg's concern and the thinking about it is called the "Uncertainty Principle". The statement of the principle is the inequality ΔqΔp ≥ ħ/2, often quoted loosely as ΔqΔp ≈ ħ. The variables q and p are the same q and p of the Fundamental Quantum Relation and, indeed, it is not difficult to derive the uncertainty principle from the FQR. The symbol delta, Δ, when placed in front of a variable means a difference, that is an interval or range of the variable. Experimentally, a measurement of a variable quantity like position q is never exact. The amount of the uncertainty is Δq. The uncertainty relation above thus says that the uncertainty of a particle's position times the uncertainty of the same particle's momentum is at least of order ħ. In QM what is different from an ordinary error of measurement is that the uncertainty is intrinsic to QM itself. In a way, this result is not all that surprising. We've seen that the wave function ψ for a particle is a cloud of numbers. Similarly, a transformed wave function for the same particle's momentum is a similar cloud of numbers. The Δ's are simply a measure of the size of these two clouds and the principle says that as one becomes smaller, the other gets larger in such a way that their product remains of order h bar, whose numerical value I've given above.
In fact, back in 1958 when I was in Eikenberry’s QM course and we derived the uncertainty relation from the FQR, I wondered what the big deal was. I was aware that the uncertainty principle was considered rather earthshaking but didn’t see why it should be. What I missed is what Heisenberg’s paper really did. The equation I’ve written above is pure theory. Heisenberg considered the question, “What if we try to do experiments that actually measure the position and momentum. How does this theory work? What is the physics? Could experiments actually disprove the theory?” Among other experimental set-ups Heisenberg imagined a microscope that used electromagnetic rays of increasingly short wavelengths. It was well known classically by the mid-nineteenth century that the resolution of a microscope depends on the wavelength of the light it uses. Light is an electromagnetic (em) wave so one can imagine em radiation of such a short wavelength that it could view with a microscope a particle, regardless of how small, reducing Δq to as small a value as one wished. However, by 1927 it was also well known because of the Compton effect that I talked about in the last post, that such em radiation, called x-rays or gamma rays, consisted of high energy photons which would collide with the electron giving it a recoil momentum whose uncertainty, Δp, turns out to satisfy ΔqΔp = ħ. Heisenberg thus considered known physical processes which failed to overturn the theory. The sort of reasoning Heisenberg used is called a “thought” experiment because he didn’t actually try to construct an apparatus or carry out a “real” experiment. Before dismissing thought experiments as being hopelessly hypothetical, one must realize that any real experiment in physics or in any science for that matter, begins as a thought experiment. One imagines the experiment and then figures out how to build an apparatus (if appropriate) and collect data. In fact, as a science progresses, many experiments formerly expressed only in thought, turn real as the state of the art improves.
Although the uncertainty principle is earthshaking enough that it helped confirm the skepticism of two of the main architects of QM, namely, Einstein and Schrödinger, one should note that, in practice, because of the small size of ħ, the garden variety uncertainties which arise from the "apparatus" measuring position or momentum are much larger than the intrinsic quantum uncertainties. Furthermore, the principle does not apply to c-numbers such as e, the fundamental electron or proton charge, c, the speed of light in vacuum, and h, Planck's constant. There is an interesting story here about a recent (Fall, 2018) redefinition of physical units which one can read about online. Perhaps I'll have more to say about this subject in a later post. For now, I'll just note that starting on May 20, 2019, Planck's constant will be (or has been) defined as having an exact value of 6.62607015×10⁻³⁴ Joule seconds. There is zero uncertainty in this new definition which may be used to define and measure the mass of the kilogram to higher accuracy and precision than possible in the past using the old standard, a platinum-iridium cylinder, kept closely guarded near Paris. In fact, there is nothing muddy or imprecise about the value of many quantities whose measurement intimately involves QM.
During the years after 1925 there was at least one more area which in QM was puzzling to say the least; namely, what has been called “the collapse of the wave function.” Involved in the intense discussions over this phenomenon and how to deal with it was another genius I’ve scarcely mentioned so far; namely Wolfgang Pauli. Pauli, a year older than Heisenberg, was a year ahead of him in Munich studying under Sommerfeld, then moved to Göttingen, leaving just before Heisenberg arrived. Pauli was responsible for the Pauli Exclusion Principle based on the concept of particle spin which he also explicated. (see Wikipedia) He was in the thick of things during the 1925 – 1927 time period. Pauli ended up as a professor in Zurich, but spent time in Copenhagen with Bohr and Heisenberg (and many others) formulating what became known as the Copenhagen interpretation of QM. Pauli was a bon vivant and had a witty sarcastic tongue, accusing Heisenberg at one point of “treason” for an idea that he (Pauli) disliked. In another anecdote Pauli was at a physics meeting during the reading of a muddy paper by another physicist. He stormed to his feet and loudly said, “This paper is outrageous. It is not even wrong!” Whether the meeting occurred at a late enough date for Pauli to have read Popper, he obviously understood that being wrong could be productive, while being meaningless could not.
Over the next few years after 1927 Bohr, Heisenberg, and Pauli explicated what came to be called “the Copenhagen interpretation of Quantum Mechanics”. It is well worth reading the superb article in Wikipedia about “The Copenhagen Interpretation.” One point the article makes is that there is no definitive statement of this interpretation. Bohr, Heisenberg, and Pauli each had slightly different ideas about exactly what the interpretation was or how it worked. However, in my opinion, things are clear enough in practice. The problem QM seems to have has been called the “collapse of the wave function.” It is most clearly seen in a double slit interference experiment with electrons or other quantum particles such as photons or even entire atoms. The experiment consists of a plate with two slits, closely enough spaced that the wave function of an approaching particle covers both slits. The spacing is also close enough that the wavelength of the particle as determined by its energy or momentum, is such that the waves passing through the slit will visibly interfere on the far side of the slit. This interference is in the form of a pattern consisting of stripes on a screen or photographic plate. These stripes show up, zebra like, on a screen or as dark, light areas on a developed photographic plate. On a photographic plate there is a black dot where a particle has shown up. The striped pattern consists of all the dots made by the individual particles when a large number of particles have passed through the apparatus. What has happened is that the wave function has “collapsed” from an area encompassing all of the stripes, to a tiny area of a single dot. One might ask at this point, “So what?” After all, for the idea of a probability distribution to have any meaning, the event for which there is a probability distribution has to actually occur. The wave function must “collapse” or the probability interpretation itself is meaningless. The problem is that QM has no theory whatever for the collapse.
One can easily try to make a quantum theory of what happens in the collapse because QM can deal with multi-particle systems such as molecules. One obtains a many particle version of QM simply by adding the coordinates of the new particles, which are to be considered, to a multi-particle version of the Schrödinger equation. In particular, one can add to the description of a particle which approaches a photographic plate, all the molecules in the first few relevant molecular layers of the plate. When one does this however, one does not get a collapse. Instead the new multi-particle wave function simply includes the molecules of the plate which are as spread out as much as the original wave function of the approaching particle. In fact, the structure of QM guarantees that as one adds new particles, these new particles themselves continue to make an increasingly spread out multi-particle wave function. This result was shown in great detail in 1929 by John von Neumann. However, the idea of von Neumann’s result was already generally realized and accepted during the years of the late 1920’s when our three heroes and many others were grappling with finding a mechanism to explain the experimental collapse. Bohr’s version of the interpretation is simplicity itself. Bohr posits two separate realms, a realm of classical physics governing large scale phenomena, and a realm of quantum physics. In a double slit experiment the photographic plate is classical; the approaching particle is quantum. When the quantum encounters the classical, the collapse occurs.
The Copenhagen interpretation explains the results of a double slit experiment and many others, and is sufficient for the practical development of atomic, molecular, solid state, nuclear and particle physics, which has occurred since the late 1920’s. However, there has been an enormous history of objections, refinements, rejections and alternate interpretations of the Copenhagen interpretation as one might well imagine. My own first reaction could be expressed as the statement, “I thought that ‘magic’ had been banned from science back in the 17th century. Now it seems to have crept back in.” (At present I take a less intemperate view.) However, one can make many obvious objections to the Copenhagen interpretation as I’ve baldly stated it above. Where, exactly, does the quantum realm become the classic realm? Is this division sharp or is there an interval of increasing complexity that slowly changes from quantum to classical? Surely, QM, like the theory of relativity, actually applies to the classical realm. Or does it?
During the 1930’s Schrödinger used the difficulties with the Copenhagen interpretation to make up the now famous thought experiment called “Schrödinger’s Cat.” Back in the early 1970’s when I became interested in the puzzle of “collapse” and first heard the phrase “Schrödinger’s Cat”, it was far from famous so, curious, I looked it up and read the original short article, puzzling out the German. In his thought experiment Schrödinger uses the theory of alpha decay. An alpha particle confined in a radioactive nucleus is forever trapped according to classical physics. QM allows the escape because the alpha particle’s wave function can actually penetrate the barrier which classically keeps it confined. Schrödinger imagines a cat imprisoned in a cage containing an infernal apparatus (hollenmaschine) which will kill the cat if triggered by an alpha decay. Applying a multi-particle Schrödinger’s equation to the alpha’s creeping wave function as it encounters the trigger of the “maschine”, its internals, and the cat, the multi-particle wave function then contains a “superposition” (i.e. a linear combination) of a dead and a live cat. Schrödinger makes no further comment leaving it to the reader to realize how ridiculous this all is. Actually, it is even worse. According to QM theory, when a person looks in the cage, the superposition spreads to the person leaving two versions, one looking at a dead cat and one looking at a live cat. But a person is connected to an environment which also splits and keeps splitting until the entire universe is involved.
What I’ve presented here is an actual alternative to the Copenhagen Interpretation called “the Many-worlds interpretation”. To quote from Wikipedia “The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual ‘world’ (or ‘universe’).” The many-worlds interpretation arose in 1957 in the Princeton University Ph.D. dissertation of Hugh Everett working under the direction of the late John Archibald Wheeler, who I mentioned in the last post. Although I am a tremendous admirer of Wheeler, I am skeptical of the many-worlds interpretation. It seems unnecessarily complicated, especially in light of ideas that have developed since I noticed them in 1972. There is no experimental evidence for the interpretation. Such evidence might involve interference effects between the two versions of the universe as the splitting occurs. Finally, if I exist in a superposition, how come I’m only conscious of the one side? Bringing in “consciousness” however, leads to all kinds of muddy nonsense about consciousness effects in wave function splitting or collapse. I’m all for consciousness studies and possibly such will be relevant for physics after another revolution in neurology or physics. At present we can understand quantum mechanics without explicitly bringing in consciousness.
In the next post I’ll go into what I noticed in 1971-72 and how this idea subsequently became developed in the greater physics community. The next post will necessarily be somewhat more mathematically specific than so far, possibly including a few gory details. I hope that the math won’t obscure the story. In subsequent posts I’ll revert to talking about physics theory without actually doing any math.
|
c20d25a4561e0d23 | Monday, April 22, 2013
Listen to Spacetime
Quantum gravity researcher at work.
Achim calls it “a quantum version of yard sticks.”
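The surviving text only hints at the idea of spectral "yard sticks", so here is a purely illustrative numerical sketch of the underlying point: the low-lying spectrum of the Laplacian on a space encodes its size. The setup assumes a 1D ring; both the circumference and the number of grid points below are arbitrary choices, not anything from the post.

```python
import numpy as np

# Illustrative only: "hear" the circumference of a 1D ring from its Laplacian spectrum.
# Both the circumference L and the number of grid points n are assumed values.
L, n = 2 * np.pi * 3.0, 400
dx = L / n

# Discrete Laplacian with periodic boundary conditions (second-difference stencil).
lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
       + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
eigvals = np.linalg.eigvalsh(-lap)          # spectrum of -Laplacian, ascending

# The continuum spectrum of -d^2/dx^2 on a ring is (2*pi*k/L)^2, so the lowest
# nonzero eigenvalue already reveals the circumference.
k1 = np.sqrt(eigvals[1])
print(f"recovered circumference ~ {2 * np.pi / k1:.3f}   (true value {L:.3f})")
```

The harder question discussed in the comments below is which geometric information survives when only such spectra, and not the full operator data, are available.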
1. Without having looked into Achim's work in detail, I wonder: how does it relate to Connes' reconstruction theorem, which proves that a Riemannian manifold can be recovered from its underlying spectral triple?
2. Yes, I like this approach, Bee.
The conversion process still has to have specifics, and the theory involved in its construction would have to have some association for the correlation to work.
For example:
This conversion process is very important.
Another example would be:
HiggsJetEnergyProfileCrotale and HiggsJetEnergyProfilePiano use only the energy of the cells in the jet to modulate the pitch, volume, duration and spatial position of each note. The sounds being modulated in these examples are crotales (baby cymbals) and a piano string struck with a soft beater, then shifted up in pitch by 1000 Hz and 'dovetailed'.
In HiggsJetRythSig we are simply travelling steadily along the axis of the jet of particles and hearing a ping of crotales for each point at which there is a significant energy deposit somewhere in the jet.
HiggsJetEnergyGate uses just the energy deposited in the jet's cells. At each time point (defined by the distance from the point of collision) the energy is used to define the number of channels used from the piano sound file. So high energy can be heard as thick, burbly sound whilst low energy has a thinner sound.
See: Listen to the decay of a god particle
It is exciting for me to see your demonstration in concert with the approach of quantum gravity.
3. You might like this link below as well.
We think of space as a silent place. But physicist Janna Levin says the universe has a soundtrack -- a sonic composition that records some of the most dramatic events in outer space. (Black holes, for instance, bang on spacetime like a drum.) An accessible and mind-expanding soundwalk through the universe. See: Janna Levin: The sound the universe makes
4. @saibod
Indeed the paper appears to describe techniques dual to those used by Connes in his noncommutative geometrical studies of the standard model and gravity. The two approaches are related through the fact that the Dirac operator can be thought of as the square-root of the Laplacian. Kempf prefers to use the Laplacian while Connes uses the Dirac operator in his spectral triples (A,H,D) to encode the spectral geometry. In Connes spectral triple, A is the operator algebra of functions over the given manifold, H is the Hilbert space on which it acts on and D is the Dirac operator whose spectrum is used to recover the structure of the manifold, much like Kempf uses the spectrum of the Laplacian to recover the "shape" of the manifold.
In Connes approach to the standard model and gravity, to recover the gauge group of the standard model he considers a product space M x F, where F is a finite geometry, related to the "sprinkling of points" mentioned in Kempf's paper that has a matrix interpretation. Specifically, Connes considers the algebra of functions A over the 6-point space in his model, where A=C+H+M_3(C). Here, C is the set of complex numbers, H the algebra of quaternions (transforming in M_2(C)) and M_3(C) the set of 3x3 matrices over the complex numbers, acting on the one, two and 3-point spaces respectively. Classically, the manifolds which these encode are the unit circle, CP^1 and CP^2, each discretized by the eigenvalues of the matrix operators in the algebra of functions over the finite geometry F.
In string theory, such a finite geometry also arises in the guise of internal worldvolume degrees of freedom. In this framework, gauge groups can be seen as the internal degree of freedom at every point on the world-volume of N-coincident branes. The gauge symmetry is the freedom that a fundamental string has in deciding which of the N identical branes it can end on. In Connes' model, there would be a total of six branes encoded by the spectral triple of his finite geometry F, giving the U(1), SU(2) and SU(3) symmetry groups of the standard model.
5. "The correlations of the quantum vacuum are encoded in the Greensfunction which is a function of pairs of points." Green’s function opens Newton (e.g., terrain gravitometer sweeps to reconstruct buried dense ore or low density petroleum). To my knowledge, Green functions are not validated for general relativity. Green functions are all coordinate squares, removing chirality (versus Ashtekar). Green functions are defective if they uncreate fermionic matter parity violations.
Quantum gravitation and SUSY will founder until somebody discovers why persuasive maths do not empirically apply. Euclid plus perturbation is terrestrial cartography, and still fails to navigate the high seas, because rigorously derived Euclid is wrong in context. Green functions for linearized theory are established. Green functions describe complete non-linear theory to any required accuracy. An odd polynomial to any number of terms is not a sine wave. It fails at boundaries.
6. Hi Saibod,
Achim submits the following: "Connes' spectral triple has much more information than just the spectrum of the Dirac operator. Namely, to know the spectral triple is also to know how the Dirac operator acts on concrete spinor fields. Having this much more information makes it way easier to reconstruct a manifold. The difficult part is to show under which conditions the spectrum (or spectra) *alone* suffice(s) to determine a manifold."
7. Greensfunction ---> Green function.
The first form is not correct and never was. The second is the preferred form now, e.g. Schrödinger equation, Maxwell equations, not Schrödinger's equation, not Maxwell's equation (though the possessive forms are grammatically correct).
I remember that Max Tegmark commented in a talk at the 1994 Texas Symposium in Munich that he had looked up the official recommendations and "Green function" is correct, though he found that rather funny.
8. Hi Phillip,
I also find that rather funny, but I'll keep it in mind. Though I'm afraid that if I wrote "Green function" nobody would know what I mean, which somewhat defeats the purpose of language. It's like when, after some years of complaining about the way the Swedes write dates that nobody knows how to read, I found out that they're using the "international standard" for dates... Best,
9. I'm pretty sure that there is no-one who knows what a Greensfunction is but doesn't know what a Green function is. The fact that it is capitalized hints that it is a proper name.
12. Awesome post Sabine! Should we call you The Quantum Gravity Doctor? ;)
13. Nice picture Sabine...
I guess finally your mother's dream came true. You are a 'real' doctor now with a stethoscope :-)
15. Oops, sorry for my badly written comments. Anyway, "Green function" or "Green's function" are the terms that I know. Never heard of Greensfunctions...
16. Hi Christine,
He's only considering manifolds without boundary. I've been a little brief on the details for the sake of readability, but that arguably comes at the expense of clarity, sorry about that. I can recommend Achim's paper though, I found it very well written and understandable. It's also not very long. Best,
17. Hi Giotis,
There are some Dr med's in our family. I don't think it ever was my mother's dream I join them. My younger brother and I, we'd sometimes sneak into the doctor's office on weekends and play with the equipment. I've always been more interested in basic research though. And my younger brother, he's a mechanical engineer now. Best,
18. Juan,
Yes, you can call me the Quantum Gravity Doctor. The patient is noncompliant :p Best,
20. Regarding Achim's remark that "to know the spectral triple is also to know how the Dirac operator acts on concrete spinor fields":
This sounds like an interesting mathematical question but in terms of physics one needs the spinors anyway to have fermions and to be able to reconstruct the Standard Model.
The Laplacian alone will not do. One just gets the bosonic part of the spectral action.
Moreover, to do serious physics at least an almost commutative spectral triple is required anyway, rendering the overall manifold non-commutative.
Also, if one considers just the Laplacian, I don't think one gets the gauge fields, which are part of the bosonic action.
Treating spacetime in isolation I regard as a major step backwards and completely against the very spirit of unification (of spacetime and matter), in particular given the sheer success of the noncommutative standard model.
Well, that's all based on my limited understanding of the subject, so please correct me if I am wrong.
21. You go, girl and get that Quantum Gravity!
22. Quantum gravity is the theory that is supposed to bridge the dimensional scales of quantum mechanics and general relativity, i.e. the human observer scales. I can see no reason why common chemistry and biology couldn't fall within the subject of quantum gravity as well.
23. Last line from Alan Lightman's review of Smolin's new book in the NY Times Book Review section of the Sunday paper.
"For if we must appeal to the existence of other universes - unknown and unknowable - to explain out universe, then science has progressed into a cul-de-sac with no scientific escape."
Science 1 ; pseudo-science 0
|
ff1cc77d2a69705f | I have been reading about how lasers function: A photon is used for stimulated emission of electrons from the metastable state to a lower energy state.
What I don't understand is: How can "giving" energy (in the form of photons) to electrons stimulate them to come to a lower energy state? After stimulated emission, the old photon exists along with the new one and with the same energy as earlier, so how did it actually stimulate the electrons to fall?
I know that the same question has been asked before, but the answer was overly simplified. I am looking for a detailed answer.
• $\begingroup$ Imagine you're happily jumping on a trampoline and suddenly someone pushes you from your back when you're upside. Would you fall on the trampoline again or would you be ejected in the same direction? $\endgroup$
– FGSUZ
Sep 22 '18 at 20:29
• 2
$\begingroup$ The answer is quantum mechanics. We can't explain stimulated emission without it, because it's a purely quantum mechanical phenomenon. Do you want a hand-wavy QM explanation or a detailed technical QM one with equations? $\endgroup$ Sep 22 '18 at 23:00
• 2
$\begingroup$ I don't know if this is an explanation that's too reductive and gets something wrong, but here it goes: When an atom is in the presence of an EM field, the Hamiltonian shifts, establishing a new eigenbasis as well. The previous state is a superposition of the new shifted states and this superposition likely has a dipole moment that oscillates in time. As a result of this acquired oscillating dipole moment, the ground state could transition to a new excited state or an excited state can transition to the ground state (or another excited state) depending on the exact EM field. $\endgroup$ Sep 27 '18 at 4:56
• $\begingroup$ When there is coupling, there is not just "giving", there is also "taking". If I write the Hamiltonian of a two-level system as a hermitian matrix, there are non-zero off-diagonal elements in both (0,1) and (1,0) entries, in the presence of non-zero coupling. I mean, it has to be by definition of hermiticity. $\endgroup$
– wcc
Dec 7 '18 at 4:03
• 1
$\begingroup$ Albert Einstein, who "invented" stimulated emission writes in Zur Quantentheorie der Strahlung, Phys. Z. 18 121-128 (1917) in §2b) that a resonator [the bound electron] in an oscillating electromagnetic field gains or loses energy, depending on the relative phase between the two. You don't need a quantized field or oscillator for this. It's like sitting on a swing: If you increase or decrease your energy depends on the phase between the swing movement and your body movements. $\endgroup$
– A. P.
Nov 28 '20 at 19:49
Do you want a technical quantum mechanics explanation, or a hand-wavy non-technical explanation?
The hand-wavy non-technical explanation: photons are bosons, and bosons like being together (unlike fermions, who are loners).
So suppose you're sitting in your apartment and wondering whether you want to go see the new Star Wars movie. Possibly you'll end up gathering enough energy to go to the theater and buy a ticket, but probably not. Then five of your friends knock on your door and say "let's go see the new Star Wars movie". You throw on your jacket and go with them.
That's more or less how stimulated emission works, but of course to be precise, you need to write down the quantum mechanical equations that tell you just how much photons like being with their friends, and see that it really works. And for photons, if thousands of their friends come by, they're much more likely to go than if just five of them do (while in real life, if thousands of your friends came by, the smart thing to do would be to decide that the theater was going to sell out and stay home).
There's an easy way to show that what you might naively expect (that there is no difference between the rates of spontaneous emission and stimulated emission) doesn't work. Recall that quantum mechanics is reversible. What this means is that absorption and emission should work the same. Now, suppose you have $n$ photons that illuminate an atom in the ground state. You would naively expect that the rate of absorption would be $n$ times the rate with one photon. And in fact, that's correct.
Now, suppose you have an atom in the excited state, and you shine $n-1$ photons on it. The process of its decaying and going to the ground state, with $n$ photons leaving it is exactly the reverse as the atom being excited when it's illuminated by $n$ photons, with $n-1$ photons leaving it. So the rate of stimulated emission should be $n$ times the rate of spontaneous emission. (This is complicated a little bit because there's only one mode an atom can decay into in stimulated emission, while there may be more than one for spontaneous emission.)
This explanation probably still isn't entirely satisfactory, because it doesn't justify the fact that $n$ photons will excite an atom in the ground state at $n$ times the rate that one photon will. The essential reason this happens is that the creation operator $a^\dagger$ satisfies $a^{\dagger } | n\rangle ={\sqrt {n+1}} \,| n+1\rangle$. Julian Ingham's answer explains this in more detail.
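As a quick numerical check of that last relation, here is a minimal sketch assuming a truncated Fock space (the cutoff `dim` is an arbitrary choice): it verifies $a^{\dagger}|n\rangle=\sqrt{n+1}\,|n+1\rangle$ and hence the $n+1$ scaling of the emission rate.

```python
import numpy as np

dim = 12                                          # keep Fock states |0> ... |11> (arbitrary cutoff)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)      # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.T                                        # creation operator (real matrix, so transpose suffices)

def fock(n):
    """Column vector for the Fock state |n>."""
    v = np.zeros(dim)
    v[n] = 1.0
    return v

# <n+1| a_dag |n> = sqrt(n+1), so the squared matrix element, and with it the
# emission rate, grows as n+1 when n photons of that mode are already present.
for n in range(5):
    amp = fock(n + 1) @ adag @ fock(n)
    print(f"n = {n}:  matrix element = {amp:.4f},  rate factor = {amp**2:.1f}")
```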
• $\begingroup$ Great analogy but please give a technical QM explanation.. $\endgroup$ Sep 23 '18 at 5:47
• 5
$\begingroup$ @SidharthShambu The QM explanation is the boson statistics discovered by Bose. This statistics states that the probability of photons to group together is higher than otherwise. So the probability of emission is higher when a stimulating photon passes by. Peter's answer is correct with one caveat. You cannot say (as you do in your question) that the stimulating photon is unaffected, because after the emission you cannot know which of the photons was stimulating and which was emitted. Either one is a superposition of both possibilities: en.wikipedia.org/wiki/Bose-Einstein_statistics $\endgroup$
– safesphere
Sep 23 '18 at 8:10
• 1
$\begingroup$ This is a great answer, and @safesphere's addition is great. $\endgroup$ Sep 24 '18 at 1:52
• $\begingroup$ Am I missing the point? The last three paragraphs address spontaneous emission, which does not address the question as far as I can tell $\endgroup$
– garyp
Dec 6 '18 at 11:52
• 1
$\begingroup$ @garyp: the third-to-last paragraph discusses absorption. The second-to-last and last paragraph discuss stimulated emission ("suppose you have an atom in the excited state, and you shine $n-1$ photons on it ..."), $\endgroup$ Dec 6 '18 at 12:22
Edit: I've edited this answer to add more intuitive explanations, see the end. The electrons don't receive energy from the photons; it's just that the initial presence of $N$ photons makes it more likely that the electron emits another photon. "Dipoles" and "population inversion" are actually irrelevant.
Peter Shor's answer is a nice intuitive sketch, but here's the mathematical presentation he/OP requested.
Quick run-through of quantum electrodynamics, then it will be clear: recall that the interaction between charged fields and the photon is given by \begin{equation} \mathscr{V}_{int}=e\int (\hat{j}\hat{A}) d^3x \end{equation} We can decompose the free electromagnetic field into a sum of photon creation/annihilation operators \begin{equation} \hat{A}=\sum_{n}\left(\hat{c}_nA_n(x)+\hat{c}^\dagger_nA^*_n(x)\right) \end{equation} As we know from the harmonic oscillator, each operator has matrix elements only for an increase or decrease of the corresponding occupation number $N_n$ (the number of photons of type $n$; by type we mean of a given frequency/wavevector, since we count the number of photons of different frequencies separately) which differ by one. That is, only processes of the emission or absorption of a single photon occur in the first approximation of perturbation theory. (Though again, in analogy with the harmonic oscillator, we know that at the $m$th order in perturbation theory, $m$-photon processes are possible, i.e. matrix elements connecting $N_n$ and $N_n\pm m$.) Quantitatively, the matrix elements of the operators $c_n$ are given by \begin{equation} \langle N_n|c^\dagger_n| N_n-1\rangle=\langle N_n-1|c_n|N_n\rangle=\sqrt{N_n} \end{equation} (The convention is that $c_n$ are the usual "$a_n$", but with a factor of $\sqrt{2\pi/\omega}$ absorbed into them.)
Investigating the probability of an absorption/emission process requires perturbation theory. Let us assume for simplicity that the initial and final states of the emitting/absorbing system belong to the discrete spectrum. Then the probability rate is given by the Fermi golden rule \begin{equation} dw=2\pi |\mathscr{V}_{fi} |^2 \delta\left(E_i-E_f-\omega\right) d\nu \end{equation} We have adopted the normalisation of the photon wavefunction so that there is one photon per volume V, and the photon wavefunction is normalised by integrating over $d\nu$. The bottom line here is that the probability rate is proportional to the square of the matrix element of $\mathscr{V}$ between the initial and final state.
Okay so here's the punchline: if the initial state of the field already has a non zero number $N_n$ of the photons in question, the matrix element for the transition is multiplied by \begin{align} \langle N_n+1|c^\dagger_n|N_n\rangle=\sqrt{N_n+1} \end{align} ie the transition probability, which involves the square of the matrix element, gets multiplied by $N_n+1$. The 1 in this factor corresponds to the $\textbf{spontaneous emission}$ which occurs even if $N_n=0$. The term $N_n$ represents the $\textbf{stimulated or induced emission}$: the presence of photons in the initial state of the field stimulates the further emission of photons of the same kind. The hand waving explanation is exactly that photons are bosons, see Peter Shor's answer. This is also the same "$N+1$" phenomenon cited in a newer answer, which involves the example of a molecular toy Hamiltonian.
Incidentally, we can obtain the Einstein relations from here with minimal effort: the matrix element for the opposite change of state will be proportional to \begin{align} \langle N_n-1|c_n| N_n\rangle=\sqrt{N_n} \end{align} and so the emission and absorption probabilities for a given pair of states are related by \begin{equation} w_e/w_a=(N_n+1)/N_n \end{equation}
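As a quick consistency check (a sketch in units with $\hbar=k_B=1$, using the standard detailed-balance argument rather than anything stated in the answer above): if a two-level emitter is in thermal equilibrium at temperature $T$, balancing absorption against emission with these rates gives \begin{equation} w_a N_{\text{ground}} = w_e N_{\text{excited}}, \qquad \frac{N_{\text{excited}}}{N_{\text{ground}}}=e^{-\omega/T} \;\;\Longrightarrow\;\; \frac{N_n}{N_n+1}=e^{-\omega/T} \;\;\Longrightarrow\;\; N_n=\frac{1}{e^{\omega/T}-1}, \end{equation} which is the Planck occupation number; this is essentially Einstein's 1917 argument run in reverse.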
$\textbf{Edit:}$ $\textit{Some further questions elaborated.}$
As was stated in Peter Shor's answer, one way of thinking about this is that the factor of $(N_n+1)$ appearing in the probability rate is due to the fact that photons are bosons, and "like to group together" to go see Star Wars movies. Photons of a certain frequency in the initial state encourage there to be another photon of such a frequency in the final state, and the electron obliges by emitting this photon. There's an important point here too, which is that the photons of type $n$, i.e. frequency $\omega_n$, in the initial state encourage there to be more photons of the same type $n$, frequency $\omega_n$, in the final state. So the photon the electron spits out by stimulated emission is $\textit{in phase}$ with the original photons, i.e. of the same type. All this is simply a consequence of the algebra of bosonic creation/annihilation operators. It's not the case that energy has been "given to" the electrons in any way: clearly, it is the electron that has given up energy to the photon bunch, because it has emitted a photon. What happened is that the probability rate of the electron doing that has been increased.
Steven Sagona asks: $\textit{"why do atoms have such a Hamiltonian"?}$ The $j\cdot A$ Hamiltonian is the Hamiltonian of electromagnetism. All interactions between photons and matter are described by this Hamiltonian, as this is the only Hamiltonian allowed by gauge invariance and Lorentz invariance.
Another question is asking for the role of dipole moments and population inversion. Neither of these are actually necessary to understand the notion of stimulated emission, which is simply our factor of $N_n$, as explained. For completeness we'll give a quick explanation of the role of those terms in laser physics.
The way a laser works is essentially this: you put energy into the system - "pumping" - and thereby drive the atoms into excited states. Population inversion is simply the situation when you have more atoms in excited states than in the ground state. Then you expose your excited atoms to photons, and the electrons are stimulated to drop back down to the ground state and spit out photons that are in phase ("of the same type") with the incident photons, for the reasons explained above. Then those stimulated-emission photons fly around bumping into more electrons, and cause them to undergo stimulated emission, and so on in a snowballing effect of more and more in-phase photons, until you gradually run out of your excited electrons. This gives you a whole bunch of coherent photons. Again, no dipoles necessary here.
If we wanted to calculate the emission rates more exactly, we'd have to calculate $\mathscr{V}_{fi}$. When the wavelength of the photon is large compared to the size of the atom, the dominant contribution to this matrix element is from dipole radiation. There are selection rules that determine whether an initial and final state can be connected by a dipole transition, https://en.wikipedia.org/wiki/Selection_rule. We can calculate $\mathscr{V}_{fi}$ more precisely by expanding our expression for $j\cdot A$ in a multipole expansion. I could step through all these details mathematically but it would be overkill - the basic point is that the symmetries of the states the electron is jumping between determine whether that process is allowed or not. Practically, for the snowball process explained above to work, you want the electrons to stay in their excited states for a long time (ie you want them to be metastable) so that the photons get a chance to reach them and snowball off them. The origin of metastable states is usually that: spontaneously jumping from that metastable state to the ground state is forbidden by a selection rule https://en.wikipedia.org/wiki/Metastability#Atomic_and_molecular_physics so falling out of the metastable state is unlikely. This means the probability of the electron spontaneously returning to the ground state is low, but the probability of it returning to the ground state via stimulated emission can be high due to that large factor of $N_n$ compensating. This is good: spontaneous emission spits out random out-of-phase photons, but we want stimulated emission so that we can have in phase photons (that's the point of a laser). So selection rules allow us to choose good metastable states, and that's what allows us to make the most of those excited atoms and get as many stimulated emission events out of them before they all de-excite. But this is a system dependent detail, and plays no essential role in the phenomenon of stimulated emission per se - it's a practical necessity needed to ensure the electrons in a laser stay excited long enough to undergo stimulated emission.
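To connect that snowball picture to something concrete, here is a very rough single-mode rate-equation sketch (a semiclassical caricature with arbitrary parameter values, not a quantum calculation): a constant pump keeps exciting atoms, emission into the mode carries the $(n_{\text{ph}}+1)$ stimulated-emission enhancement, and the cavity leaks photons at a fixed rate.

```python
# Crude single-mode laser rate equations (illustrative only; all parameters arbitrary):
#   dN_exc/dt = pump  - gamma * N_exc * (n_ph + 1)           emission (stimulated + spontaneous)
#   dn_ph/dt  = gamma * N_exc * (n_ph + 1) - kappa * n_ph    gain into the mode minus cavity loss
pump, gamma, kappa = 50.0, 0.01, 1.0
dt, steps = 0.01, 5000

N_exc, n_ph = 0.0, 0.0
for step in range(1, steps + 1):
    emit = gamma * N_exc * (n_ph + 1.0)   # the "+1" is the stimulated-emission enhancement
    N_exc += (pump - emit) * dt           # simple forward-Euler update
    n_ph += (emit - kappa * n_ph) * dt
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}:  excited atoms ~ {N_exc:7.1f}   photons in mode ~ {n_ph:7.1f}")
```

The photon number first snowballs and then plateaus where the gain balances the cavity loss, which is the qualitative behaviour described above.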
• $\begingroup$ An unanswered question from the OP: Where is the energy coming from? Stimulated emission looks unintuitive because we say one photon simply creates a second photon. You explain this with perturbation theory and some formulas but there's very little physical intuition that makes sense of the interaction. Why do atomic states have such a Hamiltonian? What about the atomic dipole? What about population inversion? $\endgroup$ Dec 5 '18 at 20:19
• $\begingroup$ Also, I'm pretty sure you don't square a bra-ket to get an expected value, so how are you getting that the probability of emission is $(N+1)$ and not $\sqrt{N+1}$? $\endgroup$ Dec 5 '18 at 20:28
• 2
$\begingroup$ @StevenSagona, the probability rate is proportional to $|\mathscr{V}|^2$, see equation 4. As for the first question, I thought the intuition was well explained in Peter Shor's post, and I just went through the maths. I'll add an extra section to my answer filling in some of the intuitive details. $\endgroup$
– user213887
Dec 5 '18 at 20:58
• 1
$\begingroup$ @StevenSagona I think I answered your questions; when an electron emits a photon the energy comes from the electron. An electron in an excited state drops down and emits a photon - the point is that this photon is in phase with the photons already bumping into it because of that $N_n+1$ factor. The interaction is just the usual Hamiltonian of electromagnetism, I'm not sure what intuition is needed there. More detail is added at the end of the answer. $\endgroup$
– user213887
Dec 7 '18 at 5:13
The question called for a detailed answer, so I'll show an explicit calculation, using the Schrödinger equation, in a toy model that exhibits stimulated emission. Most of the effort goes into constructing the model and explaining what the various pieces mean. Once this is done, the calculation itself is relatively quick and easy, and the interpretation of the result is straightforward.
The model
A simple type of laser works by putting the molecules of the lasing material into a relatively long-lived excited state, one that would eventually decay on its own (releasing a photon) even if it were not "stimulated." If it does decay on its own, the emitted photon is in a superposition of different momenta, with no preference for momenta parallel to the long axis of the laser. The model will illustrate what happens when other photons, emitted by other previously-excited molecules, are already present. The model includes:
• a single two-level molecule;
• two different photon modes, representing two different momenta with the same magnitude and different (say, orthogonal) directions.
The model involves two parameters:
• a real parameter $\lambda$ that determines the strength of the interaction between the molecule and the photons;
• a real parameter $\omega$ representing the energy of the molecule's excited state (relative to the ground state). The same parameter $\omega$ also represents the energy of a single photon (either mode).
Units with $\hbar=1$ are being used here. Altogether, the Hamiltonian is $$ H = \omega\, a^\dagger a + \omega\, b^\dagger b + \omega\, c^\dagger c + \lambda \big(c^\dagger (a+b) + (a+b)^\dagger c\big), \tag{1} $$ where $a,b,c$ are operators having the following significance:
• $a^\dagger$ and $a$ are the creation and annihilation operators, respectively, for photons with one momentum;
• $b^\dagger$ and $b$ are the creation and annihilation operators, respectively, for photons with the other momentum;
• the operator $c^\dagger$ promotes the molecule from its ground state to the excited state, and the operator $c$ moves it from the excited state back to the ground state.
To ensure that the model involves only two energy levels for the molecule, the operators $c,c^\dagger$ are taken to satisfy the anticommutation relations $$ cc = 0 \hskip2cm c^\dagger c^\dagger = 0 \hskip2cm cc^\dagger+c^\dagger c = 1. \tag{2} $$ In contrast, the photon operators $a,b$ satisfy the usual boson commutation relations $$ aa^\dagger-a^\dagger a=1 \hskip2cm bb^\dagger-b^\dagger b=1 \tag{3} $$ and:
• $a$ and $a^\dagger$ commute with $b$ and $b^\dagger$
• $a$ and $a^\dagger$ commute with $c$ and $c^\dagger$
• $b$ and $b^\dagger$ commute with $c$ and $c^\dagger$
The interaction terms in the Hamiltonian, the terms multiplied by $\lambda$, are $$ c^\dagger (a+b) \hskip1cm \text{and} \hskip1cm (a+b)^\dagger c. $$ The first one describes the absorption of an $a$-photon or $b$-photon by the molecule, and the second one describes emission. Both terms must be present because the Hamiltonian must be self-adjoint. To complete the definition of the model, let $|0\rangle$ denote the state with no photons and in which the molecule is in its ground state, so $$ a|0\rangle=0 \hskip2cm b|0\rangle=0 \hskip2cm c|0\rangle=0. \tag{4} $$ Now, suppose that the molecule has been prepared in its excited state and that $N$ photons are already present in mode $a$, so the initial state of the system is $$ |\psi(0)\rangle = \big(a^\dagger\big)^N c^\dagger|0\rangle. \tag{5} $$ Working in the Schrödinger picture, the state evolves in time according to $$ i\frac{\partial}{\partial t}|\psi(t)\rangle = H|\psi(t)\rangle $$ with $H$ given by (1).
The calculation
At the initial time $t=0$, the right-hand side can be evaluated explicitly: \begin{align*} \left.i\frac{\partial}{\partial t}|\psi(t)\rangle\,\right|_{t=0} &= (N+1)\omega\,|\psi(0)\rangle + \lambda \big(a^\dagger\big)^N (a^\dagger+b^\dagger)|0\rangle \\ &= (N+1)\omega\,|\psi(0)\rangle + |A\rangle+|B\rangle \tag{6} \end{align*} with $$ |A\rangle \equiv \lambda \big(a^\dagger\big)^{N+1}|0\rangle \hskip2cm |B\rangle \equiv \lambda \big(a^\dagger\big)^{N}b^\dagger |0\rangle. \tag{7} $$ The interaction term involving $c^\dagger$ does not contribute to (6), because $(c^\dagger)^2=0$. The commutation relations for the photon operators imply $$ \frac{\langle A|A\rangle}{\langle B|B\rangle} =\frac{(N+1)!}{N!} = N+1. \tag{8} $$ To derive (8) quickly, notice that equation (3) says that $a$ acts formally like the "derivative" with respect to $a^\dagger$, so $$ a\big(a^\dagger\big)^n|0\rangle=n\big(a^\dagger\big)^{n-1}|0\rangle. $$
Now consider the significance of the result (6)-(8). The right-hand side of (6) is a quantum superposition of three terms:
• a term proportional to $|\psi(0)\rangle$ in which the molecule has not yet decayed,
• a term $|A\rangle$ in which the molecule has decayed by emitting an $a$-photon,
• a term $|B\rangle$ in which the molecule has decayed by emitting a $b$-photon.
Of course, this represents only the initial trend, because equation (6) is evaluated at $t=0$. But for the purpose of building intuition with relatively little calculation, this is sufficient.
First consider the case $N=0$, representing the situation with no photons present in the initial state, so the molecule decays on its own, without stimulation. In this case, equation (8) says that the $|A\rangle$ and $|B\rangle$ terms have the same magnitude, so equation (6) says that the photon is emitted in an equal superposition of both momenta, with no preference for either one. This is spontaneous emission.
Now consider the case $N\geq 1$, representing the situation with one or more $a$-photons present in the initial state. In this case, equation (8) says that the squared-magnitude of the $|A\rangle$ term is greater than the squared-magnitude of the $|B\rangle$ term by a factor of $N+1\geq 2$. Therefore, although the photon is still emitted in a superposition of both momenta because both terms are present in equation (6), it is now emitted preferentially with the $a$-momentum because the $A$ term in equation (6) has a larger magnitude than the $B$ term. The ratio $N+1$ says that the more $a$-photons are present in the initial state, the stronger this preference is. This is stimulated emission.
This simple model did not account for the walls that contain the lasing material, but we can suppose that the walls are designed (using mirrors, etc) so that photons in mode $a$ (say, with momentum parallel to the long axis of the laser) remain in the lasing cavity longer than photons in mode $b$. This introduces a slight tendency to have more $a$-photons than $b$-photons after the initially-excited molecules begin to decay, and then the stimulated-emission effect amplifies this tendency more and more strongly as the number of $a$-photons increases. Eventually, the number of $a$-photons being emitted (stimulated or otherwise) balances the number of $a$-photons being absorbed (the Hamiltonian (1) includes both terms), and the process plateaus.
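A small numerical sketch of the same calculation, with assumed (arbitrary) parameter values and both photon modes truncated to a finite cutoff, reproduces the $N+1$ ratio derived above.

```python
import numpy as np
from scipy.linalg import expm

# Truncated-space check of the toy model: both photon modes are cut off at n_max
# photons and the molecule is a two-level system. All parameter values are arbitrary.
N = 3                            # a-photons present initially (assumed for this demo)
n_max = N + 2                    # photon cutoff: room for one extra emitted photon
omega, lam, t = 1.0, 0.05, 0.2   # units with hbar = 1, as in the model above

def destroy(dim):
    """Truncated bosonic annihilation operator."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def fock(dim, n):
    v = np.zeros(dim)
    v[n] = 1.0
    return v

I_ph, I_mol = np.eye(n_max), np.eye(2)
a = np.kron(np.kron(destroy(n_max), I_ph), I_mol)     # a-mode annihilation
b = np.kron(np.kron(I_ph, destroy(n_max)), I_mol)     # b-mode annihilation
c = np.kron(np.kron(I_ph, I_ph), np.array([[0., 1.], [0., 0.]]))   # molecule: excited -> ground

# Hamiltonian (1); the matrices are real, so transposes serve as daggers.
H = omega * (a.T @ a + b.T @ b + c.T @ c) + lam * (c.T @ (a + b) + (a + b).T @ c)

ground, excited = fock(2, 0), fock(2, 1)
psi0 = np.kron(np.kron(fock(n_max, N), fock(n_max, 0)), excited)   # N a-photons, excited molecule
psi_t = expm(-1j * H * t) @ psi0

# Short-time decay branches: emission into mode a (N+1 a-photons) vs mode b (one b-photon).
p_a = abs(np.kron(np.kron(fock(n_max, N + 1), fock(n_max, 0)), ground) @ psi_t) ** 2
p_b = abs(np.kron(np.kron(fock(n_max, N), fock(n_max, 1)), ground) @ psi_t) ** 2
print(f"P(emit into a) / P(emit into b) = {p_a / p_b:.2f}   (expected ~ N+1 = {N + 1})")
```

For small $\lambda t$ the printed ratio sits close to $N+1$, matching equation (8); at longer times higher-order absorption and re-emission processes start to matter.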
Edit: These clarifications were posted as comments, but the trail of comments was becoming long, so I moved the clarifications into this appendix.
As a comment pointed out, this simple model is oversimplified in several respects. In particular, it includes only two photon momenta. A more realistic model should include many photon momenta, and a proof that lasing actually occurs would need to show that the effect of stimulation in a small fraction of those modes is sufficient. However, the purpose of the simple model presented here is not to try to prove that lasing occurs; the purpose is to illustrate the phenomenon of stimulated emission in a simple way.
Another concern was raised about treating the norm-squared of a term on the right-hand side of (6) as a transition probability. That was not the intent. Equation (8) is only meant to say that in equation (6), the contribution of the $A$ term is (initially) growing faster than that of the $B$ term. With a single photon as the stimulator, the emission for that one mode will be enhanced relative to other modes; but emission in the other modes still occurs. Before we interrupt things with a measurement, all of these things are occurring continuously together as part of the quantum superposition according to the Schrödinger equation, but some contributions are growing faster than others, which will affect the distribution of outcomes when a measurement finally does occur.
A comment by Steven Sagona mentioned that true single-photon sources are difficult to prepare. A more realistic source might prepare a state like $$ |0\rangle +\alpha a^\dagger|0\rangle +\frac{1}{2}(\alpha a^\dagger)^2|0\rangle +\cdots $$ with a relatively small magnitude of the coefficient $|\alpha|$, so that higher-order terms are negligible. To analyze stimulated emission when the stimulating photon(s) come from such a source, we can simply replace equation (5) with a superposition involving different values of $N$ (such as $N=0$, $1$, and $2$). Since the Schrödinger equation is linear, this has the effect of replacing equations (7) with the corresponding superpositions. By comparing the norm of each term having a $b$-photon with the associated term that has an extra $a$-photon instead, we again conclude that the latter term is growing faster (at least initially) than the former in terms where at least one $a$-photon was present initially. The overall effect is weaker because the dominant term (the one with no photons present initially) does not include any stimulation, but the stimulated emission effect still occurs in the other terms (the ones that do have photons present initially).
That comment raises an interesting point. Even in this single-molecule model, and even with a true single-photon stimulus so that there is no entanglement in the initial state, the output light still comes out entangled with the molecule. This trend is already evident in equation (6), whose right-hand side is a superposition of two terms:
• A term with $N$ photons and an excited molecule (the term involving $\omega$)
• A term with $N+1$ photons and a relaxed molecule (the term $|A\rangle+|B\rangle$).
The entanglement is even more pronounced in a model with lots of molecules, because the final state is a superposition of many different numbers of molecules having emitted their photons. Since it's entangled, exactly what pure state (if any) best represents the output light (e.g., a coherent state) can be a tricky question, one whose answer probably requires careful consideration of "decoherence".
The original question was: how can "giving" energy (in the form of photons) to electrons stimulate them to drop to a lower energy state?
The key message of this answer is that stimulated emission is not about giving energy to the molecule. Energy must be given to the molecule in order to put it into the excited state in the first place; but the phenomenon of stimulated emission occurs because photons are bosons, as expressed by equation (3). This is what leads to the factor $N+1$ in equation (8).
• $\begingroup$ This is a nice answer, but aren't all the details with the molecule example etc. ultimately unimportant, because the final answer is simply a ratio between a matrix element for a creation operator and a matrix element for an annihilation operator, giving the factor of $N$? $\endgroup$
– user213887
Dec 7 '18 at 4:11
• 1
$\begingroup$ @JulianIngham Yes, you are right. One of the best kinds of insight is knowing how to deduce a result from just a few general features, without needing to work through an example in detail. Your answer illustrates that nicely. Sometimes when I'm not yet confident in my own understanding (like I wasn't in this case), I like to check my understanding by devising the simplest explicit model I can that has those general features and seeing that it really does give the expected result. Since it seemed helpful to me (for whatever reason), I thought I'd post it in case it's helpful to anyone else. $\endgroup$ Dec 7 '18 at 5:04
• $\begingroup$ Oh cool; it's a nice example too and obviously gets a vote from me, I just wasn't sure if there was something technically specific I was missing with the example. $\endgroup$
– user213887
Dec 7 '18 at 5:16
• $\begingroup$ Just writing down two operators and saying that one represents particles with momentum in one direction and the other in the other direction IS hand-waving. There is no argument here that proves why there is a preference in direction; unless you show this for an arbitrary time, with normalized states, properly dealing with at least one dimension in space and a proper dipole coupling, your model does not yet represent stimulated emission. $\endgroup$
– ohneVal
Dec 7 '18 at 15:19
• 1
$\begingroup$ I must say your calculation is misleading in another sense, you are computing the norm squared of a prepared state after no time has passed, and interpreting the number you get as a transition probability, when it should be normalized to 1 if you want to call it a quantum state. $\endgroup$
– ohneVal
Dec 7 '18 at 15:19
A simple read of the Wikipedia article on stimulated emission should suffice (it includes the mathematical explanation, so I see no need to retype it here). You need a few specific ingredients for a laser; a reference I know containing a nice down-to-earth chapter on this is the book An Introduction to Atomic and Molecular Physics by Wolfgang Demtröder. I will proceed to brief you on the very basics, but the topic has a lot more details.
All the quantum mechanics necessary is that electrons have fixed angular momentum (fixed orbits), that larger values of angular momentum correspond to higher energies (larger "orbits"), and that there are certain selection rules for electrons to switch orbits (conservation of angular momentum). In order to switch orbits one must supply or extract some energy, and this energy must coincide with the energy difference between the allowed states (orbits, or angular momentum values if you will).
Stimulated emission can be thought of as spontaneous emission, differing in two ways:
1) Spontaneous emission happens at random times, while stimulated emission happens within a small time window (it needs the presence of a photon background).
2) Spontaneous emission is randomly directed and can have any allowed frequency (still according to selection rules), while stimulated emission happens in the same direction as the background photon field and the light has the same frequency as the photons in the background (as long as they are tuned to an allowed transition).
For it to happen one needs a background field. Its role is to encourage the transition from the higher energy states to lower ones, producing extra photons on the way; but notice that the background photon is a "catalyst" so to speak, it is not consumed, so you end up with two photons. Energy is still conserved, since in total the electron goes from a higher energy state to a lower one, creating a photon carrying the energy difference.
So stimulated emission is the inverse process to absorption, and both effects compete; it is only in environments where the starting distribution has been inverted (explained below) that there is amplification and an overproduction of photons is achieved.
Coming back to lasers specifically. It is essential that you have a population inversion. That means you must have a configuration such that the number of electrons $N_k$ around a certain energy $E_k$ is bigger than that of another mode $N_i$ where $E_i<E_k$, in contrast to the usual thermal distribution where the higher the energy the less populated such a mode is (details on how to achieve this configuration are not needed here). Then the population inversion is attained in this medium, called the active medium, by the use of an energy pump (what constitutes an energy pump varies and details can be found in the book mentioned). This together with an optical resonator (explained below) essentially induces an avalanche of photons through stimulated emission.
The last ingredient for a laser is then the optical resonator, which basically enhances the production of a particular mode (frequency) you are interested in by creating and keeping the photon background necessary for the stimulated emission. Its job is to let light produced in other modes escape while keeping in the modes of interest.
• $\begingroup$ The bounty and the OP want a mathematical explanation of the second and third paragraph. I think the concept of "catalyst" is understood. What's asked for in the bounty is how stimulated emission actually happens mathematically. $\endgroup$ Dec 5 '18 at 20:25
• 2
$\begingroup$ Stimulated emission does not require a "population inversion", but a laser does. $\endgroup$
– ProfRob
Dec 5 '18 at 23:50
• $\begingroup$ Very true Rob, thank you for the correction, i'll edit $\endgroup$
– ohneVal
Dec 6 '18 at 10:21
|
5b2fa62c7f9acd07 | Carlo Rovelli’s Helgoland
I’ve posted a lot over the years on interpretations of quantum mechanics. My writing has tended to focus on comparing the big three: Copenhagen, pilot-wave, and many-worlds. But there are a lot of others. One that has been gaining converts among physicists and others is Carlo Rovelli’s relational quantum mechanics (RQM) interpretation. This is an interpretation that comes up enough in conversation that I’ve always wanted to learn more. So when Rovelli’s book on it was announced, I decided I needed to read it. But Helgoland: Making Sense of the Quantum Revolution took a while to be available in the US, at least in Kindle format. My preorder finally came through last week, so I spent the last few days going through it.
Rovelli is clear at the beginning of the book that this is a partisan work, and he’s not kidding, although this type of partisanship is common in books on quantum physics. This book is about his particular interpretation. He does discuss many of the other major interpretations: many-worlds, pilot-wave, QBism, and physical collapse theories, but he makes clear that his coverage is cursory, and mentions multiple times that the reader can skip these if they want. (I read them anyway, just to see how he’d treat them.)
In another move that I’m starting to see as too common in these types of books, Rovelli’s partisanship includes his description of historical scientists. He sees his interpretation as fitting squarely within the tradition started by Werner Heisenberg, and his descriptions of Heisenberg seem pretty reverent. His view of Erwin Schrödinger, on the other hand, seems hostile, both intellectually and personally. As many authors have done, he describes Schrödinger’s polyamorous lifestyle, but goes a bit further by implying that Schrödinger had pedophilic tendencies. In contrast, in his biographical remarks about Heisenberg, he downplays Heisenberg’s collaboration with the Nazis. (He does mention another scientist who likely didn’t receive a Nobel prize because of Nazi affiliations. Heisenberg had the good luck to receive his Nobel before the Nazis came to power.)
Anyway, Rovelli’s sees Heisenberg’s chief contribution as focusing on observables and then building a theory of the relations between those observables. In his view, Schrödinger’s focus on real waves was a distraction, and the Copenhagen team were right to interpret his wavefunction as a probabilistic mathematical mechanism, a move Schrödinger himself was never happy with. (Although he did grudgingly come to admit the practical benefits.)
The main role of an interpretation is to explain what happens during the measurement process. Quantum objects move like waves, until they’re measured, then they behave like particles. In the classic Copenhagen interpretation, this is usually referred to as the wavefunction collapse. In the strong version of Copenhagen, involving a physical collapse, this was seen as problematic by Albert Einstein, because it involves an instantaneous collapse across all of time and space, leading to nonlocal “spooky action at a distance”, an issue made particularly vivid by quantum entanglement. Weaker versions of Copenhagen only have an epistemic collapse, resembling QBism, and so don’t consider themselves to have this issue.
The big question with Copenhagen is, when does the collapse occur? Niels Bohr’s answer was interaction with macroscopic systems, such as lab equipment, implying that there were different rules for microscopic and macroscopic phenomena. However, no one has managed to find any threshold where a collapse happens. Over the decades, scientists have managed to observe quantum effects in ever larger collections of quantum particles, molecules, and even tiny macroscopic objects. It looks increasingly unlikely that there is any such threshold.
This doesn’t represent an issue for non-collapse interpretations such as pilot-wave or many-worlds, but it does for most collapse interpretations. RQM is a collapse interpretation, but its innovation is to make the collapse a relative event. In RQM, what causes the collapse is an interaction with another physical system. However, the collapse only happens relative to the system interacted with, not with any other system. In other words, a quantum particle can be in superposition relative to one physical system while being collapsed relative to another.
So, if two quantum particles interact, they collapse relative to each other. But to the rest of the world, they remain in a superposition, and the interaction has left them entangled in some fashion. The same logic applies to a quantum computing circuit: relative to each particle in the circuit, once that particle has interacted, the circuit has collapsed. However, for the outside world, until there are interactions with the environment, the circuit remains in a superposition of all its possible states.
Making the collapse relative solves the question of when it occurs. It occurs on any interaction, but only relative to the particles involved in that interaction. Similar to many-worlds, this interpretation sees the entire universe as being quantum in nature. But also like many-worlds, it has radical implications. Physical reality exists in the relations, and only in the relations. This by itself isn’t too radical. It’s compatible with the ontic version of structural realism that we recently discussed. But it also implies that properties of physical systems don’t exist for another system at all until the interaction.
This is highlighted when considering RQM’s claim to local dynamics. Consider a couple of entangled particles, one held by Alice and one by Bob. Even if Alice and Bob are separated by light years, when they measure their particles at the same time, the particles collapse into compatible states. With an absolute collapse, this is a problem, because it implies faster than light communication.
But with a relative collapse, the most relevant collapse doesn't happen until a comparison event, when the results of the measurements have been transmitted (at light speed or slower) to some party, say Charles, who does the comparison. Relative to Charles, the particles haven't collapsed until he receives the results, even though relative to Alice and Bob their respective particles have collapsed. When Charles does receive the results, that's when the collapse happens for him. Now we have a completely local interaction. But this only works because under RQM, the reality of the measurement outcomes doesn't exist for Charles before he receives them.
So, while many-worlds implies a surplus ontology many find far too extravagant, RQM posits a radically sparse ontology that almost seems like separate interacting solipsistic realities, although centered on physical systems rather than just minds. In the latter parts of the book, Rovelli explores philosophy ranging from Marxist thought to eastern Buddhist thinking that resonates with this view.
Rovelli also veers into a discussion of consciousness. He dismisses ideas about the mind having anything to do with the collapse, or that mental processes are quantum, at least any more so than any other physical process, as well as a host of other quantum mystical notions. But in his view, seeing reality as being composed of interacting viewpoints, as RQM does, helps to close the gap between physics and the mind, eliminating a necessity to reconcile an objective view (which doesn’t exist) with subjective perspectival views. The idea is that both sides of the divide are now perspectival. It’s an interesting idea, but I suspect few troubled by the hard problem of consciousness will be convinced.
This is an interesting interpretation, but in my view it has a couple of drawbacks. One is that, as noted above, Rovelli takes a mostly anti-real stance toward the wavefunction. He notes that we never see a quantum wave, only the interference from it. That’s true but we also never see a quantum particle, only the effects it leaves in measuring equipment. And something causes the observed interference effects. The idea that the wavefunction can predict those effects with the accuracy it does, without modeling reality in some manner, seems implausible. But if we let that realism in, then RQM seems in danger of becoming many-worlds with blinders on. (It’s worth noting that an early name for many-worlds was “the relative state formulation”.)
I also see keeping the collapse postulate as a drawback. RQM does defang one of its worst implications, the instantaneous change in reality that concerned Einstein. But it also leaves in a level of indeterminism. Many will see this as a plus, preferring a physics where everything isn’t determined, and might argue that it’s a matter of taste. But I’m in the camp that sees determinism as something that works well everywhere else in science, and worked well overall for centuries before quantum physics. To me, it doesn’t seem like we should dispense with it lightly, particularly while there are options. (This doesn’t mean that quantum physics would ever be operationally deterministic.)
Finally, it shouldn’t be underestimated just how radical the sparse ontology proposed here is. The interpretation takes general relativity as an inspiration. But in the case of general and special relativity, the conclusions are a necessity driven by observation and mathematics. RQM requires a specific type of collapse postulate, a major assumption, albeit one many will consider justified given the alternatives.
But this is quantum physics. We won’t get by unscathed. Interpretations juggle things like determinism, locality, realism, the arrow of time, a single reality versus multiple realities, and now a sparse versus full reality. Every interpretation requires throwing one or more aspects of common sense reality under the bus.
What do you think of relational quantum mechanics? Do you feel like the sparse ontology is worth it?
44 thoughts on “Carlo Rovelli’s Helgoland
1. “RQM posits a radically sparse ontology that almost seems like separate interacting solipsistic realities, although centered on physical systems rather than just minds.”
It’s good to read that you were able to garner this fundamental concept from Rovelli’s book Mike, because it’s the underlying prime of motion and form at every level of interactions within systems not exclusive to just minds.
1. RQM rocks…. It is the most pragmatic approach to quantum physics, one that eliminates the mystery and magic all of the other models imply. In addition, RQM complements what we already know about classical physics; plus, it corresponds concisely with my ontological model of Reality/Appearance Metaphysics (RAM). The sparse ontology thesis irrevocably reduces to an imperative, and that imperative is pansentientism.
RQM is not for everybody, because there are those who are enthralled by the mystery of magic and things that cannot be explained. RQM dismantles that mystery with precision, and at the end of the day I would expect RQM to be the prevailing model, creating a paradigm shift in how we see the world and ourselves.
Party on
2. I thought of you Lee when I read the passage about that aspect. I can see how it resonates with your philosophy.
I do think RQM retains some of its own mysteries, as I noted in the post, but far less than old school Copenhagen.
2. Consider the couple of entangled particles, one held by Alice and one by Bob. If they are quite a distance apart and the entanglement is relative, how does the communication between the entangled particles take place? Are they gravitational waves or electromagnetic waves, and does one particle "beam" the signal only at the other particle, or does it send out a spherical signal? If a direct beam, how does one particle "know" where the other is? If a spherical wave, the power of such a wave would diminish a great deal over distance, so where does that power come from?
Is great puzzlement!
1. Supposedly under RQM, the signal isn’t required. Each measurement event is local and only relative to the local physical systems. The only way a discrepancy could come up is at a comparison event afterward. But for the comparer, Charles, neither measurement has collapsed until he receives the results (using normal transmission methods). When he does, the collapse happens for him, but at that point it’s all local for him. If he subsequently transmits the results to Alice and Bob, then for each of them, the other person’s results collapse when they hear about them, interacting with the already collapsed information of their local results.
This doesn’t work from an objective third person “God’s eye” view, but RQM denies that such a view actually exists.
3. I’ve read several of Rovelli’s books now, and I’ve been underwhelmed in all cases. To me Rovelli dabbles in far out fantasy. Nothing wrong with that, but I can’t take it very seriously until there are probative facts. (As we were just talking about, “Experiment is better than theory.” Also in light of what we were just talking about, all QM interpretations are at this point intuitions.)
“However, no one has managed to find any threshold where a collapse happens. Over the decades, scientists have managed to observe quantum effects in every larger collections of quantum particles, molecules, and even tiny macroscopic objects. It looks increasingly unlikely that there is any such threshold.”
I think there is a threshold, a Heisenberg Cut, but that we don’t understand its mechanism yet. All those ever larger numbers of quantum systems still require special conditions: low temps, EMF shielding, vacuum, and other ways of keeping out the world. Quantum effects seem to require isolation, which, to me, implies they don’t exist out of isolation. The world is decohered.
1. Wyrd,
I think that you would have to agree that the quantum realm is beyond the reach of “probative facts” because of the measurement problem. However, if the system we know as mind is indeed a quantum system, then that system would have the capacity to bridge the gap between the classical world we experience and the quantum world we experience that is mind. That gap will be bridged by the explanatory power of the mind that is intrinsic to logical consistency. Therefore, the system of mind should not be hamstrung by the limitations of our own self-imposed intellectual constructs, a prevailing paradigm which insists that predictive power through experimentation (a posteriori) is superior to the intuitions of a priori.
It’s like Kant asked: Is there any knowledge outside of experience (a posteriori)? And the short answer is yes; because today’s a priori intuitions always become tomorrow’s a posteriori. A priori is summed up best by an intuitive insight of “I know not what, only that it is of high value”. Value always comes first in hierarchy.
1. Sorry, I don’t think we’re on the same page here, Lee. I don’t think I do agree the quantum realm is beyond reach of probative facts. As for our minds being quantum systems, the jury is out. Even if they are, that doesn’t require they have any special quantum understanding power. We don’t seem to have any access to our lowest-level processes.
“…the system of mind should not be hamstrung by the limitations of our own self-imposed intellectual constructs…”
This seems contradictory to me. Why would a mind hamstring itself if it has higher capacity? What would be the point? Sorry, Lee, but I find those intellectual constructs quite helpful. They weren’t made up from the whole cloth but from successful experience. We earned those constructs.
As for a priori versus a posteriori, my most recent post is about the value of intuition and the Yin-Yang of science-intuition. But in the end “Experiment is better than theory.” Always.
From where I sit, the problem isn’t hamstrung minds but minds that believe in stuff that has little chance of being real. Yet there are gems among that dross. Sometimes today’s a priori becomes tomorrow’s a posteriori, but from what I’ve seen that’s the exception, not the rule.
In any event, I think very few things are truly a priori. Math is one along with, per our buddy Kant, time and space. And even space might require observation. (IIRC, Kant did put time as most primal.)
1. “Why would a mind hamstring itself if it has higher capacity? What would be the point?”
I agree. But like it or not, this is exactly what we as individuals do to ourselves, it’s called subjectivity.
I agree. Transformational a priori intuitions are rare, and it is those exceptions to the rule that will transform our understanding of the world. And this transformation will occur one individual at a time.
1. What exactly is the issue with subjectivity? (Do you just mean personal bias?)
It's the rareness of correct intuitions that's the problem. It's Sturgeon's Law for theories — most of them are crap. The difficulty is sorting out the few good ones.
Have you ever read Idiot America by Charles Pierce? You might enjoy it. A basic thesis is that culture used to be better at picking out the rare truly useful bits from the crackpottery and ignoring the rest. We seem to have lost our capacity for grounding our thinking in basic physical reasoning. These days we embrace all sorts of craziness.
2. This is only the second book of Rovelli’s that I’ve read. The first was Seven Brief Lessons on Physics, which I found too brief and too basic. I’d say this one could also have gone into more depth. In truth, this post was supplemented from stuff I remember from his SEP article. Probably some of his points only clicked because I’d already spent time trying to parse that article.
I think “far out fantasy” is too strong. Everything here follows from the postulate of a relational collapse. I do think that’s a major assumption and is the weak point. But if you buy it, the rest pretty much follows.
Definitely the world is decohered. The question is whether it’s also collapsed, and if so, what leads to that collapse.
1. Seven Brief Lessons on Physics was a Rovelli book I hoped I’d enjoy, but I ended up being rather underwhelmed. I suspect we’re not the audience for that one. Before that I’d read The Order of Time (his emergent time theory) and Reality Is Not What It Seems: The Journey to Quantum Gravity. Neither did anything for me. To be honest, I see Rovelli as something of a space cadet. I stand by what I said, that he “dabbles in far out fantasy.”
“I do think that’s a major assumption and is the weak point.”
Which is just another way to say “far out fantasy.”
“The question is whether it’s also collapsed, and if so, what leads to that collapse.”
The world is decohered because wave-functions collapse. As you know, I think the field has turned “collapse” into dogma, but I don’t think it’s quite as shocking as many make it out to be. In the canonical case of photons, for instance, the photon is absorbed by the electron. It physically vanishes. Something abrupt happens there, which to me suggests abrupt isn’t the issue. We just don’t have the math to understand how to modify Schrödinger at that point. A lot of the hand-wringing is just because we don’t have an equation. Yet.
We have a central mystery in QM: Why does this electron absorb the photon and not that one? Einstein’s spooky problem came with an example: a photon released at the center of a sphere will be absorbed by one of the electrons on the sphere’s inner surface, but nothing we yet know can tell us which one.
The lesser mystery is that “collapse” changes the probabilities. It’s lesser because that may be purely an epistemic issue. If so, then of course it changes. That’s what probabilities do when new information is available.
1. What gives you confidence that the math needs to be modified? Or if it does, that the missing variables will provide something like a collapse? All experimental evidence to date is compatible with the equations as they stand, reportedly to several decimal places. And adding variables, as pilot-wave does, reportedly messes up QM’s generalization into QFT.
1. Keep in mind an important primary fact about QM: We know it’s incomplete. There’s some possibility it’s wrong in some fundamental way, but I agree it would make QM a surprisingly effective epicycles theory. Still, we do know pieces are missing in our description of the quantum world.
As a first analogy consider how an “x-squared” equation models the parabolic path of a ballistic object. But it doesn’t contain anything about that object hitting something or just exploding. That requires an additional equation. As a second, consider how GR describes black holes, but they are eternal under GR. Hawking radiation adds a new interaction with reality in addition to GR under which black holes can evaporate.
It’s not a hidden variables thing, but the need to describe an additional situation. The Schrödinger equation describes the evolution of a system of one or more particles. That system interacting with some other system requires something describing that situation.
Also, in these situations, often a particle is created or annihilated. I’ve read that the Schrödinger equation can’t describe this, and certainly I’ve never seen any example in any lecture that does. They’ve all described systems of one or more existing particles that continue to exist.
If the Schrödinger equation gives us the probability of finding a particle in a given location or of measuring its momentum, spin, or other property, what should happen when that particle ceases to exist? All the probabilities have to drop to zero, suggesting either the Schrödinger equation is epistemic or that an important piece is missing.
To me it points to collapse being a thing we don’t have a handle on yet. And as Einstein’s example illustrates, this observation/collapse thing is a central unsolved mystery.
2. On particles being absorbed or emitted, I have to admit I’m not sure what the Schrodinger equation’s relation to that is. I haven’t read about those limitations. (I’d be interested in learning where they’re discussed.) But it’s worth remembering that Schrodinger is just one of the mathematical frameworks used to work with quantum physics, and they all reportedly reconcile with each other.
As I understand it, physicists are able to use QFT, QED, and QCD to predict when a particle will be emitted, when it will be absorbed, and what might emerge in a collision. Those are the theories obviously tested in the LHC and similar experiments. (The recent excitement is reportedly about a possible minute deviation from those predictions.) From what I understand, the Schrodinger structures are preserved, or at least affirmed, in all those other theories, even if Schrodinger itself isn’t the best way to do the calculation.
It seems like it’s possible the math might someday have to be amended. Gravity in particular might eventually lead to it. But raw quantum theory seems famously (infamously?) stubborn in making extremely accurate predictions.
Indeed, and that’s part of my point. The hand-wringing over the Schrödinger equation not having collapse seems unnecessary to me. Firstly, it describes the evolution of a particle system, so maybe like a parabola describing ballistics, we wouldn’t expect it to describe collapse. Secondly, as you say, there are other formulations, including various (unproven) collapse theories (even I have one).
As you go on to say, it’s QFT that describes particle creation and annihilation. Certainly the behavior of those particles is compatible with the Schrödinger equation, but my understanding is that it does require the extension of QFT to describe how they begin and end.
“…even if Schrodinger itself isn’t the best way to do the calculation.”
The point is that the Schrödinger equation may not be capable of doing the calculation. It may not address the situation.
(One reason I started getting into QM math is to find out if what I recall reading is true or to find out how to use the Schrödinger equation to fully describe a photon that is absorbed by an electron. As I mentioned, all I’ve seen so far, by lack of example, backs up what I read. There is also my understanding that the Schrödinger equation has terms for the particles and the energies that affect them, so what happens if terms need to be introduced or removed? It’s not uncommon for terms to have factors that are non-zero only in certain situations, so it’s possible something like that is going on. I need to learn more to know.)
“But raw quantum theory seems famously (infamously?) stubborn in making extremely accurate predictions.”
Except where it fails completely, such as in Einstein’s example of the omni-directional photon released from the center of a sphere. All it gives us is a smooth distribution of probability equal at every point.
There’s a similar situation with a radioactive material. QM gives us no way to predict which atoms will decay but by experiment and theory we know they always follow half-life curves overall. How do the atoms know? How do the electrons in the sphere know that one absorbed the photon, so none of the rest of us can?
Superposition, interference, and the apparently random nature of quantum interaction, are, in my view, the three big mysteries of QM. We observe them experimentally, and have math describing them in various ways (with a big hole regarding interactions), but we don’t understand them.
So I agree it’s a great theory, very effective, but to me it’s clearly not quite ready to come out of the oven. 🙂
1. 🙂 It has its unsolved mysteries, but I’m actually charmed by its counter-intuitive nature. (For instance, some are quite alarmed by entanglement’s apparent end-run around the speed of light, but I think it’s kinda cool how local realism isn’t true, but locality is.)
4. Interesting! Based on your explanations, I now think I get what the RQM interpretation is saying. I find it less simple than Everett, assuming that collapse is supposed to be real and discrete, albeit relative. If collapse is not discrete, I’m not seeing how it’s different from Everett.
1. I agree that Everett is simpler, since he just takes the raw quantum formalism and follows it to the bitter end.
I’m not sure if RQM takes a stand on whether the collapse is discrete. Making it relative seems like it provides an opening for it to not be discrete. We know decoherence isn’t discrete, but that’s separate from the collapse, if there is one.
5. Ok. Well. Um. There’s nothing for it, so ….
1. I finally finished it and thought it was a great read. Very non-technical, low math. Colorful imagery: I especially appreciated his variation on the two slit experiment (where you split particles into a left and right path. If there is no obstacle, all the particles go down. If you put your hand in one path, half the particles go up, half down.) Lots of background info I didn’t know. Etc.
2. While reading I compare against my understanding of metaphysics, and Rovelli’s tracks with mine almost perfectly, so RQM is my current best understanding of QM.
3. [Here we go …]. I do not see RQM as a collapse theory at all, and I was surprised that you do. There is no collapse, only interaction, described by the wave function. What does happen is that the interaction changes something such that where there would have been interference, there no longer will be. But that change happens only in reference to the thing that interacted. If you get Alice’s and Bob’s things together without interacting with them, they will interfere. But if you take out Alice’s thing and look at it/measure it/interact with it, you change it such that it will no longer interfere, but that only applies when YOU (or anything you’ve interacted with, etc.) mess with Bob’s. Same if you put your hand in the left path, you change the particles in the right path such that they no longer self-interfere. BTW, this only makes sense when you get rid of the idea that they are point particles. They’re not, and never were, and don’t become point particles. Whatever they are, they interact with one thing at a time, which is what a point particle would do, but that does not make them a point particle.
1. I figured you’d like that book.
1. Many physicists use an interferometer setup to demonstrate what Rovelli is talking about in his modified double-slit. The intro article in the Ars Technica series I shared a while back has an example.
It’s all about wave mechanics.
3. As I understand it, RQM is not a physical collapse theory. Rovelli’s rejection of wave function realism seems to rule that out. It seems like more of an epistemic collapse theory.
The suppression of interference effects, in and of itself, is actually explained by decoherence, which Rovelli acknowledges in endnote 39 on page 209. But decoherence doesn’t explain the fate of the unobserved outcomes. With only decoherence and nothing else, we end up with Everett many-worlds. Something else is needed for only one of those outcomes to be reality. Of course, you can say there was ever only one outcome that we just became aware of (which is what an epistemic collapse amounts to), but then where did the interference effects come from?
On a particle being some kind of consistent entity that manages to have both wave-like and particle-like interactions, I don’t know. We can speculate that something like that exists, but coming up with a concrete proposal for it is another matter.
1. “With only decoherence and nothing else, we end up with Everett many-worlds.”
Actually I think Rovelli is often regarded as close to Everett, isn’t he? Aren’t the unobserved outcomes still out there ready to be measured from a different relative observer?
1. It might depend on who you ask. I know the Wikipedia interpretations comparison table lists RQM as agnostic on the other outcomes. And the first physicist I heard describe it took that stance. But Rovelli himself seems pretty strongly anti-Everettian. His anti-real stance on the wavefunction seems to rule it out.
1. BTW, have you read this paper? It’s surprisingly readable in many sections (not so much in others), even with my limited understanding of the math.
He has several comments on Everett’s view which also has morphed into several different varieties. Towards the end he writes:
“There is a way of having (perspectival) branching keeping all systems on the same footing: the way followed in this paper, namely to assume that all values assignments are completely relational, not just relational with respect to apparatus or Minds. Notice, however, that from this perspective Everett’s wave function is a very misleading notion, not only because it represents the perspective of a non-existent observer, but because it even fails to contain any relevant information about the values observed by each single observer! There is no description of the universe in-toto, only a quantum-interrelated net of partial descriptions”.
That first sentence seems to reflect that his relational view could be thought of as a type of branching, although perhaps not the form usually thought of by Everettians.
BTW, I think he sums up his entire view in the most succinct manner possible in the paper.
“Main observation: In quantum mechanics different observers may give different accounts of the same sequence of events.”
2. You probably would do better to read the linked paper, but my understanding is that it’s the same sequence of events resulting in two observers observing different end states.
However, before the main observation he writes this which may clarify:
Thus, we have two descriptions of the physical sequence of events E: The description (1) given by the observer O and the description (2) given by the observer P. These are two distinct correct descriptions of the same sequence of events E. At time t2, in the O description, the system S is in the state |1> and the quantity q has value 1. According to the P description, S is not in the state |1> and the hand of the measuring apparatus does not indicate ‘1’.
Thus, I come to the observation on which the rest of the paper relies.
3. As you say, I would have to read the paper. It’s interesting that it’s apparently a case of |1⟩ versus not |1⟩, rather than say |1⟩ versus |0⟩.
4. I took a peek at the paper and read section IIA, which seems to state his thesis. It appears to me as a version of Wigner’s Friend. Observer O, after observing system S, creates a superposition of outcomes [equation (2)].
The condition at t2 is that observer P has not observed either system S or observer O, so they “observe” a superposition of those outcomes. I say “observe” because P doesn’t actually observe anything; it’s just how P would have to describe the situation.
Part of it is that when P does make an observation (at time t3), of S and/or O, they can only collapse a wave-function where that system and observer agree.
He does see the MWI as different. Wave-functions collapse from the point of view of the interacting systems. He’s doing a Schrödinger’s Cat thing where observer O is the cat, system S is the mechanism inside the box, and observer P is the scientist who opens the box. Or O is the Geiger counter and P is the cat. Or O is the scientist and P is someone outside.
This all depends on the view that QM describes the classical world in a meaningful way. (That’s kind of where I get off the bus.)
5. Thanks. I haven’t read that paper. My main source of information, aside from the book, has been Rovelli’s SEP article (co-written with someone else). It was last revised in 2019 and probably gives a more current snapshot of his views:
I think the difference between Rovelli and Everett is the degree of quantum state antirealism vs realism, and whether it makes any sense to reconcile the relative interactions, which as you note could be seen as equivalent to branching, into a unified view of reality. Rovelli asserts that the unified view doesn’t exist, that it’s meaningless to attempt it.
6. “Rovelli asserts that the unified view doesn’t exist, that it’s meaningless to attempt it”.
Yes, I think that’s it. All views are relative. It is just applying the perspective of Einstein’s relativity to the quantum.
6. Interesting, Mike. I appreciate you taking the time to share this as I really didn’t know what Rovelli’s stance was or what relational QM was all about. I’m still a little confused but think I have an inkling. I’ll have to check the book out at some point.
Before I do though, if I understand you, the crux is this: when any two physical systems interact they each register a definite observation of the other. But to a system that hasn’t interacted with them, those two systems or particles could still be in a superposition relative to all other systems. And a third system that interacts with either of the first two could, in principle, register a different observed state of the first two particles than the particles themselves registered when they interacted.
Is that correct?
The part that’s sort of difficult to me is the entanglement example. Bob and Alice fly off to different parts of the universe and take a peek at their entangled particles. Bob observes spin up and Alice spin down let’s say. They each see what they see. A third party, Charles, has not interacted with either of the two entangled particles, so for him the two particles are still entangled. He could observe Bob’s particle to be spin up, as Bob did, or he could observe it to be spin down. And his probability of seeing either one is given by the wave equation or the matrix formulation of QM, etc.
What’s confusing to me is that Charles doesn’t actually measure the particles, he reads the results from Bob and Alice. And it sounds like you’re saying that when he receives and reads the results, then the particles are no longer entangled from his perspective, BUT, he could actually observe that Bob’s and Alice’s particles are different from what Bob and Alice observed? That’s the part that is hard for me to understand. He’s not actually observing the system, he’s reading a text message.
What’s wrong with my restatement of this?
And if I’ve restated it correctly, how the heck do the communications from Bob and Alice permit Charles to observe anything different? He’s reading a statement of what they saw, no? Or is that where I’ve gone awry?
If he can’t observe anything different, and no other physical system can observe something different, then it’s hard to say they’re still entangled for Charles until he receives the measurements from Bob and Alice. And if he IS able to observe something different, then is the problem that this thought experiment is oversimplified and it’s not that he gets a text message from Bob with Bob’s observation, but that in truth he has to physically interact with the photon pair himself to make his own measurement, and THAT could be different?
Thanks in advance, Mike.
1. Thanks Michael. It sounds like you’re on the right track, but the full implications of Rovelli’s view need to sink in for the picture to click.
On the entanglement part, when considering Charles’ comparison, remember that you have to evaluate that event in terms of RQM, not Copenhagen. In Copenhagen, Charles making the comparison is a macroscopic event, and therefore not quantum. The collapses happened at Alice and Bob’s individual measurements, not when Charles receives and compares the information.
However, with RQM, the entire universe, including macroscopic systems, is quantum. (It shares this trait with the Everett many-worlds interpretation.) That means that, for Charles, prior to him receiving one of the measurement results, that measurement and everything that results from it is in a superposition of all the possible results. The whole thing only collapses, for him, when he receives the information. (This might be a little easier to accept when we recall that Rovelli rejects wavefunction realism, so we’re not necessarily talking about entire worlds that disappear when Charles gets his results.)
Now, on the question of him seeing something different than what Alice and Bob see, remember that, relative to Charles, what Alice observes from her measurement and its subsequent transmission, is all in a superposition of multiple results, until Charles gets the information, at which point the whole thing collapses, again, relative to Charles. The same thing happens between Charles and Bob.
In other words, there is never an opportunity for Alice, Bob, or Charles to compare their results in such a manner that they’ll be different (or incompatible). On the face of it, it seems like this says reality can be inconsistent. But that inconsistency only arises from an objective “view from nowhere” that RQM rejects the validity of.
If we reconcile it anyway, that leads to another interpretation, Everett. But if we accept Rovelli’s contention that there is no “God eye” view to reconcile, then we end up with a sparse ontology of interacting viewpoints, with each physical system being the center of a viewpoint, and there being no objective reality beyond that.
As I noted in the post, this is a radical view.
1. “…we end up with a sparse ontology of interacting viewpoints, with each physical system being the center of a viewpoint, and there being no objective reality beyond that.”
It might be a radical view, but I think Rovelli is on the right track. This reminds me of how the individual cells that repair cuts, broken bones or bruises work. Those cells couldn’t care less about what the rest of the systems within the body are doing, or whether those other systems even exist. The cells that repair the body work 24/7 as solipsistic systems in complete isolation, expressing their own unique qualitative properties until their job is done.
Rock on Rovelli…..
2. Thanks, Mike. I think I have a clearer picture now. The key for me was understanding that, in my example, because Charles hasn’t previously interacted with Bob’s or Alice’s photon, (or Bob or Alice for that matter), he is just as likely to get a message from Bob that reads “spin up!” as he is “spin down!” and that whichever occurs, upon receipt of this information his picture of reality will be consistent. It is possible for Bob to have perceived the opposite result when he first observed the photon, but his reality will be consistent as well because what’s true in his world, is what he will report in his note to Charles, and so on and so forth. It does have an MWI feel to it for sure…
1. That was my impression too when I first read about it. And some physicists seem to agree. But Rovelli sees RQM as very distinct from MWI. I think he leans on wavefunction antirealism to rule out the other worlds. Although he does say in the SEP article that RQM is “metaphysically neutral”.
For my May 2006 diary, go here.
Diary - June 2006
John Baez
June 2, 2006
It's been a hectic 72 hours. On Wednesday I gave a colloquium at the Perimeter Institute (in Waterloo, Ontario). Later I had dinner with Fotini Markopoulou and Lee Smolin at a great restaurant called Jane Bond; we talked about quantum mechanics. Thursday morning I got up early and rented a car. Waiting for the car rental company to pick me up, I ran into Jeffrey Bub, who turned out to have given a talk on Tuesday about the importance of our inability to duplicate quantum information - also a theme of my talk. He'd asked a good question at my talk, but I hadn't recognized him!
Anyway, I drove an hour and a half to the University of Western Ontario (in London, Ontario) where I spoke to Dan Christensen about homotopy theory and gave a lecture on where we stand in fundamental physics. I had dinner at a Thai restaurant with Dan, his student Igor Khavkine and his postdoc Josh Willis. We talked about spin foam models, especially Josh's new paper. I spent the night in a hotel in London, and today (Friday) I had breakfast with Dan and we talked more about our joint math projects. Then I drove back to Waterloo, took a cab to Toronto, and flew to Boston.
Insane, really - I'm not really practiced enough to stay completely calm while trying to make so many connections. I easily imagine all the things that could go wrong. But it all somehow worked, despite getting lost about 4 times while driving to London, and a flight delay due to thunderstorms in Boston.
Now I'm in Cambridge, Massachusetts, in Kendall Square - right next to my old grad school, MIT. I'm here for some top-secret business that I'd love to talk about, but can't. I'm staying at the Kendall Hotel. I don't think it was here back when I was a grad student (1982-1986). It may have still been a firehouse. Kendall Square was pretty dumpy back then, but part of why I wanted to come here was to see how Cambridge has changed.
I can already tell it's gotten gentrified, just like everyone says. As I was checking in here, someone walking out asked their friend "Did you know this is the most trendy boutique hotel in Cambridge?"
Woooh! I feel like a bigshot now. They probably pay some guy to keep walking in and out, saying that. Back when I was a longhaired grad student, I don't think the phrase "boutique hotel" had even been invented. There were fewer rich people; fewer poor people too.
I need some sleep, even though internet access makes me want to stay awake and have fun....
June 4, 2006
My father had a stroke. It sounded very scary in the email I got from my sister yesterday. When I called my mom yesterday she said he had already recovered to the point of being able to talk and walk. She was making him do lots of exercise. Today I called her again and my father answered. "Hi!" he said, "What a surprise!" He was expecting my uncle. I was the one who was surprised - shocked, in fact, that he sounded so hearty, and so obviously not just faking it. Whew - amazing! I was and am planning to visit them in two days. I'm relieved that it won't be a tragic occasion.
June 5, 2006
An interesting article on the rise of people who plan to remain single all their lives:
Some statistics:
June 13, 2006
I'm back at the Perimeter Institute - back from visiting my parents in DC. I was immensely relieved to find my dad hadn't suffered visibly from that stroke, or whatever it was - it's not even clear what it was. He's not much changed from how I saw him last. Unfortunately, this means that he is forgetful, arthritic, and very weak; he needs a walker to get around, and moves very slowly. He only gets out of the house when my mother drives him to the library or to his physical therapist. He finds this depressing - he says it's like he's already entered the afterlife. Somehow he manages to soldier on. I naturally found myself thinking about his future, and mine... how we'll probably all wind up in nursing homes.
When we're young we do a great job of ignoring these issues. When we're middle-aged it's easy to lose ourselves in work and raising of children. It's surprising how long we can go on pretending old age and death are things that happen to other people. But the hand of time hangs heavy on us all.
I could say much more, but I'm not quite sure how personal I want this diary to be. Here's a picture of my parents' house:
You can also see a closeup - my mom helped design this house, and she's very proud of it. Also: my dad, my mom, and a necklace my mom made - she spends a lot time creating jewelry these days.
Here are some notes from the clash of civilizations, written while reading the Washington Post when I was visiting my parents in DC:
I've been reading a quirky and fascinating book on the history of Chicago and its architecture: The energetic optimism of Chicago in the late 1800s was something really unique. It was picked up by Sullivan and others... though they rejected aspects of its rampant commercialism. It's nice thinking back on the Chicago architecture tour that Tom Fiore took me on not long ago. We saw some buildings by Sullivan.
Today I went on a little tour of the Institute for Quantum Computing with Scott Aaronson. Raymond Laflamme showed me his nuclear magnetic resonance lab, and also the lab where they create entangled photons for quantum cryptography. With any luck, at the end of June they'll beam pairs of entangled photons to the IQC and Perimeter Institute from a taller building somewhere between the two. This will allow them to communicate in a way that nobody can intercept without it being noticeable. Not that the IQC and Perimeter Institute have anything secret to talk about! Just a demonstration.
After Indian food and lunchtime discussion at the IQC, I felt a bit listless from lack of sleep the previous night, which I'd spent writing "week234". Luckily, John Moffat came by my office to talk about a fiendishly clever attempt to solve the cosmological constant problems using parastatistics. Alas, my technical understanding of parastatistics is almost zilch, but we still had an interesting conversation.
Then I whiled away the rest of the day correcting the dissertation of my student Toby Bartels and attaching emails about music theory to the Addenda of "week234".
Right now I'm listening to Miles Davis' E.S.P., wondering yet again why more people don't say this is his greatest album.
The fact that I'm sitting here listing the things I did today, instead of actually doing something, is yet another sign that I'm feeling low-energy.
June 14, 2006
At 11 am I had an appointment to talk with Howard Burton, executive director of the Perimeter Institute. Among other things, we discussed the future of fundamental physics. We agreed that dark matter, dark energy and other cosmological issues are where it's at. He wondered: will we understand them better in 20 years or so? None of our current theories seem to be making much of a dent in these questions.
I tried out my latest idea on him: finding a real solution to these questions might require years of fumbling around with crude theories that seem "insufficiently elegant" to people raised on the Standard Model, string theory or loop quantum gravity. Something more like Balmer's formula or the Bohr atom than the Schrödinger equation. Balmer was a teacher at a girls' school in Switzerland who dreamt up a formula for some of the frequencies of light emitted by hydrogen. Later Rydberg generalized it to get the other frequencies.
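For reference, here are the standard textbook forms of those formulas (my addition, using the usual modern values rather than Balmer's original wording):

```latex
% Balmer's formula for the visible hydrogen lines (n = 3, 4, 5, ...):
\lambda = B\,\frac{n^2}{n^2 - 4}, \qquad B \approx 364.6\ \mathrm{nm}

% Rydberg's generalization to all the hydrogen series (n_2 > n_1):
\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right),
\qquad R_H \approx 1.097 \times 10^{7}\ \mathrm{m^{-1}}
```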
If some high school teacher proposed this formula today, would we dismiss it as mere coincidence, noting that it doesn't work for other atoms? We seem to think physics has progressed beyond this point now... but has it, really? MOND (modified Newtonian dynamics) has a similar jury-rigged quality: it does surprisingly well as a competitor to dark matter for explaining the anomalous rotation of many galaxies, but it does badly on other things. Maybe it has a kernel of truth. Maybe it will take a Bohr to spot that kernel of truth, and then a Schrödinger or Heisenberg to formalize it.
Later, John Donoghue gave a talk about quantum gravity corrections to the 1/r^2 force law, derived from effective field theory. Nice stuff! Any solid piece of information about quantum gravity is a precious gem.
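If I have the effective-field-theory result right (this is my addition, from memory of the published papers, so the coefficients should be checked against the talk), the corrected Newtonian potential comes out as

```latex
V(r) = -\,\frac{G m_1 m_2}{r}
\left[\, 1
+ 3\,\frac{G (m_1 + m_2)}{r c^2}
+ \frac{41}{10\pi}\,\frac{G \hbar}{r^2 c^3}
+ \cdots \right]
```

where the second term is the classical post-Newtonian correction and the third is the genuinely quantum one, absurdly tiny at any accessible distance.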
Another low-energy day - apart from the above, I mainly kept myself occupied by adding comments to the Addenda section of week234, which was about the math of music. It's fascinating how many of my math friends had deep things to say about this. It seems to support the stereotype that a lot of mathematicians are into music. Like math, music can take us outside ourselves, into a beautiful world of abstract patterns, where everything is right. For a while, at least, it lifts that hand of time that lays so heavy on us.
June 15, 2006
Dan Christensen came by and we continued our work on smooth homotopy theory. The ups and downs of research: we almost decided to give up on this project, when I mentioned an idea we had at the end of our last session... we got excited, talked a bunch more, and when we had to quit, things seemed to be working just fine!
We took a break to listen to talks about loop quantum gravity and black hole entropy by Danny Terno, Saurya Das and Arundhati Dasgupta. I think I've put in too much time working on this subject to find it interesting or even bearable anymore. It doesn't help that I have a headache.
Martin Rees writes:
This is from: He also writes:
The decisions that we make, individually and collectively, will determine whether the outcomes of 21st century sciences are benign or devastating. Some will throw up their hands and say that anything that is scientifically and technically possible will be done - somewhere, sometime - despite ethical and prudential objections, and whatever the laws say - that science is advancing so fast, and is so much influenced by commercial and political pressures, that nothing we can do makes any difference. Whether this idea is true or false, it's an exceedingly dangerous one, because it engenders despairing pessimism, and demotivates efforts to secure a safer and fairer world. The future will best be safeguarded - and science has the best chance of being applied optimally - through the efforts of people who are less fatalistic. And here I am optimistic. The burgeoning technologies of IT, miniaturisation and biotech are environmentally and socially benign. The challenge of global warming should stimulate a whole raft of manifestly benign innovations - for conserving energy, and generating it by novel 'clean' means (biofuels, innovative renewables, carbon sequestration, and nuclear fusion). Other global challenges include controlling infectious diseases; and preserving biodiversity.
But, even in this 'hyper-extended' timeline - extending billions of years into the future, as well as into the past - this century may be a defining moment. The 21st century is the first in our planet's history where one species has Earth's future in its hands, and could jeopardise life's immense potential. I'll leave you with a cosmic vignette. We're all familiar with pictures of the Earth seen from space - its fragile biosphere contrasting with the sterile moonscape where the astronauts left their footprints. Suppose some aliens had been watching our planet for its entire history, what would they have seen? Over nearly all that immense time, 4.5 billion years, Earth's appearance would have altered very gradually. The continents drifted; the ice cover waxed and waned; successive species emerged, evolved and became extinct.
But in just a tiny sliver of the Earth's history - the last one millionth part, a few thousand years - the patterns of vegetation altered much faster than before. This signaled the start of agriculture. The pace of change accelerated as human populations rose.
But then there were other changes, even more abrupt. Within fifty years - little more than one hundredth of a millionth of the Earth's age, the carbon dioxide in the atmosphere began to rise anomalously fast. The planet became an intense emitter of radio waves (the total output from all TV, cellphone, and radar transmissions.)
If they understood astrophysics, the aliens could confidently predict that the biosphere would face doom in a few billion years when the Sun flares up and dies. But could they have predicted this unprecedented spike less than half way through the Earth's life - these human-induced alterations occupying, overall, less than a millionth of the elapsed lifetime and seemingly occurring with runaway speed?
The answer depends on us.
Simple stuff, but worth remembering. This is from:
June 17, 2006
Reading a copy of The New York Review of Books in a cafe on a hot day here in Waterloo, sipping a raspberry-cranberry smoothie, I was struck by a couple of poems from this book:
Tonight, for the first time in many years
there appeared to me again
a vision of the earth's splendor:
in the evening sky
the first star seemed
to increase its brilliance
as the earth darkened
until at last it could grow no darker
And the light, which was the light of death
seemed to restore to earth
its power to console. There were
no other stars. Only the one
Whose name I knew
as in my other life I did her
injury: Venus,
star of the early evening,
to you I dedicate
my vision, since on this blank surface
you have cast enough light
to make my thought
visible again.
June 18, 2006
This was my last weekend in Waterloo. My student Jeff Morton showed up today - he couldn't make it sooner, since final exams just ended at UCR - and we talked a bit with Aristide Baratin about Freidel and Baratin's new paper describing a spin foam model that gives ordinary quantum field theory on Minkowski spacetime. I'm pretty excited, because we conjecture that this spin foam model is the same as Crane and Sheppeard's spin foam model based on a gadget I invented called the Poincaré 2-group. Higher category theory may finally be sneaking into ordinary physics!
But alas, in my conversations with Baratin and Freidel, we only made a little preliminary progress on proving this conjecture - and now I have to go. I return to Riverside on Tuesday, where Lisa awaits me. On Friday she leaves for Wuhan, for a conference on Chinese archaeology. A bit more than a week later, on Monday July 3rd, I'll meet her in Shanghai, where we'll spend the summer.
So, Jeffrey and my other student Derek Wise will have to do their best to make sense of this stuff with Laurent and Aristide. But, I have some tricks up my sleeve which may allow me to make some progress while I'm in Shanghai.
Lisa and I hope to have wireless internet access in our apartment in Shanghai, by the way. So, with any luck, this online diary will continue. It should be an adventure - a summer in the biggest city in China!
June 20, 2006
I got back home yesterday. Ah, it's nice just to see my back yard again...
It's so peaceful here.
In the news today, the Editorial Projects in Education research center reports that the 2006 graduation rate for US high schoolers is only 70%! In Los Angeles, the figure is only 44%! I'm curious how this compares to European countries. Does anyone know? Apparently the US dropout rate has been underestimated by the states - you can see details here. So, European figures could also be misleading....
On the bright side, a study by Julio Licinio et al reports that suicide rates in the US have dropped by about 15% since 1988 - the year that Prozac went on the market. Suicide rates had been fairly stable, around 12.9 per 100,000 per year, all the way from 1870 to 1988. Since then the rate has dropped to 10.9. Nobody knows if this drop is due to the introduction of Prozac and other selective serotonin reuptake inhibitors, but it's a plausible hypothesis. It would be really, really cool if suicidal despair could be reduced by rejiggering serotonin levels in the brain.
June 21, 2006
Ever wonder why the US is bickering so much with Hugo Chávez, the President of Venezuela?
One reason is that Chávez is a leftist who likes to throw his weight around. But another is that Venezuela is sitting on top of lots of heavy oil. This is a gooey substance - a form of "unconventional oil" - that our economy will naturally turn to as conventional oil supplies start running out. Let me quote a little of this paper:
Unconventional oil is an umbrella term for oil resources that are typically more challenging to extract than conventional oil. While many unconventional oil resources cannot be economically produced at the present time, two exceptions are extra-heavy oil from Venezuela's Orinoco oil belt region and bitumen - a tar-like hydrocarbon that is abundant in Canada's tar sands. These resources are already being economically produced and are likely, in coming years, to become increasingly important to global oil supplies generally, and to U.S. oil security in particular, given their close proximity to U.S. markets.
In 2002, the Oil and Gas Journal accepted Canada's classification of 174 billion barrels of oil sands as established reserves and Canada became the second largest oil reserve-holding nation in the world after Saudi Arabia. If the 235 billion barrels of extra-heavy oil that Venezuela considers recoverable, but that are not currently acknowledged as established or proven, are re-classified in the same way as Canada's oil sands, Venezuela would be credited with the largest oil reserves in the world.
Just to give you some sense of what this means: as of 2006, the Oil and Gas Journal said the total proven worldwide oil reserves were 1,293 billion barrels. (This counts the Canadian oil sands listed above, but not the Venezuelan heavy oil.) The Energy Information Administration, run by the US government, guesses that these reserves will grow by 730 billion barrels over time, and throws in a guess of 939 billion extra completely undiscovered barrels, for a guess of 2962 billion barrels of oil left worldwide.
In 2003 the world used 29 billion barrels of oil per year. By 2030, the EIA predicts this demand will grow to 43 billion per year.
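A quick back-of-envelope check of those figures (my own arithmetic, not the EIA's), just to see how the reserve guess stacks up against consumption:

```python
# EIA-style guess at total remaining oil, in billions of barrels
proven = 1293          # proven reserves as of 2006 (incl. Canadian oil sands)
reserve_growth = 730   # expected growth of existing reserves
undiscovered = 939     # guessed undiscovered resources
total = proven + reserve_growth + undiscovered
print(total)           # 2962 billion barrels, matching the figure above

# Years of supply at constant consumption (very crude: demand actually grows)
print(total / 29)      # ~102 years at the 2003 rate of 29 billion barrels/year
print(total / 43)      # ~69 years at the projected 2030 rate
```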
They predict that oil use will peak sometime between 2055 and 2065, and crash quite rapidly after that. If something like this comes to pass, Venezuela will be very important in the years to come... and Canada too, but I'm sure the US feels more threatened by Venezuela!
For more information, try this:
I can't see the EIA prediction that oil use will peak around 2055-2065 on their website. I found it here: The time at which peak oil will occur is highly controversial, and nobody else seems to think it will occur so late. A lot of people think it's happening soon, or even that it's already happened! I can't tell who's right. It's one of those questions that's so important that everyone likes to tell their own story about it:
June 22, 2006
Let's ponder that chart up there. Most people are arguing about when peak oil will happen, not whether. And, if we take the long view, the disagreements are minor: everyone who contributed a line on the chart says sometime between now and 2070. An updated version of the chart shows even better agreement.
So, the question is: what next?
This is actually a huge interlocking network of questions. How much does the whole "growth is good" philosophy of economics rely on the assumption of ever greater energy usage? When we hit the wall, what will happen? Can economic growth occur in ways that don't require greater energy usage?
Will we decide that perpetual economic growth is an unreasonable goal for occupants of a finite planet? Or could we revamp our concept of "economic growth" to make it a bit subtler and less destructive? There are, of course, vast untapped reaches of ethical, spiritual and intellectual growth waiting to be explored. Why are they almost neglected in our current definition of "economics"? Can we change this? Will we?
Or: are we so locked into our current course that the carbon burning economy gets pushed to its logical limit, despite the cost of global warming? On December 18, 2005 I mentioned an article in Wired listing various forms of carbon we have left to burn, measured in oil barrel equivalents. Here are the biggies:
You can see where the pro-growth folks will wind up: digging for methane hydrates under the Arctic permafrost and the bottoms of seabeds. If we burn all this stuff, we'll have a burst of carbon dioxide emission that makes what we're seeing now look puny. You can see how carbon dioxide goes hand in hand with global temperatures:
We see here the last 4 glacials (or "ice ages") in the last 400,000 years BP - "before present". Notice the incredible red spike at the far far right of the graph: that's what we're doing now! If we burn through all the methane hydrates, this will shoot way off the graph, and so will global temperatures.
To get a feel for some numbers: in 2003, people around the globe consumed about 440 quintillion joules (420 quadrillion BTU) of energy, mostly fossil fuels. This is the energy equivalent of 72 billion barrels of oil, and it caused the emission of roughly 8 billion tons of carbon into the atmosphere.
Doing this sort of thing for about a century caused the red and blue spikes on the edge of that graph. Of course, energy usage started out much lower a century ago... so multiply all the numbers in the previous paragraph by about 20 or 50, and you'll get the figures for the last century.
But: to get the figures for what'll happen if we burn all the methane hydrates, you have to multiply those numbers by about a thousand!
Of course, we wouldn't burn this stuff all of a sudden, so there will be time for some CO2 to get eaten up by various processes.
Nonetheless, we're talking about a major disruption of the climate if we don't end our carbon addiction. Something orders of magnitude greater than what we've seen so far.
The moral: the oil peak may be upon us, but the end of cheap oil won't save our climate, because the carbon peak will be much bigger - unless we move towards other energy sources, or less energy consumption.
(Here are my calculations and sources, so you can catch my mistakes if you want: there are lots of weird units involved. About 420 quadrillion BTU of energy were used in 2003, according to the EIA, which doesn't use metric. A barrel of crude oil equals roughly 5.8 million BTU. So, the energy usage was equivalent to 72 billion oil barrels. The actual oil usage was about 150 quadrillion BTU, or 25 billion barrels, or 36% of all energy usage. Burning a quadrillion BTU of fossil fuel causes the emission of roughly - roughly - 20 million tons of carbon. Of course it actually depends on how much hydrogen the fuel contains - so, 26 million tons for coal, about 20 million for petroleum, versus only 15 million for natural gas. But, I'm just trying for rough estimates here, so I'm cutting all sorts of corners: I should subtract the amount of energy not coming from fossil fuels, for example - about 10% or so. More carefully prepared statistics on carbon dioxide emissions are available from the IEA. Finally, a BTU is 1055 joules, so 420 quadrillion BTU is about 440 quintillion joules, or 4.4 × 10^20 joules.)
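Here is the same arithmetic as a small script (just redoing the rough conversions above, with the same corner-cutting):

```python
BTU_TO_JOULES = 1055         # joules per BTU
BARREL_BTU = 5.8e6           # BTU per barrel of crude oil
CARBON_PER_QUAD_BTU = 20e6   # tons of carbon per quadrillion BTU of fossil fuel (rough)

energy_btu = 420e15          # world energy use in 2003, in BTU

joules = energy_btu * BTU_TO_JOULES
barrels = energy_btu / BARREL_BTU
carbon_tons = (energy_btu / 1e15) * CARBON_PER_QUAD_BTU

print(f"{joules:.2e} J")                    # ~4.4e20 joules (440 quintillion)
print(f"{barrels:.2e} barrels")             # ~7.2e10, i.e. 72 billion barrel equivalents
print(f"{carbon_tons:.2e} tons of carbon")  # ~8.4e9, roughly 8 billion tons
                                            # (before subtracting the ~10% non-fossil share)
```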
June 23, 2006
Lisa left for Wuhan at 2 a.m. today - she's going to a conference on Chinese archaeology. I spent the day catching up with James Dolan, who has been thinking a lot about an intricate web of ideas related to Dynkin diagrams, including Vaughan Jones' work on subfactors and its relation to the McKay correspondence.
I was happy to see that International Astronomical Union has officially approved names for the two newly discovered moons of Pluto - Nix and Hydra. Here's a picture of them taken by the Hubble space telescope:
While visiting my sister in DC a while ago, we saw a bunch of sparrows living in the huge mall at Tysons Corner. This made me wonder - yet again - about why some animals seem so much better than others at living around humans. Sparrows, rats, pigeons, cockroaches and coyotes do well. Turtles, frogs, manatees, passenger pigeons and lions don't. I believe all animals that don't do well around us will either go extinct or wind up living at our sufferance in zoos or game reserves.
So, we are selecting the animal kingdom for certain traits. Animals either need the traits that let them eke out an existence in a human dominated world, or they need to be cute enough that we'll take care of them. Otherwise they will die.
This is a strange new kind of selection pressure. It's part of what Bill McKibben calls The End of Nature.
So, what traits do animals need to survive well around us? My sister just sent me an interesting article about this:
Greenberg's noticed that animals differ vastly in their "neophobia" - their tendency to shy away from new things. A chestnut-sided warbler will not eat its favorite food if a new object is placed nearby. A bay-breasted warbler chows down happily:
Greenberg hypothesized that since humans create a rapidly changing environment, animals with less neophobia will fare better around us.
But, it turned out that some species closely associated with us are among the most neophobic of all! Mallards, which get along well with people, are more neophobic than wood ducks. Norway and black rats, ravens, crows, and house sparrows are all highly neophobic! This is why it's hard to trap or poison these critters. And that's part of why they do well around us.
In short, "persecuted commensals" - animals that require human presence to do well, but which we keep trying to kill - must balance adaptability with neophobia. They need to keep adapting to new environments and trying new foods, but avoid our sneaky traps. They need to be curious... but still cautious.
That's what Greenberg says. And it makes me wonder: does this balance require a kind of intelligence? Are we selecting for intelligence?
June 24, 2006
I drove to the coast with James Dolan to visit my friends Chris Lee and Meenakshi Roy. Among other things, we went on a long walk on the beach from Playa Del Rey almost down to Hermosa Beach. Chris and Meenakshi study cool stuff like alternative splicing in human genes and evolution of drug resistance in HIV. But, Chris wants to do more theoretical work on bioinformatics, and he's writing a book about it that starts with the fundamentals: Bayesian reasoning, entropy, and so on. So, we mainly talked about that sort of stuff. Chris described a conjecture about entropy maximization, and Jim came up with an interesting idea for deriving the maximum entropy principle from Bayes' law! I need to find out if someone has already worked on these ideas....
According to Chris, people in bioinformatics are expected to run "labs", following the pattern in other branches of biology. They spend lots of time managing grad students, applying for grants, and so on - leaving little time to talk with colleagues and dream up new ideas. Each lab is like a little business competing with the rest in cranking out data. It's very different in math and theoretical physics. There are reasons for this, to be sure, but it seems that now there's enough data in biology to create a niche for "theorists" who spend some time thinking about what it all means.
June 25, 2006
More about animals living with people:
A sad thing about visiting my parents' beautiful house in Great Falls, Virginia was seeing how deer have overrun the woods. With no natural predators to keep their population down, they eat every last little bit of plant life they can find; their population must be limited by starvation. So, the forest has no brush in it... and no new saplings! It's a dying forest.
I mentioned how coyotes have moved into this area. Unfortunately, coyotes don't eat deer. At least, not often - maybe occasionally they grab an unlucky doe, but they prefer much smaller food, like mice.
Luckily, my sister said that mountain lions have entered the area! I hope they eat lots of deer and not too many people. Here's an article on a similar phenomenon in New England:
Tracking the Cats
Mountain Lions Roam Region's Forests - Origins a Mystery
Wendy Williams
Northern Sky News
June 2002
In September 2000, less than 150 miles north of Boston, hunter Roddy Glover was following a wildlife trail through the woods when a tawny-colored animal caught his eye. At first he thought it was a deer, but he soon realized it was some kind of cat. As the cat came closer, Glover saw that it was much too big for a bobcat, the only wild feline known to roam that area.
He lay low in the ferns to watch. “Then—it kinda shocked the hell out of me—I realized it was a mountain lion. And she had a kitten with her.”
Mountain lions were extirpated from New England by early in the last century, often hunted for the bounty placed on their tails. For decades, sightings of mountain lions roaming in New England’s north woods have been steeped in controversy. Those who believe in the presence of mountain lions have often been considered apt to believe in Bigfoot. Today, most wildlife biologists agree that there is increasing evidence of mountain lions in the area. But whether or not the animals —also known as catamounts, pumas, cougars or panthers—are breeding here remains unclear.
As for Roddy Glover, he wanted proof that he wasn’t crazy. Seeing tracks left by the female mountain lion in the mud, he called state biologist Keel Kemper, who arrived at the Monmouth, Maine site within the hour, looked at the tracks, took photos and made a plaster cast.
“This is a big cat print,” says Kemper of his plaster cast. “But if I had only this cat print, I would be foolish to say there was no doubt it was a mountain lion. I have Roddy Glover, experienced outdoorsman, who watched the cats for at least five minutes, from only 50 yards away. I’m about as convinced as I could be.”
A week later at the same location, Glover found another set of what he believed were mountain lion prints left on railroad ties. This time a biologist who had done mountain lion research out west came. “Yep,” Glover quotes the biologist as saying, “those are mountain lion prints.”
In over 60 years, this is the first sighting of a mountain lion roaming free through New England’s forests that is officially confirmed by accompanying physical proof (the last was a lion killed in northwest Maine in 1938). But there have been a number of credible sightings and several other tantalizing occurrences in recent years. In 1997, near Massachusetts’ Quabbin Reservoir, wildlife tracker John McCarter found a deposit of large scat covered with debris in the fashion of a mountain lion. McCarter, and tracker and teacher Paul Rezendes, sent the scat to a DNA sequencing lab at New York’s Wildlife Conservation Society. Those tests showed it to be mountain lion scat, a finding later confirmed by a second qualified DNA testing lab at Virginia Polytechnic Institute.
Rezendes, author of Tracking and the Art of Seeing, has been following up on McCarter’s finding: “We’re going to make more of a concerted effort to find something. Now we’re going to set a track line out this winter... We will be following up any credible sightings. Anybody who has tracks, scat, anything like that that sounds credible—if we find something, I’ll be ready to go.”
Massachusetts state biologists accept that the scat was probably mountain lion, but question the animal’s origins. “One could speculate that a captive cougar escaped or was released in the area and survived long enough to feed on a beaver and leave this tangible evidence,” wrote Massachusetts wildlife biologist Susan Langlois.
Throughout northern Maine, Vermont and New Hampshire, an increasing number of sightings by very credible and experienced outdoorsmen have been reported. None of these have been confirmed by physical evidence, however. Some observers have followed tracks in the snow. In the Brattleboro-Putney area of southern Vermont, in the winter of 2000, a number of independent sightings were reported over a series of several days. But to date, nothing has been confirmed.
“We have a semiformal policy of taking all sightings and all calls,” says Vermont state wildlife biologist Doug Blodgett. “We’re documenting everything we get, including misidentifications. We’re putting it on a data base and we’re keeping track.” Blodgett says that when biologists follow up on many of the calls, the animal turns out to have been a bobcat, a feral house cat, a coyote—or even a deer.
Because of the similarity in coat color, it’s quite common for the most experienced people to mistake a deer in a low-crawl for a mountain lion. “I had that experience myself once,” says Blodgett. “One night I was certain I was seeing a mountain lion, but when I checked the tracks it was a deer.” Biologists across the continent tell similar stories of mistaken identities.
Nevertheless, many regional experts agree that, on at least a few occasions, observers are reporting valid sightings. But, says Blodgett, it is not clear where the lions are coming from. “We have a lot of people who are quite cranked up about this, who really want to believe that the lions are here,” he says. “Some have speculated that there have been some intentional releases. They’re commercially available—you can buy them on the Internet.”
I keep hearing that there are mountain lions in the park behind our house, but I've never seen one - which is just fine with me. Do you know what to do if you meet one?
Some good news: Santa Monica has banned styrofoam and other non-recyclable plastics for businesses like fast-food restaurants. This stuff is virtually indestructible and accumulates on beaches and elsewhere. It's made of petroleum, so it's getting more expensive, and people are naturally turning to cups and plates made from corn starch, sugar cane, and other biodegradable materials.
Some bad news: this summer we'll probably see lots of wildfires in the western USA. It's just as dry as it was in 2002, which was the worst wildfire season ever, and the sky here was full of smoke and ash for days - it looked like Hell.
Of course, wildfires may not be all bad in the grand scheme of things. It's hard to tell... hard to tell what the "grand scheme of things" really is! That's part of what I'm trying to figure out in this diary.
From Thin Ice, where the author was interviewing climate scientist Lonnie Thompson: " There was a time about 3.5 billion years ago when there was no oxygen in the atmosphere, and a kind of anaerobic bacteria occupied all the oceans of the world. They produced oxygen just by living, the same way we produce CO2, and they multiplied until they occupied every part of the earth. But the oxygen they gave off was poisonous to them, so they eventually changed the atmosphere to the point that they killed themselves off [....]"
"I think humans are like every other organism: they try to maximize the system to their advantage, take every resource they can use to make whatever it is they're trying to produce, and they will keep doing it until that resource is no longer available to them. Our economic system is based on that: maximum production. And every country in the world wants to be like the Western countries - same lifestyle, same air-conditioning, same TVs. We have fine universities, we train people to think; but actions speak louder than words, and as long as we stay on this path I don't think we're any smarter than bacteria. We're behaving the same way they did. You can do that until you exceed the boundaries of the system, and then it will collapse."
"You mean the whole system will fall apart?" I asked.
"Oh no, the system will keep working. I'm very optimistic about the system. The system will take care of itself. This is like a cancer growing on the surface. The planet will react in a way as to stop that cancer."
"The earth will stay healthy?"
"Yes. It might be big storms; it might be wiping out Bangladesh or Africa; the world will go on, and there will be creatures that will multiply in that new world. Plants like CO2; maybe the world will be dominated by plants. Whenever a creature exceeds its resource base, its population collapses - think of lemmings - and I think that's ultimately what will happen to humans."
June 28, 2006
Yesterday I talked to Danny Stevenson and Alissa Crans about representations of Lie 2-algebras and Lie 2-groups. We were mainly battling with the puzzle of giving our 2-category of 2-vector spaces a nice tensor product and hom. The last few days I've also been talking with James Dolan about the McKay correspondence and ambidextrous adjunctions between 2-vector spaces.
In the first reported case of fatal hilarity, the Greek fortune-teller Calchas is supposed to have died of laughter on the day he was predicted to die, when the prediction didn't seem to be coming true.
Google has a new mirror site. Make sure to type in your entry backwards.
You can find many other strange things on Wikipedia:
June 30, 2006
I'm gradually gearing up for my trip to Shanghai on Monday July 3rd. This may be my last diary entry for a while, but Lisa has found an apartment with broadband internet access - apparently quite common there - so I should be back in business once we get set up.
It'll be an adventure! My 2003 summer in Hong Kong was great, so I'm not scared, but it will be quite something living in such a huge city. We'll be near Fudan University, not the heart of town. You can see it near the top of this map.
Somehow I got a subscription to Cell magazine. One issue had a neat article on the genetic origins of left-right asymmetry in vertebrates, which I've summarized in the Addendum to week73. But even more cool are these two articles:
The first article describes how bacteria communicate using chemicals. For example, in a process called quorum sensing, bacteria emit traces of a chemical, which rises to a level they can detect only when their population density reaches a certain threshold. The chemical then affects their behavior! For example, a bioluminescent bacterium in the ocean called Vibrio harveyi glows only when it reaches a certain density - and in an extreme case of this phenomenon, a glowing patch of the Indian Ocean 15,000 square kilometers in size was visible from space for three nights!
But the phenomenon of quorum sensing has recently turned out to be far more common in less exotic circumstances. It causes "competence" in Streptococcus, a state in which bacteria can pick up DNA molecules and change their genetic properties. It also controls virulence factor secretion, biofilm formation and sporulation. These are various spooky tricks bacteria like to play....
The article describes many other forms of inter-bacterial communication. For example, bacteria in water send water-insoluble molecules to each other in little packages called vesicles. And, some of these packages are fatal to bacteria of other species!
As if this weren't enough, it turns out that advanced life forms like us - eukaryotes, to be precise - are able to pass on traits not just using their DNA and RNA, but also using a trick called histone methylation. In eukaryotic cells, DNA is wound around proteins called histones. Adding one, two or three methyl groups to these proteins controls whether and how a gene will be expressed in a given cell. This is one way cells in our body get to be very different even though they have the same DNA! It's quite complicated and interesting - and in a surprising twist reminiscent of Lamarckian evolution, a mother can apparently do histone methylation to genes in her child's embryo! So, traits picked up during her life, encoded not in DNA or RNA but in histone methylation, can be passed on to her offspring.
In short, besides genetics we must also study epigenetics - the science of reversible but heritable changes in gene expression that can occur without any changes in our DNA!
Evolution is like a game that life has been playing for billions of years. The strategies in play are surely far deeper than we've been able to fathom so far. We're like kids watching grand masters play chess. We should continue to expect surprises....
For my July 2006 diary, go here.
The [...] spirit will soar eagerly into the heavenly spheres, but rarely stays there: it returns to the workaday world: it insists that ideals shall be translated into action, precept into practice, the spiritual applied to the physical, the abstract to the concrete. - Hugh Schonfield
© 2006 John Baez |
dbb6053ff5d4a8b4 |
Direct measurements of the wave nature of matter
New experimental techniques map out wave properties only known previously from theory.
The estimated wave function of electrons in solid nitrogen.
The heart of quantum mechanics is the wave-particle duality: matter and light possess both wave-like and particle-like attributes. Typically, the wave-like properties are inferred indirectly from the behavior of many electrons or photons, though it's sometimes possible to study them directly. However, there are fundamental limitations to those experiments—namely, some information about the wave properties of matter is inherently inaccessible.
And therein lies a loophole: two groups used indirect experiments to reconstruct the wave structure of electrons. A.S. Stodolna and colleagues manipulated hydrogen atoms to measure their electron's wave structure, validating more than 30 years of theoretical work on the phenomenon known as the Stark effect. A second experiment by Daniel Lüftner and collaborators reconstructed the electronic structure of individual organic molecules through repeated scanning, with each step providing a higher resolution. In both cases, the researchers were able to match theoretical predictions to their results, verifying some previously challenging aspects of quantum mechanics.
Neither a wave nor a particle description can describe all experimental results obtained by physicists. Photons interfere with each other and themselves like waves when they pass through openings in a barrier, yet they show up as individual points of light on a phosphorescent screen. Electrons create orbital patterns inside atoms described by three-dimensional waves, yet they undergo collisions as if they were particles. Certain experiments are able to reconstruct the distribution of electric charge inside materials, which appears very wave-like, yet the atoms look like discrete bodies in those same experiments.
Researchers typically deal with this behavior using wave functions. The wave function is a mathematical description of the external attributes of a particle: its position, momentum, and rotational characteristics. Much of quantum mechanics involves calculating wave functions and their evolution using the Schrödinger equation, named for the same guy famous for the cat thought experiment.
The wave function contains two pieces: an absolute piece called the amplitude and a relative component called the phase. When the amplitude is squared, it gives the probability of the outcome of certain measurements, but the phase is not directly accessible. In other words, there's always an aspect of the wave character that cannot be obtained experimentally without resorting to some kind of cleverness. That's a disappointing proposition for those of us interested in direct comparisons between theory and measurement.
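To make the amplitude-versus-phase distinction concrete, here is a tiny numerical sketch (our illustration, not part of the studies described in this article): multiplying a toy wave function by an overall phase factor leaves every squared-amplitude probability unchanged, which is why that phase cannot be read off directly from such measurements.

```python
# A small illustration (not from the article): measurement probabilities come
# from the squared amplitude, so an overall phase factor applied to a wave
# function leaves every probability unchanged -- the phase is hidden from
# direct amplitude measurements.
import numpy as np

psi = np.array([0.6, 0.8j])                 # a toy two-state wave function
probs = np.abs(psi) ** 2                    # squared amplitude -> probabilities
shifted = np.exp(1j * 1.234) * psi          # same state, different overall phase
print(probs, np.abs(shifted) ** 2)          # identical: [0.36 0.64] [0.36 0.64]
```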
However, full knowledge of the wave function is important for understanding chemical reactions and material properties on the atomic or molecular scale. Understanding at that level of detail is especially significant for the next generation of materials and molecular design.
A Stark contrast
Hydrogen is the simplest of atoms, consisting of just one proton and one electron. That means its wave function can be calculated exactly, as it is by innumerable physics and chemistry students at universities every year as a class exercise. Since its electron is charged, when a hydrogen atom is placed in a uniform electric field (such as exists inside a large capacitor) its wave functions change. That change results in different responses to light, which is known as the Stark effect.
The wave functions in the Stark effect have a peculiar mathematical property, one which Stodolna and colleagues recreated in the lab. They separated individual hydrogen atoms from hydrogen sulfide (H2S) molecules, then subjected them to a series of laser pulses to induce specific energy transitions inside the atoms. By measuring the ways the light scattered, the researchers were able to recreate the predicted wave functions—the first time this has been accomplished.
The authors also argued that this method, known as photoionization microscopy, could be used to reconstruct wavefunctions for other atoms and molecules. Since the Stark effect is a general response to external influences, the technique would be very handy for studying atoms' responses to other electric and magnetic fields—essential for understanding the behavior of materials under a wide variety of conditions.
Just a phase
Lüftner and colleagues took a different approach, examining the wave functions of organic molecules chemically attached (adsorbed) on a silver surface. Specifically, they looked at pentacene (C22H14) and the easy-to-remember compound perylene-3,4,9,10-tetracarboxylic dianhydride (or PTCDA, C24H8O6). Unlike hydrogen, the wave functions for these molecules cannot be calculated exactly; they usually require "ab initio" computer models.
The researchers were particularly interested in finding the phase, that bit of the wave function that can't be measured directly. They determined that they could reconstruct it by using the particular way the molecules bonded to the surface, which enhanced their response to photons of a specific wavelength. The experiment involved taking successive iterative measurements by exciting the molecules using light, then measuring the angles at which the photons were scattered away.
Reconstructing the phase of the wave function required exploiting the particular mathematical form it took in this system. Specifically, the waves had a relatively sharp edge, allowing the researchers to make an initial guess and then refine it as they took successive measurements. Even with this sophisticated process, they were only able to determine the phase up to an arbitrary overall constant—something entirely to be expected from fundamental quantum principles. Nevertheless, they were able to experimentally reconstruct the entire wave function of a molecule; previously, there was no way to check whether our calculated wave functions for such systems were accurate.
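The papers' actual reconstruction procedures are more involved, but the general strategy—start from a guess and repeatedly re-impose the measured amplitudes together with a sharp-edge (support) constraint—can be illustrated with a minimal Gerchberg-Saxton-style error-reduction loop. Everything below (the toy wave function, the "measured" amplitudes, the support region) is invented for illustration and is not the authors' algorithm or data.

```python
# A minimal sketch of iterative phase retrieval in the error-reduction /
# Gerchberg-Saxton spirit: alternately enforce the measured amplitudes and a
# known sharp-edged support.  Toy data only; not the authors' procedure.
import numpy as np

n = 256
support = np.zeros(n, dtype=bool)
support[96:160] = True                              # the sharp-edged region
true_psi = np.zeros(n, dtype=complex)
true_psi[support] = np.hanning(64) * np.exp(1j * np.linspace(0.0, np.pi, 64))

measured_amp = np.abs(np.fft.fft(true_psi))         # stand-in for measured data

psi = support.astype(complex)                       # crude starting guess
for _ in range(500):
    spectrum = np.fft.fft(psi)
    # keep the measured amplitude, keep the current phase estimate
    spectrum = measured_amp * np.exp(1j * np.angle(spectrum))
    psi = np.fft.ifft(spectrum)
    psi[~support] = 0.0                             # enforce the support constraint

overlap = abs(np.vdot(true_psi, psi)) / (np.linalg.norm(true_psi) * np.linalg.norm(psi))
print(f"overlap with the true toy wave function: {overlap:.3f}")
```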
Quantum physics of solace
When we discuss quantum physics, the weirdness of the theory is often emphasized. However, quantum mechanics is the basis of most of modern technology, and these experiments highlight how much we actually understand about it. The wave functions generated by these experiments are exact matches to theoretical predictions. The physics works as expected.
In both the molecular and hydrogen cases, the method used to reconstruct the wave functions could be applied to other systems. As researchers work to understand chemical reactions and material properties on the molecular and atomic levels, such techniques would be very powerful, perhaps leading to new insights about how to control them.
Physical Review Letters, 2013. DOI: 10.1103/PhysRevLett.110.213001 and
PNAS, 2013. DOI: 10.1073/pnas.1315716110 (About DOIs).
|
1b787286be9b8b28 | The Hartree-Fock method
From Scholarpedia
Paul-Henri Heenen and Michel R. Godefroid (2012), Scholarpedia, 7(10):10545. doi:10.4249/scholarpedia.10545, revision #129749
Curator: Paul-Henri Heenen
The Hartree-Fock (HF) method is a variational method that provides the wave function of a many-body system assumed to be in the form of a Slater determinant for fermions and of a product wave function for bosons. It treats correctly the statistics of the many-body system, antisymmetry for fermions and symmetry for bosons under the exchange of particles. The variational parameters of the method are the single-particle wave functions composing the many-body wave function. We will focus the present article on the Hartree-Fock method for fermionic systems.
Let a many-body system be described by a non-relativistic Hamiltonian composed of a one-body term, denoted \(t\), representing the kinetic energy and possibly a central potential \(V\) like the Coulomb attractive potential between the electrons and the nucleus in an atom, and a two-body interaction \(v\): \[\tag{1} \hat{H}= \sum_{\alpha \beta} t_{\alpha \beta} a^{\dagger}_{\alpha} a_{\beta} +\frac{1}{2}\sum_{\alpha \beta\gamma \delta} v_{\alpha \beta \gamma \delta} a^{\dagger}_\alpha a^{\dagger}_{\beta} a_{\delta} a_{\gamma} \; , \] where the indices \( \alpha \beta \gamma \delta \) label the single particle states in a complete orthonormal basis. The one and two-body matrix elements are denoted by \( t_{\alpha \beta}= (T_{\alpha \beta}+V_{\alpha \beta})=\langle \alpha |T +V|\beta \rangle \) and \( v_{\alpha \beta \gamma \delta}=\langle \alpha \beta|v|\gamma \delta \rangle \). Three-body terms can be also included requiring only straightforward changes in the equations. This system is characterized by its one-body density whose matrix elements \(\rho_{\beta \alpha }\) are given by\[\tag{2} \rho_{\beta \alpha } =\langle \Phi | a^{\dagger}_{\alpha } a_{\beta}|\Phi \rangle \; \] For a system of \(A\) fermions (note: This notation is natural in nuclear physics, since \(A=N+Z\), where \(N\) and \(Z\) are the numbers of neutrons and protons, respectively. In atomic physics however, \(A\) has nothing to do with the so-called mass number \(A\) and must be understood in the present context as the number of electrons \(N_e\)), a Slater determinant composed of \(A\) orbitals \({\{\vert \phi_\alpha \rangle = a^{\dagger}_{\alpha} |0 \rangle ; \alpha = 1,\ldots,A \}}\) chosen in the complete basis can be written in second quantization: \[ \tag{3} \left |\Phi \right \rangle = \prod_{\alpha=1}^A a^{\dagger}_{\alpha} \left | 0 \right \rangle \; . \] The density operator associated with a Slater determinant has a very simple form\[ \tag{4} \hat {\rho} =\sum _{\alpha=1}^A | \alpha \rangle \langle \alpha |= \sum _{\alpha=1}^\infty n_{\alpha} | \alpha \rangle \langle \alpha | \; , \] which shows that the operator \(\hat {\rho}\) has \(A\) eigenvalues equal to 1, all the others being zero. Note also the idempotence property of the density operator\[ \tag{5} \hat {\rho}^2 =\hat {\rho} \; , \] demonstrating that the operator \(\hat{\rho}\) is a projector. One can show that if a density matrix has the property that it is equal to its square, the associated wave function is a Slater determinant (Ring & Schuck, 2000).
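As a small numerical illustration of eqs. (4) and (5) (ours, not part of the original article), one can build the one-body density matrix from a set of randomly chosen orthonormal orbitals and verify that it is an idempotent projector with trace equal to the particle number:

```python
# Numerical illustration of eqs. (4)-(5): the one-body density matrix of a
# Slater determinant built from A occupied orthonormal orbitals is a projector
# with trace A.  The "orbitals" here are just random orthonormal vectors.
import numpy as np

rng = np.random.default_rng(0)
dim, A = 12, 4                                   # basis size, particle number

Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
occupied = Q[:, :A]                              # A occupied orthonormal orbitals

rho = occupied @ occupied.conj().T               # eq. (4)
print(np.allclose(rho @ rho, rho))               # eq. (5): idempotent -> True
print(np.isclose(np.trace(rho), A))              # trace = particle number -> True
```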
The exact ground state corresponding to the Hamiltonian (1) can in principle be determined by using a linear combination of all possible Slater determinants constructed for A fermions with the single particle states of the basis. However, this combination should already be infinite for a two-fermion system and a truncation scheme has to be introduced. The simplest approximation is a single Slater determinant. The ground state energy might be poorly approximated if an arbitrary single-particle basis were chosen. The Hartree-Fock (HF) approximation enables one to determine the best—in the meaning of giving the lowest energy—set of single particle states that is optimized for each Hamiltonian and for a given number of particles. Since a single Slater determinant corresponds to non-interacting particles, the method is also called the independent-particle model. Note that the reduction of the many-body wave function to a single determinant inevitably breaks some symmetries of the original Hamiltonian. One cannot for instance construct a Slater determinant that is invariant by translation except for the trivial case of single-particle states that are plane waves. The HF single-particle states are obtained by a linear transformation of the orthonormal basis with creation operators \(a^{\dagger}_{\alpha}\) to a new basis with operators \(b^{\dagger}_{i}\). The matrix element of the Hamiltonian for a Slater determinant composed of orbitals labeled by an index \(i\) can be calculated explicitly in terms of \( \rho \) (Ring & Schuck, 2000) and (Blaizot & Ripka, 1986). \[ \tag{6} E^{HF} [\rho] = \left \langle \Phi | \hat{H} | \Phi \right \rangle = \sum_{ij} \left ( T_{ij} + V_{ij} \right ) \rho_{ji} + \frac{1}{2} \sum_{ijkl} \rho_{ki} \bar{v}_{ijkl} \rho_{lj} \] where \(\bar{v}_{ijkl}\) is the antisymmetrized 2-body matrix element \((v_{ijkl}-v_{ijlk})\). The HF wave function is the Slater determinant for which the expectation value of the Hamiltonian is stationary with respect to variations in the single particle wave functions \(\delta\phi_i\) \[ \tag{7} \delta E = \delta \left \langle \Phi \vert \hat{H} \vert \Phi \right \rangle = 0 \; . \] subject to orthonormality constraints \(\langle \phi_i \vert \phi_j \rangle = \delta_{ij} \) that are imposed to keep a simple expression of the expectation value (Froese Fischer, 1977). Varying the energy functional (6), with respect to the single particle wave functions and introducing a matrix of Lagrange multipliers \(\epsilon_{i,j}\) associated with the orthonormality constraints, one obtains the \(A\) Hartree-Fock integro-differential equations in the configuration space (Ring & Schuck, 2000), \[ \tag{8} \begin{array}{lcl} \epsilon_k \phi_k \left( \vec{r} \right) & = & \left( -\frac{\hbar^2}{2m}\Delta + V \left( \vec{r} \right) \right) \phi_k \left( \vec{r} \right) + \left( \int d^3 \vec{r}^{\prime} v( \vec{r} - \vec{r}^{\prime} ) \sum_{j=1}^A | \phi_j \left( \vec{r}^{\prime} \right) |^2 \right) \phi_k \left( \vec{r} \right) \\ & - & \sum_{j=1}^A \left( \int d^3 \vec{r}^{\prime} v (\vec{r} - \vec{r}^{\prime} ) \phi_j \left( \vec{r}^{\prime} \right)^* \phi_k \left( \vec{r^{\prime}} \right) \right) \phi_j (\vec{r}) - \sum_{j=1, j\neq k}^{A} \epsilon_{k,j} \phi_j (\vec{r}) \; . \end{array} \] The diagonal element of the matrix of Lagrange multipliers, \(\epsilon_{k,k}\) has been rewritten \(\epsilon_{k}\) for simplicity. Each of these equations has a form similar to a Schrödinger equation for the single-particle states. 
The second term, on the right-hand side, called the Hartree potential, is the average potential: \[\tag{9} U(\vec{r}) = \int d^3 \vec{r}^{\prime} v \left( \vec{r} - \vec{r}^{\prime} \right) \sum_{j=1}^A \left | \phi_j \left( \vec{r}^{\prime} \right ) \right |^2 \] which has the simple interpretation of the potential generated by the density distribution of the particles. The third term of equation (8) called the Fock term is the exchange potential. Both terms define the mean field. The last term arises from the orthogonality constraints. The HF equations constitute a set of coupled integro-differential equations and are therefore not trivial to solve. These equations form a self-consistent problem in the sense that the wave functions determine the mean-field which in turn determines the wave functions. In practice, these equations are solved using an iterative procedure. The HF Hamiltonian is a one-body operator: \[ \tag{10} \hat{h}^{HF} = \sum_{lk} h_{lk}^{HF} a^{\dagger}_{l} a_{k} \] that is defined by equation (8). Since there is an equivalence between a Slater determinant and a density matrix that is a projector, the HF hamiltonian can equivalently be determined by a variation of the energy with respect to the matrix elements of \(\rho\): \[ \tag{11} h_{lk}^{HF} = \frac{\partial E^{HF}[\rho]}{\partial \rho_{kl}} = t_{lk}+ \sum_{ij=1}^A (\bar{v}_{likj}\rho_{ji}) \] From the symmetry of the interaction, the HF operator is hermitian. One can easily show that the Hartree-Fock Hamiltonian and the density operator commute and can be diagonalized simultaneously. However, this requires that one can define an orthonormal set of eigenvectors of \(h ^{HF}\), which is possible only if the non-diagonal elements of the matrix of Lagrange multipliers \(\epsilon\) vanish. One can then find a set of orbitals that diagonalizes the Hartree-Fock Hamiltonian : \[ \tag{12} h^{HF} | i \rangle = \epsilon_i | i \rangle \; , \] defining a single-particle basis with corresponding single-particle energies \(\{\epsilon_i \}\): \[\tag{13} h_{ji}^{HF}=\epsilon_i \delta_{ji} \; . \] Equations (12) are sometimes referred to as the canonical form of the Hartree-Fock equations. In the HF approximation, the many-body Slater determinant is built on the wave functions corresponding to the \(A\) lowest eigenvalues of \(\hat{h}^ {HF} \) to which the eigenvalues \(1\) of the density operator are attributed. These states are the occupied \((o)\) or hole \((h)\) states while all the others corresponding to the eigenvalues \(0\) of the density matrix are virtual \((v)\) (also called unoccupied) or particle \((p)\) states. The diagonal form of the density matrix implies that the matrix elements of the HF Hamiltonian vanish between virtual and occupied states: \[\tag{14} \langle v \vert h^{HF} \vert o \rangle = 0 \; . \] Combining (14), with the completeness relation, \(\sum_i \vert i \rangle \langle i \vert = 1\), it is easy to show that the HF operator only produces occupied orbitals when acting on an occupied orbital: \[\tag{15} h^{HF} \vert o \rangle = \sum_{o'} \vert o' \rangle \langle o' \vert h^{HF} \vert o \rangle \; . \] The energy given by eq.(6) can be rewritten in the HF basis. 
Taking into account that the density matrix is diagonal in this basis, with eigenvalues equal to \(1\) and \(0\), one has: \[ \tag{16} E^{HF} = \sum_{i=1}^A t_{ii} + \frac{1}{2} \sum_{i,j=1}^A \bar{v}_{ij,ij} \] One can also use the single particle energies: \[\tag{17} \epsilon_i = t_{ii} + \sum_{j=1}^A \left ( \bar{v}_{ijij} \right ) \] to rewrite the total energy in the form: \[ \tag{18} E^{HF} = \sum_{i=1}^A \epsilon_{i} - \frac{1}{2} \sum_{i,j=1}^A \bar{v}_{ij,ij} \] One sees that the total energy is not equal to the sum of the single-particle energies. Indeed, these energies include a term generated by the two-body interaction of a given particle with all the others. When the single-particle energies are added, these interactions are counted twice, leading to the second term in eqn. (18). The two forms of the energy calculated according to eqns. (16) and (18) are equal only for the solution of the HF equations, and this equality constitutes a very stringent test of convergence. For some systems, the two-body interaction depends on the local density of particles. This is often the case in nuclear physics, where this density dependence can be justified by the elimination of the very repulsive core of most bare nucleon-nucleon interactions (for a recent account of this problem and the clarification of some misconceptions, see (Bogner, Furnstahl & Schwenk, 2010)). The Skyrme forces, today more often labeled Skyrme energy density functionals, are a popular example of such density-dependent interactions (Bender & Heenen & Reinhard, 2003). In this case, an extra term appears in the HF hamiltonian defined in equation (11) that is given by \(\frac{1}{2} \sum_{ijpq}\langle pq \vert \frac{\partial v}{\partial \rho_{kl}} \vert ij \rangle \rho_{ip}\rho_{jq}\). The HF equations (8) have been formulated without any assumptions on the symmetry properties of the single-particle wave functions. They are the basic equations of the so-called unrestricted Hartree-Fock (UHF) formalism, a terminology that is not used in nuclear physics but is common in atomic and molecular physics (Shavitt & Bartlett, 2009). The only variational quantity of the HF method is the total energy. There is therefore a priori no reason to impose on the Hartree-Fock wave function the symmetries of the exact Hamiltonian if breaking symmetries lowers the HF total energy. A state of broken symmetries does not carry the quantum numbers of the eigenstates of the Hamiltonian (Blaizot & Ripka, 1986, Ring & Schuck, 2000). The interest of symmetry breaking is that it allows one to incorporate many-body correlations without losing the simple independent-particle picture. In nuclear physics it has led to the very powerful concept of deformed nuclei, which makes it possible to describe many experimental data in an economical way (Hamamoto & Mottelson, 2011). Symmetries of the exact state are usually imposed during the energy optimization in atomic (Froese Fischer, 1977) and molecular (Helgaker, Jørgensen and Olsen, 2000) HF calculations. In this formalism, known as the restricted Hartree-Fock (RHF) theory, the total non-relativistic wave function used in the self-consistent field variational process is an eigenfunction of the total \(\bf{S}^2\) and projected \(S_z\) spins, of the total \(\bf{L}^2\) and projected \(L_z\) angular momenta for atoms, and is required to transform as an irreducible representation (IR) of the appropriate point group for molecules. This symmetry-adaptation is accomplished by:
1. requiring the atomic or molecular \(m_s = \pm 1/2\) spin-orbitals to have the same spatial parts,
2. the spatial orbitals to transform according to the IR of SO(3) for atoms and of the molecular point group for molecules,
3. to write if necessary the wave function as a Configuration State Function (ie. a symmetry-adapted linear combination of Slater determinants), rather than a single Slater determinant
By assuming a spherical symmetry, the atomic RHF equations are rewritten in spherical coordinates and reduced to a system of coupled radial equations, one for each \(nl\)-subshell, independently of the \(2(2l+1)\) projection states \((m_l m_s)\) of the orbital angular momentum \(l\) and spin \(s\) of the particles. This RHF has its natural extension in the relativistic Dirac-Fock scheme (Grant, 2007) for which the orbital energies are degenerate for each value of \(j\) as a function of \(m\). The Hartree-Fock wave function has some interesting properties, as first illustrated by Koopmans' theorem. Making the assumption that the mean-field is unchanged by the addition or the removal of a single fermion to a system with an even number of particles, the wave-function of the odd system is given by: \[ \tilde{\Phi}_{o} = b_o | \Phi ^{HF} \rangle \] for the removal of a particle. The only change \(\delta \rho_o\) in the density matrix is then the removal of the contribution coming from the occupied state \(o\). The energy of the odd system is then: \[\tag{19} E^{A-1}_o = E[\rho -\delta \rho_o] \] \[\tag{20} E^{A-1}_o = E^A - \sum_{ij}h_{ij} (\delta \rho_o)_{ij} +\frac{1}{2} \sum_{ijkl} \frac{\delta ^2 E}{\delta \rho_{ij}\delta \rho_{kl}} \delta (\rho_o)_{ij}\delta (\rho_o)_{kl} \] Expressed in the HF basis, the second term of this equation is the HF single particle energy of the orbital $o$ and the third term vanishes because of antisymmetry. Note that extra term appears for density-dependent interactions. One concludes from this expression that the energy required to remove a particle from the state \(o\) is equal to \(\epsilon_o\), ie. the corresponding HF single-particle energy. The derivation of Koopmans' theorem supposes that one can neglect the rearrangement of the mean field due to the removal or the addition of a particle. Its validity is limited, especially in nuclear physics but it has been widely used for estimating ionization potentials or electron affinities in atomic and molecular systems. Brillouin's theorem is another property of the Hartree-Fock solution that can explain its relatively high quality in atomic and molecular electronic structure calculations. It implies that there is no first-order mixing of the Hartree-Fock solution with states obtained from single substitutions of the type \[ \tilde{\Phi}_{o \rightarrow v} = b^{\dagger}_v b_o | \Phi ^{HF} \rangle \; , \] demonstrating that the HF method partially takes into account the particle-hole part of the interaction. One can show that the off-diagonal Lagrange multipliers \(\epsilon_{ij}\) can be eliminated when the fermion system corresponds to a closed shell system in which all j-shells are fully occupied. In practice, this case is the only one in nuclear physics for which the HF approximation is valid (see below). In atomic and molecular physics, the symmetry restrictions imposed in the RHF formalism to describe open-shell configurations for electronic systems with point-group symmetry have some unavoidable consequences (Nesbet, 2005). With this respect, it is worthwhile to stress that the off-diagonal Lagrange multipliers cannot always be eliminated, as it has been shown in the very first applications of the HF approximation for the Lithium atomic ground state, yet described by a single Slater determinant (Slater, 1960). However, replacing the differential equations by systems of non-linear equations and generalized eigenvalue problems, projection operators can be applied to solve this issue (Froese Fischer, 2011). 
The use of symmetry-adapted N-electron functions in the RHF approximation also requires some adaptation of Brillouin's and Koopmans' theorems (Froese Fischer, 1977). The Hartree-Fock method has some intrinsic limitations, mostly due to the basic assumption that particles move independently in some average potential produced by all the particles. In nuclear physics, many-body correlation effects are captured through the symmetry breaking of the UHF solution, in the sense that a Slater determinant built on deformed single-particle wave functions can be expanded in terms of Slater determinants in a spherical basis and includes highly excited spherical particle-hole excitations. This approach has the advantage of preserving the simple picture of independent particles. The rotational symmetry can then be restored afterwards by projecting the deformed mean-field wave function on good total angular momentum, leading to a projection-after-variation method. For atoms and molecules, however, the variation-after-projection method is more often adopted, and electron correlation is strictly defined as the difference between the RHF energy and the exact eigenvalue of the non-relativistic Schrödinger equation for the many-body system. Note, however, that some electron correlation is implicitly included in the HF approximation through the use of an antisymmetric wave function, which prevents two electrons with the same spin projection from occupying the same region of space. This effect is known as the Fermi correlation.
The Hartree-Fock wave function can be obtained by solving numerically the radial integro-differential HF equations (Froese Fischer, 1977). One can also use algebraic approaches. They consist in expanding the one-fermion orbitals in some suitable set of analytical basis functions, reaching the numerical Hartree-Fock limit provided the basis is large enough and complete with respect to square-integrable functions. For molecular polyatomic systems, the molecular orbitals are expanded in a set of atomic orbitals whose expansion coefficients are the variational parameters. In this scheme, the Hartree-Fock equations are reformulated as the Roothaan-Hall self-consistent field equations (Helgaker, Jørgensen and Olsen, 2000).
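The following sketch illustrates, under simplifying assumptions, how the self-consistent iteration of eqs. (11)-(13) proceeds in an orthonormal basis, with the agreement between the energy expressions (16) and (18) used as the convergence test mentioned above. The one-body matrix t and the antisymmetrized two-body elements are random placeholders rather than a physical interaction; in a non-orthogonal atomic-orbital basis, the diagonalization step would become the generalized Roothaan-Hall eigenvalue problem.

```python
# Schematic self-consistent HF iteration of eqs. (11)-(13) in an orthonormal
# basis, with |E(16) - E(18)| as a convergence diagnostic.  The matrices t and
# vbar are random placeholder data, not a physical interaction.  In a
# non-orthogonal atomic basis the diagonalization below becomes the generalized
# Roothaan-Hall problem F C = S C eps (e.g. scipy.linalg.eigh(F, S)).
import numpy as np

rng = np.random.default_rng(0)
dim, A = 8, 3                                   # basis size, particle number

t = rng.standard_normal((dim, dim))
t = 0.5 * (t + t.T)                             # hermitian one-body part
v = 0.1 * rng.standard_normal((dim,) * 4)
v = 0.5 * (v + v.transpose(1, 0, 3, 2))         # <ab|v|cd> = <ba|v|dc>
v = 0.5 * (v + v.transpose(2, 3, 0, 1))         # <ab|v|cd> = <cd|v|ab>
vbar = v - v.transpose(0, 1, 3, 2)              # antisymmetrized elements

rho = np.zeros((dim, dim))
rho[:A, :A] = np.eye(A)                         # starting guess for the density

for iteration in range(200):
    # eq. (11): h^HF_{lk} = t_{lk} + sum_{ij} vbar_{likj} rho_{ji}
    h_hf = t + np.einsum('likj,ji->lk', vbar, rho)
    eps, C = np.linalg.eigh(h_hf)               # eqs. (12)-(13)
    occ = C[:, :A]                              # occupy the A lowest orbitals
    new_rho = occ @ occ.T

    two_body = 0.5 * np.einsum('likj,ji,kl->', vbar, new_rho, new_rho)
    e16 = np.einsum('lk,kl->', t, new_rho) + two_body       # eq. (16)
    e18 = eps[:A].sum() - two_body                           # eq. (18)

    if np.allclose(new_rho, rho, atol=1e-10):
        print(f"converged in {iteration} iterations: E = {e16:.6f}")
        print(f"|E(16) - E(18)| = {abs(e16 - e18):.2e}")     # ~0 at self-consistency
        break
    rho = new_rho
else:
    print("plain iteration did not converge; density mixing/damping would help")
```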
In addition to the Fermi correlation captured in the HF model, the instantaneous correlation in the electrons' motions due to their mutual repulsion, which is neglected in the average-field picture of the HF model, is often crucial for an accurate description of electronic phenomena in atoms, molecules, and solids. For atomic and molecular systems, the HF solution often accounts for more than 99% of the total electronic energy and has an overlap of 95% with wave functions obtained using more sophisticated methods. The remaining 5% in the latter case is difficult to capture and can dramatically affect observables other than energies, such as isotope shifts, hyperfine structures and transition probabilities (Froese Fischer, Brage and Jönsson, 1997).
The Hartree-Fock method is often applied to get an approximate description of excited states that are not the lowest of their symmetry (Froese Fischer, 1977). In this case, one determines a stationary energy through the selection of the orbital solution having the desired number of radial nodes (Froese Fischer, Brage and Jönsson, 1997). Although the accuracy required today in the description of nuclei, atoms and molecules cannot be obtained by the HF method, the latter is often used as the starting point of several more elaborated methods (see for instance (Shavitt and Bartlett, 2009; Ring and Schuck, 2000)). Amongst the latter, let us quote in particular:
1. the configuration interaction (CI) method that diagonalizes the total Hamiltonian in the configuration space enlarged by including multiple excitations \(( b^{\dagger}_{v'} b^{\dagger}_v \ldots b_{o'} b_o \ldots ) | \Phi ^{HF} \rangle \) from the single Slater determinant or reference configuration state function, without further orbital optimization,
2. the multiconfiguration approach (MCHF/MCSCF) (Froese Fischer, 1977, Helgaker, Jørgensen and Olsen, 2000, Grant, 2007), optimizing both the orbital set and the CI mixing coefficients, particularly adapted to describe the electron correlation due to near degeneracies that make the single-configuration HF model inappropriate,
3. the generator coordinate method, mainly used in nuclear physics, has some similarity with the two previous methods. The total Hamiltonian is diagonalized in a basis defined by mean-field wave functions generated by a constraint on a collective variable, like the quadrupole moment of a nucleus (Ring and Schuck, 2000 and Bender & Heenen & Reinhard, 2003). This method also makes it possible to restore symmetries that may be broken at the mean-field level of approximation.
4. the many-body perturbation (MBPT) and Coupled-Cluster theories (Shavitt and Bartlett, 2009), also existing in the relativistic version (Lindgren, 2011),
5. the HF-BCS and Hartree-Fock-Bogoliubov methods, which make it possible to introduce the correlations due to superconductivity,
6. the random phase approximation (RPA) that has been developed to describe collective excitations that are a coherent superposition of single particle excitations,
7. the density functional theory that leads to equations that have a form similar to mean-field equations but that incorporates correlations not included in an HF scheme. It has been shown to be very successful in condensed matter physics but also in computational chemistry,
8. the relativistic extensions of the non-relativistic framework presented in this article: the Dirac-Fock method for nuclei, atoms and molecules, and the Relativistic Mean Field method for nuclei.
• Ring, P and Schuck, P (2000). The Nuclear Many-Body Problem Springer-Verlag, Berlin, Heidelberg. ISBN 3-540-09820-8.
• Blaizot, J-P and Ripka, G (1986). Quantum Theory of Finite Systems MIT Press, Cambridge. ISBN 0262022141.
• Froese Fischer, C (1977). The Hartree-Fock Method for Atoms: A Numerical Approach John Wiley and Sons, New York. ISBN 047125990X.
• Bogner, SK; Furnstahl, RJ and Schwenk, A (2010). Progress in Particle and Nuclear Physics 65: 94.
• Bender, M; Heenen, P-H and Reinhard, P-G (2003). Reviews of Modern Physics 75: 121.
• Shavitt, I and Bartlett, R-J (2009). Many-Body Methods in Chemistry and Physics: MBPT and Coupled-Cluster Theory Cambridge Molecular Science Cambridge University Press, Cambridge. ISBN 9780521818322
• Helgaker, T; Jørgensen, P and Olsen, J (2000). Molecular Electronic-Structure Theory Wiley, Chichester. ISBN 0471967556.
• Grant, I-P (2007). Relativistic Quantum Theory of Atoms and Molecules Springer Series on Atomic, Optical, and Plasma Physics, Volume 40 Springer, New York. ISBN 978-0-387-34671-7
• Nesbet, R-K (2005). Variational Principles and Methods in Theoretical Physics and Chemistry Cambridge University Press, Cambridge. ISBN 0521675758
• Slater, J-C (1960). Quantum Theory of Atomic Structure, Vol. 2 Cambridge University Press, New York.
• Froese Fischer, C (2011). Computer Physics Communications 182: 1315.
• Froese Fischer, C; Brage, T and Jönsson, P (1997). Computational Atomic Structure: An MCHF Approach, Institute of Physics, Bristol. ISBN 0750304669
• Lindgren, I (2011). Relativistic Many-Body Theory: A New Field-Theoretical Approach Springer Series on Atomic, Optical and Plasma Physics, Volume 63 Springer, Berlin. ISBN 1441983082
|
36104e5acce89d93 |
DOI: 10.4236/jss.2017.51001
1. Introduction
“Schrödinger’s Cat” is a thought experiment proposed by the Austrian physicist Schrödinger in 1935 to illustrate the idea of a “superposition state” in quantum mechanics. In this experiment the protagonist is a cat used as the experimental object, and until the experiment is over no one knows whether the cat survives or dies. This captures a central feature of quantum mechanics: in a quantum system, an atom or photon can exist in a combination of states at the same time, and these different states may correspond to different, even contradictory, results. The experiment is carried out in a closed box containing the cat and a small amount of radioactive material. Within about an hour there is a 50% probability that the radioactive material decays and releases a gas that kills the cat, and a 50% probability that it does not decay and the cat survives. According to classical physics, one of these two outcomes must already have happened inside the box, and an outside observer can learn which only by opening the box. In quantum mechanics, however, while the box is closed the whole system remains in an indeterminate state in which the cat is both dead and alive; only when the box is opened and the external observer performs a “measurement” do we know the definite outcome. At that moment the cat’s wave function immediately collapses from the superposed state to an eigenstate. In the real, concrete, macroscopic world we never find a cat that is simultaneously dead and alive. Before the lid is opened, is the cat dead, or alive, or neither, or both? This question is a serious challenge to our existing ideas.
In the quantum world many concepts of classical physics no longer apply; a typical example is Thomas Young's double-slit interference experiment, which was originally devised to prove that light is composed of waves. The double-slit experiment tells us that, because of the observable interference effects, superposition occurs at the subatomic level: a single particle can occupy several positions at the same time, each with a certain probability. In everyday reality, by contrast, we never see an object that exists at both A and B.
The Copenhagen interpretation adheres to a dualism in physical theory and, in that sense, lacks thoroughness. It holds that there is a strict boundary between the microscopic quantum world and the everyday macroscopic world, so that we need quantum mechanics to describe the microscopic world and classical mechanics to explain the macroscopic one. As for the relationship between the two, the Copenhagen view is that access to the microscopic world requires macroscopic instruments and the special role of the observer; the status of classical mechanics is therefore more basic, while quantum mechanics is fragile, non-fundamental, and applicable only to closed microscopic systems. Once the microscopic world is measured, the defining property of the micro-object―quantum superposition―disappears through a “random collapse of the wave function” and a definite observation result is obtained. In the transition from the microscopic to the macroscopic world the wave function collapses randomly, and the wave function is regarded as a probability wave. What causes the collapse is ultimately traced back to the special status of the observer; the problem is thus shifted onto subjective, even mental, factors, turning the quantum measurement problem into a philosophical one.
According to the Copenhagen interpretation, if a “macroscopic cat” is in an undetermined coherent superposition of dead and alive, then the cat's life and death do not exist objectively and independently in the laboratory but depend on the experimenter. The cat's wave function collapses into a dead cat or a live cat only when someone opens the box to reveal the result; before the box is opened the cat remains in the superposed state of dead and alive. This paradoxical contrast between quantum superposition and everyday observation puts the Copenhagen interpretation in an awkward position.
Thus, a large wave of discussion arose about whether the “Copenhagen interpretation” of quantum measurement is correct and whether quantum mechanics applies to both the micro and the macro world. Different scientists and scholars have put forward their own views, and the many-world interpretation is one of the most important. In 1957, Everett first proposed the “relative state” interpretation of quantum measurement in his doctoral dissertation on the relative-state formulation of quantum mechanics; in modern times it has gained recognition from many scientists, and its proposal and development have challenged the orthodox status of the “Copenhagen interpretation”, providing a new way to approach the problems of quantum mechanics. In the Copenhagen interpretation of Heisenberg and Bohr, we do not see superpositions in the macroscopic world because they have already collapsed upon measurement; Everett, by contrast, proposed another idea: the superposition state does affect our world, we just do not notice it. As he pointed out, the mathematics of quantum theory states that when we encounter a particle in a superposition and say it is there, the superposition also acts on ourselves, dividing us into a person who sees the particle here and a person who sees the particle there. In fact, from the point of view of later generations, what Everett proposed is that in quantum physics a universe can divide into parallel coexisting “multiple worlds”. At the same time, experiments are being explored: physicists hope that future experiments will more directly test modified Schrödinger equations that build in wave-function collapse, but unfortunately our experimental capacity in this respect is still far from sufficient. Although some scientists have ambitious plans to look for evidence of superposition collapse in macroscopic objects, such as those containing a million particles, the current record for the number of particles in a quantum superposition experiment is only about 1000.
As an important aspect of the philosophy of physics, the problems of quantum mechanics have always attracted great attention; around them lie, on the one hand, the measurement problem and, on the other, the problem of interpreting quantum mechanics. Before measurement, a quantum system evolves according to the Schrödinger equation, a process that is deterministic and reversible; upon measurement the quantum system changes abruptly, the superposition collapsing randomly into an eigenstate, a process that is non-deterministic and irreversible. When we associate the microscopic state in an experiment with a macroscopically visible result, the determinism of the evolution described by quantum mechanics and the non-determinism of the measurement process are magnified into the macroscopic world. It then seems that the state of the object is determined by the observer's subjective measurement: the cat is alive or dead only once someone opens the cage, and before that it is in an unknown state between life and death, which is contrary to common-sense experience. This paradoxical situation calls for a scientific explanation that resolves the puzzlement and properly describes the relationship between measuring instruments, measured systems, and the experimenter; that is what an interpretation of quantum mechanics must provide. So what is the measurement problem? In short, in the quantum world particles exist in superpositions: an electron, for example, when not being measured, exists in a superposition of different positions, momenta, and spins. Once a measurement is completed, however, only one definite result is obtained; that is, only one of the states in the superposition is found, not all of them. This is very different from the macroscopic world, in which superposition states are never observed. The Schrödinger equation describes the evolution of a quantum system's wave function over time, and that evolution is deterministic and reversible in time. In the measurement process, however, the superposition collapses into a single state, breaking the mathematical continuity of the wave-function evolution. The Copenhagen treatment of the measurement problem can be summarized in two main points: the macroscopic and microscopic realms are separated by nature and follow different laws; and the collapse is given only a probabilistic interpretation, its essence remaining unknown.
2. The “Multi-World Interpretation” Theory and Its Development
From 1927, when Bohr and Heisenberg presented the famous “Copenhagen interpretation”, until Everett proposed the many-world interpretation of quantum mechanics in 1957, the “Copenhagen interpretation” held an orthodox position. From around 1950, Einstein's questioning of the principle of complementarity attracted the attention of many philosophers, and Schrödinger publicly questioned the principle at a Berlin seminar. In 1952 the United States physicist Bohm proposed the hidden-variable theory, which caused a great sensation in physics and had a great impact on the Copenhagen interpretation. In the late 1950s Gunther Ludwig presented a thermodynamic explanation, treating the measuring apparatus as a thermodynamic system so that measurements in quantum mechanics could have definite results. More importantly, in the 1950s physicists began to focus on cosmology and general relativity and wanted to use quantum mechanics to address the problem of gravity, but the principle of complementarity advocated by the Bohr-led “Copenhagen interpretation” could not solve these problems.
In 1957, Everett first set out the “relative state” formulation in his doctoral dissertation on quantum mechanics, proposing the relative-state interpretation of quantum measurement (Everett, 1957) [1]. At the time the theory attracted little attention in the physics community, winning only the support of his advisor Wheeler; more than a decade of silence led the famous historian of quantum mechanics Max Jammer to call it one of the “best kept secrets of this century (the 20th century)” (Jammer, 1987) [2]. At his advisor's request, Everett visited Bohr in Copenhagen, but to Bohr and the rest of the Copenhagen school Everett's point of view was nothing more than heresy. Everett did not want to express a rebellion against traditional quantum theory; he hoped to provide a new, more comprehensive theoretical explanation. The new theory is not based on any radical departure from the traditional formalism: the special hypothesis for dealing with observations in the old theory is simply dropped, so the modified theory gains a new character. In the previous formal system of quantum mechanics, any interpretation had to acknowledge and explain the phenomenon of wave-packet collapse in the measurement process, and it was from this point that Everett re-formulated quantum mechanics. In explaining relative states, he combined the macroscopic and microscopic worlds in his account of the measurement problem, in order to overcome the traditional interpretations' separation of micro and macro. He treats the system, the measuring instrument, and the observer together as one quantum system and describes it with a universal wave function, so that macroscopic objects are also included in the quantum description. He assumes that all systems follow the Schrödinger equation and that wave-function collapse does not occur; by eliminating “wave-packet collapse” he avoids the non-determinism of the quantum world and preserves determinism, so that the relative-state interpretation gains advantages in both physics and philosophy.
Because researchers understood Everett's branching of the world differently, various developments followed. In 1973 DeWitt and Graham developed the EWG theory on the basis of Everett's doctoral-thesis draft; in the late 1980s Squires developed a many-views interpretation; Albert and Loewer proposed the many-minds interpretation; Griffiths put forward the consistent-histories interpretation; Gell-Mann and Hartle proposed the decoherent-histories interpretation; and so on. In the relative-state interpretation, the existence of the different branches of a quantum superposition is “relative”. In the EWG theory the world splits, and there are many different worlds in the universe. In the many-views explanation the measurement concerns the “I”; “many branches”, “many worlds”, and “many minds” are different cosmological choices within the same family of interpretations. In the many-minds interpretation the splitting occurs at the level of the individual observer's mind, while the decoherent-histories interpretation depicts a dynamical picture of splitting histories, and so on. These complicated names and theories show the difference and independence between the various explanations, indicating that Everett's theory has been interpreted and developed in different ways. Precisely because many researchers understand Everett differently, many different and even mutually hostile versions have been produced; all of these theories are rich developments of Everett's theory. As for the many-world interpretation itself, Everett never used the term “multi-world”; it is a later summary of his theory.
The many-worlds interpretation of quantum mechanics has now been with us for nearly sixty years and is accepted by more and more people. In 1988 the political scientist L. David Raub surveyed 72 quantum physicists and cosmologists on whether the many-worlds interpretation is right or wrong: more than 58% thought the interpretation is right; 18% disagreed with it; 13% answered "maybe it is right, but I am not yet sure"; and 11% answered "I don't know". In July 1999, at a conference on quantum computing held at the Newton Institute in Cambridge, the interpretations of quantum mechanics were again put to a vote, refreshing the interpretation rankings. More than 30 of those polled adhered to the many-worlds interpretation; the Copenhagen interpretation received 4 votes; modified quantum dynamics (GRW) 2; hidden-variable interpretations 2; and other answers (including no answer) 50 (Zhang, 2010) [3]. In February 2001 Wheeler and Max Tegmark published an article commemorating the centenary of the discovery of the quantum, arguing that decoherence theory and the latest experiments show that the many-worlds interpretation has superseded the orthodox Copenhagen interpretation and has become the new orthodox explanation of quantum mechanics endorsed by most physicists (Tegmark and Wheeler, 2001) [4].
There are also problems with the way the many-worlds interpretation handles the problems of quantum mechanics. Everett, for example, claimed to have solved the "measurement problem", that is, the question of how a definite classical reality emerges from quantum indeterminacy, yet this claim still meets strong objections. The key difficulty lies not in his mathematics or logic but in the suggestion, from which the name "many worlds" derives, that all possibilities are realised. According to the many-worlds interpretation, all possible outcomes coexist before measurement; each possible result exists independently and disjointly in its own division of the universe, and the laboratory need not worry about whether any particular result does or does not appear. The measurement process is thus explained entirely within physics, with no magical "collapse". But, as David Lindley objected, "If this independent universe is completely non-interacting, it is impossible to do experiments in one world to reveal the existence of other universes" (Lindley, 1996) [5]. The result is that the basic idea of the many-worlds interpretation is free of any possible test, so that the interpretation falls into a transcendental, metaphysical position. If it explains everything, then in fact it explains nothing; hence metaphysicians, with John Hawthorne as a representative, have strongly criticised the "ambiguity" and lack of explanatory power of Everett's interpretation (Hawthorne, 2009) [6].
From this point of view, therefore, there is a certain ambiguity in the many-worlds interpretation, and this is one of its shortcomings. To become a complete scientific theory, the many-worlds interpretation still needs to be improved in many respects; beyond its explanatory claims, it must also deal with the superfluous ontology and the air of scientific fantasy embodied in the theory.
3. The "Orthodox" Position of the Many-Worlds Interpretation
Through the study of quantum mechanics and of the Copenhagen and many-worlds interpretations, we have deepened our understanding of the many-worlds interpretation and can now assess its rationality more clearly. Compared with the Copenhagen interpretation, the many-worlds interpretation can claim a strong orthodoxy in three respects.
3.1. Overcoming Dualism with Universality
The Copenhagen interpretation insists that physical theory must adhere to a dualism, and in this it lacks thoroughness. It holds that there is a strict boundary between the microscopic quantum world and the everyday macroscopic world, so that we need quantum mechanics to describe the microscopic world and classical mechanics to explain the macroscopic world. This is a dualistic way of explaining the world, and it is fundamentally unscientific and contrary to the principles of physics. It draws a line between the macroscopic and the microscopic, yet the Copenhagen interpretation never gives a clear criterion for where that line lies.
In the many-worlds interpretation, by contrast, no artificial additional conditions ought to be imposed, and a boundary between the "macroscopic world" and the "microscopic world" is not consistent with the laws of physics. This touches the central paradox of quantum theory as the Copenhagen interpretation maintains it, "the unique role that mind plays in deciding the real process". According to Copenhagen, the act of observation makes the superposed potential realities of the electron condense into a single concrete reality, while atoms left to themselves can make no such choice. The many-worlds interpretation instead treats the wave function, from the standpoint of realism, as a real physical existence, and holds that the whole universe can be described by a wave function. Quantum mechanics applies universally to the entire universe, not merely to the microscopic quantum world. Classical physics can be deduced, logically and dynamically, from the principles of quantum mechanics, so that the microscopic and macroscopic physical worlds are described in a unified way. We no longer need to rely on the principle of complementarity and on classical concepts in order to describe the microscopic quantum world, and objective reality is thereby returned to the world of physics. In this sense the interpretation of quantum mechanics is no longer a vague "scholastic" debate but truly becomes a part of quantum mechanics. The many-worlds interpretation cancels wave-function collapse by means of decoherence theory and adheres to a monistic model of evolution. By eliminating the sudden change caused by the collapse of the wave function, it not only upholds determinism but also conforms to strict causality.
3.2. Replacing Positivism with Quantum-State Realism
Many-worlds theory has always insisted on solving the quantum measurement problem from the standpoint of physics and objectivity, without additional hypotheses. It breaks the Copenhagen dualism of the microscopic world and the classical world by physical means and solves the problem of the relationship between the quantum world and the classical world. It affirms that the quantum superposition state is the "most real" state of the entire physical world, insisting that the quantum state is an objective, invisible representation of reality that is independent of us. The entire universe can be described by the Schrödinger equation, and the wave function never collapses; in particular, nothing is brought about by the subjective "glance" of an observer. The universal quantum reality of the universe provides a unified description of the microscopic and macroscopic worlds and thus gives us a picture of objective realism. The many-worlds interpretation insists on objective determinateness in the microscopic, macroscopic and cosmological domains alike, securing the real determinateness of the world, above all of the everyday macroscopic world. Its purpose is to study the entities of quantum mechanics; it inherits the traditional project of exploring physical reality as something objective and deterministic, with the ultimate aim of understanding the real objective world.
3.3. Many Worlds in Place of a Single Classical World
According to the "random collapse of the wave function" of the Copenhagen interpretation, only one possibility becomes reality, while the other possibilities are eliminated at random in the collapse. Therefore, when a quantum state is observed from the macroscopic world, only one definite result is obtained, and we live in a single, classical world. The quantum world itself is full of diversity, with countless possibilities, but measurement "controls" the quantum choice, turning the numerous possibilities into a unique reality. Defenders of the many-worlds interpretation, on the contrary, start from the objectivity of quantum theory and completely abandon the artificial hypotheses added to it. They believe that quantum mechanics is universal, so that the formal system of quantum mechanics truly describes the objective reality of things. In the macroscopic domain the superposition of quantum states does not disappear; rather, in the process of measurement the measured particle, the measuring instrument and the observer split together. As time goes on, the state vector decomposes along mutually orthogonal directions. The universe therefore continually splits into unobservable but equally real worlds; the universe splits into parallel universes. The wave function is regarded as the ultimate reality of the whole universe, so that in a holistic sense the totality of the many worlds recovers determinism. In this picture the universe splits again and again into "parallel universes"; these parallel universes are not physically connected, but they are equally real.
4. Conclusion
Everett was not the first physicist to criticise the collapse of the wave function in the Copenhagen interpretation, but he did "open up new territory" by obtaining an internally consistent theory of the universal wave function from the equations of quantum mechanics themselves. Everett's intention to go beyond the Copenhagen interpretation is clear in his reply to DeWitt: the Copenhagen interpretation is incomplete and hopeless because it depends a priori on classical physics and on philosophy, and it is absurd because the concept of reality that holds in the macroscopic world is completely rejected in the microscopic world. Although the early many-worlds theory was treated as little more than a bizarre hypothesis, there is an essential difference: as we have seen, many-worlds theory follows naturally from the rigorous formal system of quantum mechanics. Compared with the Copenhagen interpretation, the many-worlds interpretation is simple and economical: it does not require the wave function to vanish at will; instead the wave function continually splits into further wave functions, forming a branching tree, each branch of which represents a complete universe. In most cases, however, the coherence between these wave functions is lost through environmental perturbation, and this decoherence plays the role that wave-packet collapse plays in the orthodox account.
Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] Everett, H. (1957) Relative State Formulation of Quantum Mechanics. Reviews of Modern Physics, 29, 454-462.
[2] Jammer, M. (1987) Philosophy of Quantum Mechanics. 509.
[3] Zhang, L. (2010) Quantitative Measurement of Multi-World Interpretation. The Review Philosophical Trends, 7, 85-90.
[4] Tegmark, M. and Wheeler, J.A. (2001) 100 Years of the Quantum. Scientific American, 284, 68-75.
[5] Lindley, D. (1996) Where Does the Weirdness Go? Why Quantum Mechanics Is Strange, but Not as Strange as You Think. Basic Books, 166-167.
[6] Hawthorne, J. (2009) A Metaphysician Looks at the Everett Interpretation. Oxford University Conference, July 2009, 263-264.
How far is it around the earth?
Well, if you look it up on Google, the circumference of the earth is near enough 40,000 km. So it would seem that the distance from Wokingham to Wokingham is 40,000 km.
But this is only part of the answer. I did not ask for the shortest distance from Wokingham to Wokingham, which would after all be zero, and there is no rule that says we have to complete just one orbit. After two orbits and the corresponding 80,000km we would find ourselves back in Wokingham. But why stop there? We could opt for three or more orbits, in fact there is no limit as to how many orbits we could count before we deem our task to be complete.
This means that in effect there are an infinite number of discrete distances around the earth all of which lead back to our point of departure. Equally we could go in the opposite direction, in which case we can regard the distance as being negative. Again there are an infinite number of such distances. We can write a simple formula to calculate them:
Equation 1:   d(n) = n × 40,000 km

where n = ..., -3, -2, -1, 0, 1, 2, 3, ..., with positive n counting orbits in one direction and negative n counting orbits in the other.
Even this falls short of a complete answer because in our imaginary orbiter we can travel as fast or as slow as we like. The distances we have measured so far are measured at a low speed where the effects of relativity are negligible. But if we were to travel much faster, at close to the speed of light, then the distance we perceive is reduced or foreshortened.
It was Einstein who gave us our present understanding of how relativity affects distance. He did so initially for objects travelling at constant speed, in what is now called Special Relativity: special because it deals with the special case of things moving at constant speed. Later on he dealt with objects that are accelerating or decelerating, in what has come to be known as General Relativity. Here we need only concern ourselves with the special case, since our orbiter is assumed to be travelling at constant speed; that is, it has constant tangential speed.
What Einstein showed was that distances measured in the direction of travel are foreshortened or compressed, those at right angles to the direction of travel are unaffected. The extent of this foreshortening is governed by a factor called the Lorentz factor. The Lorentz factor is usually referred to as Gamma (γ) given by a simple formula and tells us the extent of foreshortening for a given speed.
Equation 2:   γ = 1 / √(1 - v²/c²)

where v is the speed of travel and c is the speed of light.
If we plot the value of Gamma against speed we see that for very low speeds it has a value of 1, but that it diverges rapidly to infinity as we approach the speed of light.
Figure 1: The Lorentz factor γ plotted against speed; it is close to 1 at low speeds and diverges to infinity as the speed approaches c.
So for example if we are traveling at 86.6% of the speed of light, where Gamma has a value of 2, then the distance that we see from our moving perspective is half that seen by a stationary observer. So from our orbiter the earth would seem to be only 20,000km around. Of course as before it is also 40,000km and 60,000km and so on depending on how many orbits we decide to complete before we arrive back at our departure point. By choosing the right speed and number of orbits we can arrange to make the distance around the earth anything we care to choose.
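To make these numbers concrete, here is a minimal Python sketch (my own, not part of the original post; the constant and function names are assumptions made for illustration) that computes the Lorentz factor for a few speeds and the contracted circumference seen from the orbiter:

```python
import math

C_EARTH_KM = 40_000.0  # circumference of the Earth used in the text (approximate)

def gamma(beta):
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def contracted_distance(distance_km, beta):
    """Distance along the direction of travel as seen by the moving observer."""
    return distance_km / gamma(beta)

for beta in (0.1, 0.866, 0.99995):
    print(f"beta={beta:.5f}  gamma={gamma(beta):8.2f}  "
          f"one orbit appears {contracted_distance(C_EARTH_KM, beta):10.1f} km long")
```

At 86.6% of c this prints a Lorentz factor of about 2 and a perceived circumference of about 20,000 km, matching the example above.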
So what about the distance around the earth? Just how far do I have to travel to get from Wokingham to Wokingham? The short answer to the question "How far is it around the earth?" has to be: how far do you want it to be?
Let’s say we want to always travel from Wokingham to Wokingham covering a distance of 400km, how many different ways can we find to achieve this?
We might achieve this by completing one orbit at a speed where Gamma has a value of 100, but we could equally well complete two orbits where Gamma equals 200 or three orbits where Gamma equals 300. Once again we can write a simple formula which describes all the possible cases:
Equation 3:   γ(n) = n × (40,000 km / 400 km) = 100 n

where n = 1, 2, 3, ..., ∞ is the number of orbits completed.
There are in fact an infinite number of ways in which the distance around the earth can be arranged to be 400km and Equation 3 represents the complete set. Each successive strategy involves an integer multiple of the value of Gamma in the first or base strategy. We can regard these solutions as being associated with a quantisation of the value of Gamma in increments of the base value. This is despite the fact that Gamma is in all other respects a continuous variable.
Relativity not only affects the observer’s perception of the distance travelled but also the time taken to travel it. For such a moving observer time is dilated or slowed down. The extent to which it is slowed is the same factor Gamma as affects the perception of distance. For an orbiter travelling at 99.995%c, a speed where Gamma equals 100, a stationary observer would measure the time of a single orbit as being roughly 133 msecs. For the observer travelling in the orbiter the time taken to complete each orbit is slowed down by the factor Gamma, effectively divided by Gamma and so would appear to be 1.33msecs. This change in the perception of time has a knock on effect on the perception of frequency. Orbital frequency is the reciprocal of the orbital period, so the stationary observer will see the orbital frequency as 7.5Hz, whereas the moving observer will see his orbital frequency as 750Hz for the case where Gamma equals 100. In the case where Gamma equals 200 the moving observer would see the frequency as 1500Hz and so on.
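As a rough numerical check of these figures, here is a short sketch (mine, not from the original post; the constants are approximate) that computes the orbital period and frequency in both frames for the first few strategies:

```python
import math

C_EARTH_M = 4.0e7     # orbit length for the stationary observer, in metres
C_LIGHT = 2.998e8     # speed of light in m/s (approximate)

for gamma in (100, 200, 300):
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)      # speed as a fraction of c
    period_lab = C_EARTH_M / (beta * C_LIGHT)      # orbital period seen by the stationary observer
    period_orbiter = period_lab / gamma            # time dilation: period seen on board
    print(f"gamma={gamma:3d}  lab period={period_lab*1e3:6.1f} ms ({1/period_lab:5.2f} Hz)   "
          f"orbiter period={period_orbiter*1e3:6.3f} ms ({1/period_orbiter:7.1f} Hz)")
```

For Gamma equal to 100 this reproduces the figures quoted above: roughly 133 ms and 7.5 Hz for the stationary observer, 1.33 ms and 750 Hz on board.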
Here we are dealing with speeds where the orbital velocity is very close to the speed of light, and although there is a difference in orbital period between successive choices of Gamma, it is tiny: the speed needed for Gamma to equal 100 is already 99.995% of c, so the dynamic range is very small. To all intents and purposes the orbiter is travelling at c, which means that the orbital frequency seen by the stationary observer remains more or less constant for all of our various strategies.
This is not the case for the orbital frequency seen by the moving observer. He or she is moving at close to the speed of light where time is slowed by the factor Gamma and so sees the orbital frequency as increasing directly with the value of Gamma or the number of orbits completed in our strategy. Taken overall then, these frequencies form a harmonic series.
If we turn this on its head and look at the wavelength of the waves whose frequency this represents: when n equals 1, with Gamma equal to 100, the distance travelled during one orbit of our orbiter is 400 km. In the next state, where n equals 2 and Gamma equals 200, it takes two orbits to cover the 400 km target distance, so the distance around a single orbit, the wavelength, is 200 km. With n equal to 3 the distance around a single orbit is 133.3 km, with n equal to 4 it is 100 km, with n equal to 5 it is 80 km, and so on.
We have seen that relativity affects the perception of the distance travelled and of the time taken to travel that distance and, in the case of an orbiting object, it also affects the perception of orbital frequency. There is one further effect of relativity on orbiting objects which has some importance: its effect on the perception of angular displacement. For the stationary observer the orbiter circumnavigates the earth every 133 msecs, and in doing so its angular displacement is 360 degrees, or 2π radians, for each orbit. Hence the total angular displacement for each of the strategies we have described is 2πn radians. This is not the case for an observer on board the orbiter. For such an observer the orbital radius is at right angles to the direction of travel and so is unaffected by relativity. The strategies are chosen so that the whole journey covers 400 km, one hundredth of the actual circumference of the earth, so the total angular displacement seen by the observer on the orbiter is 2π/100 for all of the strategies described. Putting this another way, it is 2π/100n for each orbit.
Now let’s take a look at the hydrogen atom.
During the 18th and 19th centuries it was discovered that when white light is shone through a gas the resulting spectrum contains dark lines. These are located at wavelengths specific to the type of gas, and they later formed the basis of spectroscopy. Work by the Swiss mathematician and numerologist Balmer led to a formula that linked six of the various wavelengths for hydrogen. Using this, Balmer was able to predict a seventh spectral line, which was subsequently identified in observed spectra. However, Balmer's formula did not predict all of the spectral lines of hydrogen. The Swedish physicist Johannes Rydberg was able to generalise Balmer's formula in such a way that his new formula predicted all the spectral lines of hydrogen. The atom is seen as occupying one of a number of discrete energy states, that energy being carried by the orbiting electron. Transitions from a high energy state to a low energy state result in the release of energy in the form of a photon; those from a low energy state to a high energy state are the result of energy being absorbed from an incident photon.
The Rydberg formula is most often written as:
Equation 4:   1/λ = R_H (1/n₁² - 1/n₂²)

where λ is the wavelength of the line, R_H is the Rydberg constant for hydrogen, and n₁ < n₂ are positive integers labelling the lower and upper energy levels.
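As an illustration (my own sketch, not part of the original post; the quoted value of R_H is the standard one for hydrogen, given approximately), Equation 4 reproduces the familiar visible Balmer lines:

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m (approximate)

def wavelength_nm(n1, n2):
    """Wavelength of the hydrogen spectral line for a transition between levels n1 < n2."""
    inv_lambda = R_H * (1.0 / n1 ** 2 - 1.0 / n2 ** 2)
    return 1e9 / inv_lambda

# Balmer series: transitions down to n1 = 2 give the visible hydrogen lines.
for n2 in range(3, 8):
    print(f"n = {n2} -> 2 : {wavelength_nm(2, n2):6.1f} nm")
```

The first value printed, about 656 nm, is the well-known red hydrogen-alpha line.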
It is important to understand that Rydberg’s formula is based on the results of experiment and observation. It does not seek to explain the spectral lines, rather it seeks to describe them and it is complete, that is it describes objectively all of the spectral lines for hydrogen. As such it is a sort of gold standard which any successful model for the hydrogen atom must satisfy in order to be valid.
The first such model was described by Niels Bohr around 1912. It simply balances the electrical force of attraction between the hydrogen nucleus and the orbiting electron with the centrifugal force tending to separate them. Bohr needed a second equation in order to solve for the two unknown quantities of orbital velocity and orbital radius. He found one in the work of a colleague, John W Nicholson. Nicholson had observed that Planck’s constant had units or dimensions which were the same as those of angular momentum and so suggested that the angular momentum of the orbiting electron was equal to Planck’s constant. He went one step further and argued that angular momentum could only ever take on values which were an integer multiple of Planck’s constant. In other words he argued that angular momentum was quantised. Armed with this assumption, Bohr was able to solve his equations in such a way that the differences in energy between the various energy levels exactly matched those of the Rydberg formula.
Job done you might think, but there was a problem, in fact there were a number of problems with the Bohr model. The most alarming was that the model required that the electron should be capable of moving from one orbit to another without ever occupying anywhere in between, the so called quantum leap. But this was not the only problem. The model failed to take account of the recently described phenomenon of relativity. It failed to explain why the orbiting electron did not emit a type of radiation called synchrotron radiation, which is characteristic of all other orbiting charges. It failed to explain a phenomenon called Zero Point Energy, which is the residual energy present in each hydrogen atom, even when it is cooled to a temperature of absolute zero where all Brownian motion has ceased. And it predicts that the size of the atom increases as the square of the energy level. Since there is no theoretical limit on the order of the highest energy level this would allow for atoms where the nucleus is in one location and the orbiting electron tens of metres if not hundreds of metres away. This change in physical size of the atom presents another problem: The hydrogen atom has the same physical and chemical properties irrespective of its energy level. It is difficult to imagine that this can be the case when these properties depend on the morphology of the atom and if that morphology can vary over such a large dynamic range.
The idea that angular momentum is quantised in units of Planck’s constant has pervaded physics ever since. It forms an integral part of later work by the French physicist Louis de Broglie in his wave/particle duality and the Austrian physicist Erwin Schrödinger in his eponymous wave equation. And yet there is considerable evidence to suggest that angular momentum cannot be quantised in this way. This in part comes about because it turns out that the spectral lines are not single lines at all, but closely spaced pairs of lines. The explanation for this is that the electron itself is spinning on its axis and that the sense in which it is spinning can be either the same as that of the electron orbit or the opposite. Hence the angular momentum of the spinning electron either adds to that of the orbiting electron or it detracts from it. This alters the energy associated with each energy level, but only by the smallest amount some ten thousand times less than that of the orbit. If angular momentum were only ever to take on values which are integer multiples of Planck’s constant then the angular momentum associated with the electron spin could only ever be equal to Planck’s constant or a multiple of it. That means that the total angular momentum could only ever be Planck’s constant plus Planck’s constant or Planck’s constant minus Planck’s constant. It clearly isn’t and the only sensible explanation is that the angular momentum associated with the spin of the electron is at least ten thousand times less than that of the electron orbit, which is supposedly equal to Planck’s constant. Hence angular momentum cannot be quantised in the way suggested since there exist entities whose angular momentum is less than Planck’s constant.
Back to Rydberg: Rather than use the somewhat obscure wave number (1/λ), the Rydberg formula can be expressed in terms of the energy emitted or absorbed when a transition takes place. This is achieved by multiplying both sides of Equation 4 by c, the velocity of light and by h, Planck’s constant. Gathering terms and substituting the analytical value for RH gives:
Equation 5:   E = ½ m c² α² (1/n₁² - 1/n₂²)
Where m is the rest mass of the electron and α is a constant known as the Fine Structure Constant of which Richard Feynman once said:
Well we are about to find out.
Equation 6
Equation 7
It is reasoned here that the electron orbiting the atomic nucleus must do so at a constant radius, that is, at the same orbital radius for every energy state. Anything other than this would imply the existence of the physically impossible 'quantum leap', the ability to move from A to B without occupying anywhere in between. This in turn means that there can be no change in potential energy when the electron transitions from one energy state to another. All changes in energy must therefore be kinetic in nature. Hence the energy of the electron in the nth energy state must be
Equation 8:   E_n = ½ m v_n²
Where vn is the orbital velocity in the nth energy state.
And it is the difference between the energy ceiling and the energy in the nth energy state that is expressed in the Rydberg series. Combining Equation 6, Equation 7 and Equation 8 to calculate the energy potential in the nth energy state gives
Equation 9:   ½ m c² - ½ m v_n² = ½ m c² α² / n²
Equation 9 can be simplified to give
Equation 10:   1 / √(1 - v_n²/c²) = n / α
The term on the left hand side of Equation 10 will be recognised from Equation 2 as the Lorentz factor Gamma (γ) and hence
Equation 11:   γ_n = n / α
Since 1/α = 137.03 we can rewrite Equation 11 as
Equation 12:   γ_n = 137.03 × n
where n = 1, 2, 3, 4, 5, ..., ∞. We can calculate the orbital velocity in the base energy state using Equation 2 with 137.03 as the value of Gamma. This gives an orbital velocity close to the speed of light, at 99.997331% of c, which means that the dynamic range of the orbital velocity between the lowest or base energy state and the energy ceiling of the atom is extremely small.
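A quick numerical sketch (mine, not the author's; the constants are approximate textbook values) of what Equation 12 implies for the first few levels, together with the energy gap ½mc²α²/n² below the ceiling, which comes out at the familiar 13.6 eV for n = 1:

```python
import math

ALPHA = 1.0 / 137.03   # fine structure constant, as used in the text (approximate)
M_E = 9.109e-31        # electron rest mass, kg
C = 2.998e8            # speed of light, m/s
EV = 1.602e-19         # joules per electron-volt

for n in (1, 2, 3, 4, 5):
    gamma_n = n / ALPHA                           # Equation 12: quantised Lorentz factor
    beta_n = math.sqrt(1.0 - 1.0 / gamma_n ** 2)  # orbital speed as a fraction of c
    e_n = 0.5 * M_E * C ** 2 * ALPHA ** 2 / n ** 2  # energy below the ceiling for level n
    print(f"n={n}  gamma={gamma_n:7.2f}  v={beta_n*100:10.6f}% c  "
          f"E_ceiling - E_n = {e_n/EV:6.3f} eV")
```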
A comparison between Equation 3 and Equation 12 is obvious and shows them to be identical in form. Indeed, had we chosen to circumnavigate the earth in 291.9 km instead of 400 km, they would have been identical. This suggests that the mechanism that underpins the dynamics of the hydrogen atom is centred on the effects of relativity and not on the arbitrary quantisation of angular momentum as in the current standard model.
We also see that Planck's constant takes on a special significance. If the orbital angular momentum of the electron is taken to be Planck's constant, then it is seen to be substantially unaltered between the various energy levels. It has a value equal to the product of mass, orbital radius and orbital velocity, and this last varies over an extremely small dynamic range, close to c, for all energy levels. Turning this on its head reveals that the orbital radius is indeed constant for all energy levels and has a value of
Equation 13
The orbital angular momentum does not change as the energy level changes, so rather than being quantised in the way that the Standard model suggests it has the same value for all energy levels.
Louis de Broglie was the first to suggest that the orbiting electron could be regarded as a wave, and this led eventually to the idea of wave-particle duality. De Broglie struggled to reconcile the wave solution, ultimately expressed in terms of the Schrödinger equation, and the particle solution, represented by the Bohr model. He recognised that there were two solutions but believed that these could be expressed as one solution in the wave domain and effectively a different solution in the particulate domain, hence the expression wave-particle duality.
Here we see that the wave characteristics of the particle derive directly from its orbital motion, the wavelength being the orbital circumference, the amplitude is the orbital diameter and the frequency is the orbital frequency. This is true both in the domain of the stationary observer and that of the orbiting electron. However the properties of these two views of the same wave are different and the difference is brought about by the effects of relativity. The stationary observer sees the orbital frequency as being more or less constant while the orbiting electron sees the frequency as being dependent on the energy level and forming a harmonic sequence.
It is a similar story with the properties of the particle. Here the orbital path length is seen by the stationary observer as being more or less constant while that of the orbiting electron is seen as being a fraction of that and dependent on the energy level.
Hence there is not so much a wave particle duality, indeed this can be more accurately described as a wave particle identity, since the relationship between wave and particle is consistent across each domain. There is however a wave duality and a separate but related particle duality.
The picture that emerges of the hydrogen atom is one in which the electron orbits at near light speed and at a constant radius, irrespective of energy level. This is consistent with the atom having the same physical and chemical properties for all energy levels.
Thus far we have obtained a possible description of the hydrogen atom, one which is far more rational than that proposed by Bohr or in the standard model. We have also provided an explanation for the mysterious Fine Structure Constant, done away with the quantum leap, restored the status of the electron to that of a discrete particle in the classical sense, explained the dual nature of the wave like behaviour of the electron and the dual nature of its particle like behaviour and explained the phenomenon of Zero point energy.
However this still falls short of what is necessary. If this model is to be deemed correct, it must further explain precisely how and why the value of Gamma is quantised in the way that we have seen. To do so it is necessary to describe the mechanism that underpins this quantisation and that is what I plan to cover in the next post. It is this lack of a mechanism to describe the quantisation of angular momentum in the Standard model that is its Achilles Heel.
7bb28c1fc32d26e5 | Bestand wählen
Lecture 10. Particles on Rings and Spheres... a Prelude to Atoms
Welcome back to Chem 131A. Last time we talked about more realistic oscillators, the 6-12 potential and the Morse potential, and then we moved beyond one-dimensional systems, where there is a single variable, to the particle in a two-dimensional box. We discovered that under certain conditions, for example when the lengths of the box are the same, there is degeneracy, and the same is true in three dimensions. Our main weapon was that we took a two-dimensional equation in two variables and separated it into two one-dimensional equations, both of which we already knew how to solve, and simply substituted the known solutions.

Today, rather than talking about a particle in a box, I want to talk about a particle confined to a ring, which is an interesting thing to solve in its own right, and then a particle confined to the surface of a sphere. Both will be important as a prelude to atoms, and both are completely artificial: ask yourself what kind of potential keeps a particle exactly on a circular ring. On the ring there is only kinetic energy, and the instant you step off the ring the potential is infinite, or something like that, so we may get some anomalous-looking behaviour. What it really means is that we set up the equation in two dimensions, freeze one dimension, and solve the other, which will be an angular variable; the radius of the ring is something we fix at the outset, and we treat the radius of the sphere the same way. Luckily, when we do real atoms, at least simple ones without too many electrons, we can factorize the problem and reuse the solutions for the ring and the sphere, pasting them together in shells, rather like growing an onion.

We argued early on that a particle confined to a ring, which we can think of as rather like an electron in a classical orbit, has to have a de Broglie wavelength that matches up perfectly every time it goes around. If it does not, the wave cancels itself out, being basically minus itself half the time, and there is nothing left of the probability. So the qualitative condition is that an integral number of wavelengths must equal the circumference of the ring: n λ = 2πr. If that condition is met, the wave comes back onto itself and we have a stable standing-wave probability pattern that does not change in space or time. The de Broglie wavelength is λ = h/p, so we can substitute: n h/p = 2πr. Dividing both sides by 2π and multiplying by p gives n ħ = r p. That is interesting, because r times p has something to do with angular momentum, and whenever something is confined to move in a circle we think about its angular momentum, as we learned in classical physics.

Let the ring be oriented in the xy plane. Then the particle has an angular momentum vector J = r × p, the vector cross product; here r points out to the ring and p is along the direction the particle is going, so r and p are always at right angles, and J = r × p points in the z direction. Its z component is J_z = r p sin(90°) = r p, and by the condition that an integral number of wavelengths fits, J_z = n ħ. So by this back-door route we find that angular momentum seems to be quantized, coming in units of ħ.

The time-independent Schrödinger equation for the particle on the ring looks like the one for the two-dimensional box: the kinetic-energy terms, the second partial derivatives with respect to x and y, acting on ψ(x, y) and equal to E ψ(x, y). The problem is that x and y are very awkward variables here, because r = √(x² + y²) and so forth, and if you try to bully your way through the equation in Cartesian coordinates, which are set up for square problems, you will never get anywhere. You first have to change variables so that you are in the same kind of harness as the problem; then the problem will seem very easy. We have a circular problem, so we use polar coordinates: r, which for the ring is just a constant (the perfect kind of variable to have, since derivatives with respect to it will eventually drop out), and φ, which says where we are on the ring. The relationships are x = r cos φ and y = r sin φ, with r² = x² + y² always constant.

Unfortunately we have to transform not only the variables x and y but also the derivatives, and that is trickier: it takes some genuine multivariable calculus. I am not going to grind through the whole transformation over and over, because it gets very tedious, but I want you to see once exactly how it is done, so that you understand where the terms come from. We use the chain rule, and we recognise that for a function of more than one variable the total change comes from the slope with respect to x times the change in x plus the slope with respect to y times the change in y. So the derivative of the wave function with respect to r is ∂ψ/∂r = (∂ψ/∂x)(∂x/∂r) + (∂ψ/∂y)(∂y/∂r); you can think of the pieces almost as fractions that cancel. Using x = r cos φ and y = r sin φ, this gives ∂ψ/∂r = cos φ ∂ψ/∂x + sin φ ∂ψ/∂y.

For the second derivative we differentiate this result again. It gets messy fast, but it is a really good exercise in thinking clearly: treat ∂ψ/∂x and ∂ψ/∂y as functions in their own right, put them back into the formula, and expand carefully without skipping any steps. The result is ∂²ψ/∂r² = cos²φ ∂²ψ/∂x² + sin²φ ∂²ψ/∂y² + 2 sin φ cos φ ∂²ψ/∂x∂y. (For the nice functions we deal with, the order of the mixed partial derivatives does not matter.) The derivative with respect to φ I will leave as a problem; you do it the same way, starting from ∂ψ/∂φ, using the chain rule, and expressing x/r and y/r as cos φ and sin φ. You find a slightly longer expression: minus r times ∂ψ/∂r, plus r² times three second-derivative terms very similar to the ones above but with the sines and cosines arranged slightly differently.

Now we can get the relationship we need between the Cartesian and polar second derivatives, because we have the second derivatives with respect to r and φ in terms of those with respect to x and y. There is still quite a bit of algebra, which again I will leave to you, but the result is

∂²ψ/∂x² + ∂²ψ/∂y² = ∂²ψ/∂r² + (1/r) ∂ψ/∂r + (1/r²) ∂²ψ/∂φ².

We can check that we have not made a mistake by looking at the units: forgetting about the units of the wave function, every term has length squared on the bottom, the angle φ being just a ratio with no units of its own.
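To check the polar form of the two-dimensional Laplacian quoted above without grinding through the algebra, here is a small symbolic sketch (mine, not part of the lecture) that compares the Cartesian and polar expressions on a simple test function using SymPy:

```python
import sympy as sp

x, y, r, phi = sp.symbols('x y r phi', positive=True)

# An arbitrary smooth test function in Cartesian coordinates...
f_xy = x**2 * y
# ...and the same function rewritten with x = r cos(phi), y = r sin(phi)
f_rp = f_xy.subs({x: r * sp.cos(phi), y: r * sp.sin(phi)})

# Cartesian Laplacian, then expressed in the polar variables
lap_cart = (sp.diff(f_xy, x, 2) + sp.diff(f_xy, y, 2)).subs(
    {x: r * sp.cos(phi), y: r * sp.sin(phi)})

# Polar form from the lecture: d^2/dr^2 + (1/r) d/dr + (1/r^2) d^2/dphi^2
lap_polar = sp.diff(f_rp, r, 2) + sp.diff(f_rp, r) / r + sp.diff(f_rp, phi, 2) / r**2

print(sp.simplify(lap_cart - lap_polar))   # prints 0: the two forms agree
```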
Now, at this point, we make the totally artificial assumption of the particle on a ring: we freeze r, so wherever we see a derivative with respect to r we set it to zero, because r is not changing. That is great, because all that is left is (1/r²) times the second derivative with respect to φ, which, with φ in place of x, looks awfully similar to things we have already done. The Schrödinger equation therefore simplifies to

-(ħ²/2m)(1/r²) d²ψ/dφ² = E ψ(φ),

where ψ is now a function of φ alone, because r is fixed. This is beginning to look really good, and here is why: m r² is the moment of inertia of a classical particle orbiting at radius r. By keeping the particle on the ring we are doing a rotational problem, and the moment of inertia comes into the problem quite naturally, just from the way the differential equation works out. Writing it in terms of the moment of inertia I = m r²,

d²ψ/dφ² = -(2 I E / ħ²) ψ.

Not surprisingly, the solution of an equation like this is an exponential, so we can write it down: ψ(φ) = A e^{iνφ} + B e^{-iνφ}, with ν = √(2IE)/ħ. You can verify by substitution that this solves the Schrödinger equation; we need the i because after taking the derivative twice we must end up with a minus sign, i² = -1, and we cannot have any real exponentials here. These functions corkscrew around one way or the other, and as they corkscrew around they come back and meet themselves.

The quantization arises because the wave function has to meet itself on the way around: if we add 2π to the angle, the wave function has to take the same value. If that were not true the wave function would not be single-valued in space; depending on which value of the variable we picked, it would have a different value at the same point on the ring. So ν times 2π must equal 2π times an integer, which means ν is an integer. Conventionally, rather than using n, which we reserve for other things to do with energy, we call this integer m, and m is called the magnetic quantum number. Why magnetic? Because if the particle on the ring carries a charge and moves around, it is a current loop, and a current loop makes a magnetic field. That is exactly how you make an electromagnet: you take a length of wire, wind it around a core, put some current through it, and then you can pick up all kinds of stuff, which is quite a fun thing to do in elementary school; I remember spending considerable time doing exactly that.

We can make another connection with classical mechanics. The z component of angular momentum is L_z = x p_y - y p_x, so putting in the quantum operators, L_z = -iħ (x ∂/∂y - y ∂/∂x). If we convert that to polar coordinates by exactly the same tricks as before, and are very careful, it comes out to be the really simple expression -iħ ∂/∂φ, just the first derivative with respect to the angle. That is interesting, because for a given energy we could have any mixture of e^{imφ} and e^{-imφ}; but if we want the particle to be in an eigenstate not only of energy but also of angular momentum, it must be a single corkscrew, either e^{+imφ} going one way or e^{-imφ} going the other. So when we set these problems up, for neatness, we set A or B equal to zero and let m run from minus some value to plus some value, including, apparently, zero. The interpretation is that the energies come in pairs: a particle going around one way at some rate and a particle going around the other way at the same rate have the same kinetic energy.

Our final solution, then, is ψ_m(φ) proportional to e^{imφ}, with m any positive or negative integer, and apparently m = 0 as well. Why not? It solves the equation, and it certainly has no problem meeting up with itself, since with no twist at all it is just flat. If we normalize the wave function over the ring, so that the probability that the particle is somewhere on the ring is one (we integrate e^{imφ} times e^{-imφ} = 1 over φ from 0 to 2π; we no longer integrate over r, since it is fixed and no longer in the problem), we get the normalized wave functions ψ_m(φ) = (1/√(2π)) e^{imφ}, with energies E_m = ħ² m² / (2I).

The lowest energy here is m = 0, which is zero, and this seems to run counter to what I have been saying, namely that a confined particle should have some zero-point energy in order to satisfy the uncertainty principle. The reason this only seems to violate the uncertainty principle is glossed over in the book. First, we simply threw r out: we really had a two-dimensional problem, but we froze one of the variables, and who says you can freeze a variable exactly like that? Second, φ has an artificial range. We say φ goes from 0 to 2π, but it would be the same if it went from 0 to infinity, because it keeps wrapping around over and over; you could argue that φ can be anywhere between minus infinity and infinity and the particle would still be somewhere on the ring. On that reading the position, in terms of the actual value of φ, is completely indeterminate, so the momentum, and hence the energy, can be zero. It is something of a mathematical dodge, but you have to be careful: if you set up a problem, impose a constraint that might not be physical (the particle is exactly on the ring), and then get in a tizzy because something does not seem quite right, the arguments may be quite subtle at that point.

The probability density for an eigenstate of energy and angular momentum is independent of the angle φ: it is flat. Of course it has to be, because there is no reason to expect to find the particle more on one side of the ring than the other when there is nothing to distinguish the two sides. The quantization again arises purely from the fact that the wave function has to match up with itself and be single-valued.
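To put numbers on the ring results, here is a minimal sketch (my own, with an arbitrary atom-sized ring radius as an assumption) that evaluates the first few energies E_m = ħ²m²/(2I) and checks numerically that ψ_m = e^{imφ}/√(2π) is normalised:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.109e-31          # electron mass, kg
R = 1.0e-10              # ring radius, m (an arbitrary, atom-sized choice)

I = M_E * R ** 2         # moment of inertia of the particle on the ring

# Energies E_m = hbar^2 m^2 / (2 I) for the first few magnetic quantum numbers
for m in range(4):
    E = (HBAR * m) ** 2 / (2 * I)
    print(f"m = ±{m}:  E = {E:.3e} J")

# Numerical check that psi_m(phi) = exp(i m phi) / sqrt(2 pi) is normalised over the ring
phi = np.linspace(0.0, 2.0 * np.pi, 20001)
for m in (0, 1, 3):
    psi = np.exp(1j * m * phi) / np.sqrt(2.0 * np.pi)
    norm = np.trapz(np.abs(psi) ** 2, phi)
    print(f"m = {m}: integral of |psi|^2 over the ring = {norm:.6f}")
```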
The next step is to expand to a sphere, and you could think of a sphere as a pile of rings, which is a good way to look at it. Again it is artificial to assume that the particle can go anywhere on the sphere but cannot move radially at all, but nevertheless it is a really good stepping stone to the point where we can write down atomic orbitals and figure out what is going on. We have a sphere, and because we are keeping the particle on the sphere there is no potential energy except the completely arbitrary potential that holds it there, so we just have the kinetic energy, and we have to convert to spherical polar coordinates. In this coordinate system we have two angular variables: φ, which goes around the z axis, changing x into y and so forth, and θ, which starts at the North Pole, where θ = 0, and goes down to the South Pole, where θ = π, or 180 degrees. We do not go around again in θ, because each value of θ traces out a ring, like a tree ring: one near the pole, another a little further down, one at the equator, and so on down to the bottom, and if we carried on past π we would just count all the rings a second time. So φ varies from 0 to 2π, the full 360 degrees around, and θ varies only from 0 to π.

We adopt a right-handed coordinate system, which means that x cross y, curling the fingers from x to y, gives a thumb pointing in the positive z direction. That is by far the easiest way to handle it, because if somebody draws a figure with x shooting off one way and y another, trying to re-orient it in your mind takes a long time; it is quite a mental gymnastic, so just grab your hand and do it. Be careful, though: when you are doing problems and you are right-handed, the pen is in your right hand, so the free hand you wave around is the wrong one. Use the wrong hand, especially in physics, and you will simply get marked wrong because you used a left-handed coordinate system; I have watched that happen to people at various stages of my career. So here is our right-handed coordinate system, and a position is given by three numbers: r, the distance out; θ, the angle down from the North Pole; and φ, the angle around from the x axis. What is very confusing is that if you take a course in math, for some reason I cannot fathom, the roles of θ and φ are swapped. I think that is because in two-dimensional problems mathematicians always use θ and like to keep θ for the same variable, using φ for the other one. In physics, φ goes around the z axis and θ is measured away from the z axis, and you have to keep them straight, because if you open the wrong book you will get things backwards and end up in a terrible mess.

Let's try a practice problem. The volume element in Cartesian coordinates is dV = dx dy dz, a little cube. What is the volume element in spherical polar coordinates? Take a radius r and an angle θ from the z axis. The answer cannot depend on φ, because the geometry is the same all the way around, but it does depend on θ, and here is why: the rings near the North Pole are tiny, and they grow as we move toward the equator, so the volume cut out by a small slice depends on how close we are to the pole. Imagine walking in a little circle around the North Pole: you have walked through every possible longitude in a few steps. Try to do the same at the equator and it is a very long walk. So we have to weight the volume element accordingly. Here is the beauty of calculus: the piece we cut out is really a little curved wedge, larger on the outside than the inside, rather like a little cone, but when the differences are very tiny it is as good as a cube, because no matter how curved something is, a small enough piece of it is straight. That is why calculus is so great. So we just need the lengths of the sides of that little cube. The radius of the ring at angle θ is r sin θ, so the side along φ has length r sin θ dφ; the side along θ is r dθ; and the side in the third, radial direction is just dr, since neither θ nor φ changes along it. Multiplying the three together gives the volume element

dV = r² sin θ dr dθ dφ.

That is important to know, because you are going to have integrals involving ψ over all space, and you need to know what dV is in terms of these variables; now you know.
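As a quick numerical sanity check (my own sketch, not from the lecture), summing the volume element r² sin θ dr dθ dφ over a whole sphere should reproduce the familiar 4πR³/3:

```python
import numpy as np

R = 2.0   # sphere radius, arbitrary units
n = 100   # grid points per coordinate (midpoint rule)

r = np.linspace(0.0, R, n, endpoint=False) + R / (2 * n)
theta = np.linspace(0.0, np.pi, n, endpoint=False) + np.pi / (2 * n)
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False) + np.pi / n
dr, dtheta, dphi = R / n, np.pi / n, 2.0 * np.pi / n

rr, tt, _ = np.meshgrid(r, theta, phi, indexing="ij")
volume = np.sum(rr ** 2 * np.sin(tt)) * dr * dtheta * dphi  # sum of r^2 sin(theta) elements

print(volume, 4.0 / 3.0 * np.pi * R ** 3)   # both come out close to 33.5
```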
derivative again and so forth but if if you don't understand the operator notation then you're very likely to get things wrong because you may stickup and wherever you think there's a blank and that's not correct this thing I'll land squares called Lashonda in this very famous the Legendre polynomials and so forth and as I said the operator takes a bit of getting used to only put the wave function on the right to be careful about that don't put a sign from both those derivatives the operator of the lender squared the Lucia Ondrea has but all the angular energy in the Hambletonian because the other operators had derivatives with respect to ah if we freeze Pa we don't allow any change in our there's no energy that way that
That means that this thing Λ² is what we want to focus on, and it's just the energy to do with all the possible angular motions of things on a sphere. So let's fix r and throw the radial part out, the same way as we did with the particle on a ring, and now we've got this new equation to solve: minus ℏ² over 2mr², times Λ² acting on the wave function, which is a function of θ and φ, is equal to some energy times that same function of θ and φ. If we can find the wave functions that solve that eigenvalue equation, and the eigenvalues, then we know the energies and we know the possible wave functions on a sphere. And of course we expect that it's going to be quantized and so on, because it's a trapped thing again, going round — it's just much more complicated this time, so it's harder to see, but it's got to be similar to what we had before. There is no potential energy here; the only requirement for the quantization is just that the wave function fits into the space. OK, let's do a practice problem here, practice problem 14: let's show that the angular Schrödinger equation is separable. What does that mean? It means that whatever this wave function in θ and φ is, we can write it as a product of something that's only a function of θ and something else that's only a function of φ. And if we can do that, then we guess that the something that's only a function of φ is what we had before, because last time, when we did the two-dimensional particle in a box, the solution was a product and the x part was the same one we had before. Now θ is different from φ, because φ goes all the way round to 2π and θ only goes up to π, so we expect the θ part could be different; but φ is still φ, so that part should be the same as it was before — we get the e^{imφ} stuff — and that saves us a lot of work. OK, here's what we've got to show: it's separable if we can write the solution as a product. To prove that, what we have to do is rearrange the equation so that we have two terms, one of which only depends on θ, plus another term that only depends on φ, equal to a constant. Then we make the same argument as before: if we fix θ and change φ, the sum is still that constant, so both pieces must separately be constant; otherwise it wouldn't work. And that means we can solve a one-dimensional equation for each. So let's try the trial product solution again: we use a capital Θ of θ, some function we don't know — and we don't care what it is at this point — times a capital Φ of φ, and substitute it in.
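In symbols, the eigenvalue problem and the trial product just described read (with \(I = mr^2\) the moment of inertia and \(\varepsilon\) the shorthand introduced next):

\[
-\frac{\hbar^2}{2mr^2}\,\Lambda^2\,\psi(\theta,\phi) = E\,\psi(\theta,\phi),
\qquad
\psi(\theta,\phi) = \Theta(\theta)\,\Phi(\phi),
\qquad
\varepsilon \equiv \frac{2IE}{\hbar^2}
\;\Rightarrow\;
\Lambda^2\,\Theta\Phi = -\varepsilon\,\Theta\Phi .
\]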
We get a number here, ε, where ε is just 2IE over ℏ², and I is the moment of inertia; that just cleans things up so that we don't have to write a lot of extra terms. Now, if we substitute the product in, here's what we find: we have 1 over sin²θ times the second derivative with respect to φ of the product, plus 1 over sin θ times the derivative with respect to θ of sin θ times the derivative with respect to θ of the product, and that is equal to minus ε times the product. I do the same thing as before: I divide both sides by the product. But first, notice that in the term with the second derivative with respect to φ, the function Θ of θ is a constant as far as φ is concerned, so I can pull it out in front — it doesn't matter where it sits; that's what the partial derivative means. In the second term, where the derivatives are with respect to θ, I can pull the function Φ out in front as a constant, because it's just a bunch of derivatives with respect to θ and we're treating φ as constant; we pull it out just the same way we pull a constant out of a derivative. Pulling those out in front makes it much easier to see: now we've got Θ out in front of the 1/sin²θ second derivative of Φ with respect to φ, and Φ out in front of the rest, and, as you can see, that's equal to minus ε times the product. Now if I divide by the product on both sides, the Θ goes away in the term with φ, and the Φ goes away in the term with θ, but we still have this 1/sin²θ, so we have to multiply the whole equation through by sin²θ. If we do that, we finally find the following: we have 1 over Φ times the second derivative of Φ with respect to φ — that's one term — plus a bunch of stuff whose exact form doesn't matter, because it has no φ in it: it's all θ, and it has an ε sin²θ in there; and that's equal to zero. That's good enough, because this thing over here is just φ and this thing over here is just θ, so I've got two equations: one of them is just φ, the other one is just θ. And once we've got it down to one variable each, we can switch from the curly ∂ to the regular d, because it's the same thing, and we can solve the differential equations. The first term, as I said, is going to give us exactly what we had before in the particle on a ring, because it's basically the same equation as the particle-on-a-ring equation; exactly what the constant turns out to be will depend on what Θ happens to be, but that doesn't bother us — we have our e^{imφ} to try. And then whatever Θ happens to be, it's going to be some other function, and it's going to be our business to solve, in the end, that second part of the equation, which is a different equation to do.
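As a compact summary of the separation just performed — pull \(\Theta\) and \(\Phi\) out in front, divide by \(\Theta\Phi\), and multiply through by \(\sin^2\theta\):

\[
\underbrace{\frac{1}{\Phi}\frac{d^2\Phi}{d\phi^2}}_{\text{only }\phi}
\;+\;
\underbrace{\frac{\sin\theta}{\Theta}\frac{d}{d\theta}\!\left(\sin\theta\,\frac{d\Theta}{d\theta}\right) + \varepsilon\sin^2\theta}_{\text{only }\theta}
\;=\; 0 ,
\]

so each piece must separately equal a constant; the \(\phi\) piece is the particle-on-a-ring equation, with solutions \(\Phi(\phi) \propto e^{im\phi}\).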
So we can break this very complex problem up into a series of problems and solve them. It turns out that for a real problem, with an atom, what we're going to do is first break the problem up into the particle on a sphere and then let r be the other variable, and, not surprisingly, we're going to try a product again: guess that the wave function is a product of some function of r, some function of θ, and some function of φ, and see if that works; and if it doesn't, that gives us a clue about how to factorize the thing so that it does. When will that fail? Well, unfortunately it fails right away if we have two electrons, because then it matters where each of them happens to be — they're repelling each other and they're attracted to the nucleus — and it gets too difficult: we can't separate the equation, and so we run into problems. In that case, what we do is treat each electron separately, solve that, and then take the electron-electron repulsion part, which we couldn't handle and couldn't separate, and treat it as a perturbation. How good that will be depends a lot on how close the electrons get: you can imagine that if the electrons get pretty close in space, the potential could get quite high, so the electrons will tend to avoid each other, and that means their motion is correlated. It's a bit like a cat chasing its tail: one starts going this way, the other may go that way, and so forth, so the electrons may not be moving around independently. That kind of electron correlation is a very important aspect of multi-electron atoms and higher-dimensional systems, but we'll touch on that later in the course. For now I will leave it there. Please do take time to look through these transformations: spend a little time in a quiet room with a pencil and a piece of paper and methodically go through them, and at each step ask, what does this mean, what am I doing, why can I do that, and see if you can figure out why these things have the structure they do. The d by dθ of sin θ stuff, with the 1 over sin θ out in front, is going to be a little bit tricky to get, but you can get that too if you work on it. Next time, what we'll do is pick up on our solution. We can guess the solution in φ, but we can't figure out the solution in θ yet, because that's a different differential equation, with sin θ in it, and we haven't figured out anything to do with that equation yet; it's a separate equation for us to solve. We'll have to figure out what kind of techniques we need to solve that equation, then figure out what these functions are, and then hopefully we can get some idea of what these functions on a sphere actually look like. So we'll leave it there and pick it up next time, to figure out the actual wave functions for the particle on a sphere.
Formal metadata
Title: Lecture 10. Particles on Rings and Spheres... a Prelude to Atoms
Alternative title: Lecture 10. Quantum Principles: Particles on Rings and Spheres... a Prelude to Atoms
Series title: Chemistry 131A: Quantum Principles
Part: 10
Number of parts: 28
Author: Shaka, Athan J.
License: CC Attribution - ShareAlike 4.0 International:
DOI: 10.5446/18888
Publisher: University of California Irvine (UCI)
Publication year: 2014
Language: English
Content metadata
Subject area: Chemistry
Abstract: UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:02:23 Particle on a Ring 0:16:49 Quantization 0:24:23 Preparation of Atoms 0:27:16 Spherical Polar Coordinates 0:31:18 Particle on a Sphere 0:33:03 The Legendrian 0:35:06 Spherical Polar Coordinates
Related material
Similar films |
9fb338463ee6ea3d |
Condensed Matter > Quantum Gases
Title: How does an interacting many-body system tunnel through a potential barrier to open space?
Abstract: The tunneling process in a many-body system is a phenomenon which lies at the very heart of quantum mechanics. It appears in nature in the form of alpha-decay, fusion and fission in nuclear physics, photoassociation and photodissociation in biology and chemistry. A detailed theoretical description of the decay process in these systems is a very cumbersome problem, either because of very complicated or even unknown interparticle interactions or due to a large number of constituent particles. In this work, we theoretically study the phenomenon of quantum many-body tunneling in a more transparent and controllable physical system, in an ultracold atomic gas. We analyze a full, numerically exact many-body solution of the Schrödinger equation of a one-dimensional system with repulsive interactions tunneling to open space. We show how the emitted particles dissociate or fragment from the trapped and coherent source of bosons: the overall many-particle decay process is a quantum interference of single-particle tunneling processes emerging from sources with different particle numbers taking place simultaneously. The close relation to atom lasers and ionization processes allows us to unveil the great relevance of many-body correlations between the emitted and trapped fractions of the wavefunction in the respective processes.
Comments: 18 pages, 4 figures (7 pages, 2 figures supplementary information)
Journal reference: Proc. Natl. Acad. Sci. USA 109, 13521 (2012)
DOI: 10.1073/pnas.1201345109
Cite as: arXiv:1202.3447 [cond-mat.quant-gas]
(or arXiv:1202.3447v1 [cond-mat.quant-gas] for this version)
Submission history
From: Axel Lode [view email]
[v1] Wed, 15 Feb 2012 21:18:52 UTC (3,378 KB) |
33d5ed98b2f7cb6d | Wednesday, August 29, 2012
Quantum Gravity and Taxes
The other day I got caught in a conversation about the Royal Institute of Technology and how it deals with value added taxes. After the third round of explanation, I still hadn’t quite understood the Swedish tax regulations. This prompted my conversation partner to remark Swedish taxes are more complicated than my research.
The only thing I can say in my defense is that in a very real sense taxes are indeed more complicated than quantum gravity.
True, the tax regulations you have to deal with to get through life are more a matter of available information than of understanding. Applying the right rule in the right place requires less knowledge than you need for, say, the singularity theorems in general relativity. In the end taxes are just basic arithmetic manipulations. But what’s the basis of these rules? Where do they come from?
Tax regulations, laws in general, and also social norms have evolved along with our civilizations. They’re results of a long history of adaption and selection in a highly complex, partly chaotic, system. This result is based on vague concepts like “fairness”, “higher powers”, or “happiness”, that depend on context and culture and change with time.
If you think about it too much, the only reason our societies’ laws and norms work is inertia. We just learn how our environment works and most of us most of the time play by the rules. We adapt and slowly change the rules along with our adaption. But ask where the rules come from or by what principles they evolve, and you’ll have a hard time coming up with a good reason for anything. If you make it more than five why’s down the line, I cheer for you.
We don’t have the faintest clue how to explain human civilization. Nobody knows how to derive the human rights from the initial conditions of the universe. People in general, and men in particular, with all their worries and desires, their hopes and dreams, do not make much sense to me, fundamentally. I have no clue why we’re here or what we’re here for, and in comparison to understanding Swedish taxes, quantizing gravity seems like a neatly well-defined and solvable problem.
Saturday, August 25, 2012
How to beat a cosmic speeding ticket
xkcd: The Search
After I had spent half a year doing little more than watching babies grow and writing a review article on the minimal length, I got terribly bored with myself. So I’m apparently one of the world experts on quantum field theories with a minimal length scale. That was not exactly among my childhood aspirations.
As a child I had a (mercifully passing) obsession with science fiction. To this day contact to extraterrestrial intelligent beings is to me one of the most exciting prospects of technological progress.
I think the most plausible explanation for why we have so far not made alien contact is that they use a communication method we have not yet discovered, and if there is any way to communicate faster than the speed of light, clearly that’s what they would use. Thus, we should work on building a receiver for the faster-than-light signals! Except, well, that our present theories don’t seem to allow for such signals to begin with.
Every day is a winding road, and after many such days I found myself working on quantum gravity.
So when the review was finally submitted, I thought it is time to come back to superluminal information exchange, which resulted in a paper that’s now published
The basic idea isn’t so difficult to explain. The reason that it is generally believed nothing can travel faster than the speed of light is that Einstein’s special relativity sets the speed of light as a limit for all matter that we know. The assumptions for that argument are few, the theory is extremely well in agreement with experiment, and the conclusion is difficult to avoid.
Strictly speaking, special relativity does not forbid faster-than-light propagation. However, since in special relativity a signal moving forward in time faster than the speed of light for one observer might appear like a signal moving backwards in time for another observer, this can create causal paradoxa.
There are three common ways to allow superluminal signaling, and each has its problems:
First, there are wormholes in general relativity, but they generically also lead to causality problems. And how creation, manipulation, and sending signals through them would work is unclear. I’ve never been a fan of wormholes.
Second, one can just break Lorentz-invariance and avoid special relativity altogether. In this case one introduces a preferred frame and observer independence is violated. This avoids causal paradoxa because there’s now a distinguished direction “forward” in time. The difficulty here is that special relativity describes our observations extremely well and we have no evidence for Lorentz-invariance violation whatsoever. There is then explaining to do why we have not noticed violations of Lorentz-invariance before. Many people are working on Lorentz invariance violation, and that by itself limits my enthusiasm.
Third, there are deformations of special relativity which avoid an explicit breaking of Lorentz-invariance by changing the Lorentz-transformations. In this case, the speed of light becomes energy-dependent so that photons with high energy can, in principle, move arbitrarily fast. Since in this case everybody agrees that a photon moves forward in time, this does not create causal paradoxa, at least not just because of the superluminal propagation.
I was quite excited about this possibility for a while, but after some years of back and forth I’ve convinced myself that deformed special relativity creates more problems than it solves. It suffers from various serious difficulties that prevent a recovery of the standard model and general relativity in the suitable limits, notoriously the problem of multi-particle states and non-locality (which we discussed here).
So, none of these approaches is very promising and one is really very constrained in the possible options. The symmetry-group of Minkowski-space is the Lorentz-group plus translations. It has one free parameter and that’s the speed of massless particles. It’s a limiting speed. End of story. There really doesn’t seem to be much wiggle room in that.
Then it occurred to me that it is not actually difficult to allow several different speeds of light to be invariant, as long as one can never measure them at the same time. And that would be the case if one had particles propagating in a background that is a superposition of Minkowski-spaces with different speeds of light. Because in this case you would then use for each speed of light the Lorentz-transformation that belongs to it. In other words, you blow up the Lorentz-group to a one-parameter family of groups that acts on a set of spaces with different speeds of light.
You have to expect the probability for a particle to travel through an eigenspace that does not belong to the measured speed of light to be small, so that we haven’t yet noticed. To good precision, the background that we live in must be in an eigenstate, but it might have a small admixture of other speeds, faster and slower. Particles then have a small probability to travel faster than the speed of light through one of these spaces.
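Schematically — this is just shorthand for the idea sketched in the last two paragraphs, not notation taken from the paper — the background is a superposition of Minkowski eigenspaces with different speeds of light,

\[
|\text{background}\rangle = \sum_i \alpha_i\,|M_{c_i}\rangle , \qquad \sum_i |\alpha_i|^2 = 1 ,
\]

with \(|\alpha_i|^2\) close to one for the measured speed of light and small for the others, so that superluminal propagation is possible but rare.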
If you measure a state that was in a superposition, you collapse the wavefunction to one eigenstate, or let us better say it decoheres. This decoherence introduces a preferred frame (the frame of the measurement) which is how causal paradoxa are avoided: there is a notion of forward that comes in through the measurement.
In contrast to the case in which Lorentz invariance is violated though, this preferred frame does not appear on the level of the Lagrangian - it is not fundamentally present. And in contrast to deformations of special relativity, there is no issue here with locality because two observers never disagree on the paths of two photons with different speeds: Instead of there being two different photons, there’s only one, but it’s in a superposition. Once measured, all observers agree on the outcome. So there’s no Box Problem.
That having been said, I found it possible to formulate this idea in the language of quantum field theory. (It wasn’t remotely as straightforward as this summary might make it appear.) In my paper, I then proposed a parameterization of the occupation probability of the different speed-of-light eigenspaces and the probability for particles to jump from one eigenstate to another upon interaction.
So far so good. Next one would have to look at modifications of standard model cross-sections and see if there is any hope that this theoretical possibility is actually realized in nature.
We still have a long way to go on the way to build the cell phone to talk to aliens. But at least we know now that it’s not incompatible with special relativity.
Wednesday, August 22, 2012
How do science blogs change the face of science?
The blogosphere is coming to age, and I’m doing my annual contemplation of its influence on science.
Science blogs of course have an educational mission, and many researchers use them to communicate the enthusiasm they have for their research, may that be by discussing their own work or that of colleagues. But blogs were also deemed useful to demonstrate that scientists are not all dusty academics, withdrawn professors or introverted nerds who sit all day in their office, shielded by piles of books and papers. Physics and engineering are fields where these stereotypes are quite common – or should I say “used to be quite common”?
Recently I’ve been wondering whether the perception of science that the blogosphere has created isn’t replacing the old nerdy stereotype with a new one. Because the scientists who blog are the ones who are most visible, yet not the ones who are actually very representative characters. This leads to the odd situation in which the avid reader of blogs, who otherwise doesn’t have much contact with academia, is left with the idea that scientists are generally interested in communicating their research. They also like to publicly dissect their colleagues’ work. And, judging from the photos they post, they seem to spend a huge amount of time travelling. Not to mention that, well, they all like to write. Don’t you also think they all look a little like Brian Cox?
I find this very ironic. Because the nerdy stereotype for all its inaccuracy still seems to fit better. Many of my colleagues do spend 12 hours a day in their office scribbling away equations on paper or looking for a bug in their code. They’d rather die than publicly comment on anything. Their Facebook accounts are deserted. They think a hashtag is a drug, and the only photo on their iPhone shows that instant when the sunlight fell through the curtains just so that it made a perfect diffraction pattern on the wall. They're neither interested nor able to communicate their research to anybody except their close colleagues. And, needless to say, very few of them have even a remote resemblance to Brian Cox.
So the funny situation is that my online friends and contacts think it’s odd if one of my colleagues is not available on any social networking platform. Do they even exist for real? And my colleagues still think I’m odd taking part in all this blogging stuff and so on. I’m not sure at all these worlds are going to converge any time soon.
Sunday, August 19, 2012
Book review: “Why does the world exist?” by Jim Holt
Why Does the World Exist?: An Existential Detective Story
By Jim Holt
Liveright (July 16, 2012)
Yes, I do sometimes wonder why the world exists. I believe however it is not among the questions that I am well suited to find an answer to, and thus my enthusiasm is limited. While I am not disinterested in philosophy in principle, I get easily frustrated with people who use words as if they had any meaning that’s not a human construct, words that are simply ill-defined unless the humans themselves and their language are explained too.
I don’t seem to agree with Max Tegmark on many points, but I agree that you can’t build fundamental insights on words that are empty unless one already has these fundamental insights - or wants to take the anthropic path. In other words, if you want to understand nature, you have to do it with a self-referential language like mathematics, not with English. Thus my conviction that if anybody is to understand the nature of reality, it will be a mathematician or a theoretical physicist.
For these reasons I’d never have bought Jim Holt’s book. I was however offered a free copy by the editor. And, thinking that I should broaden my horizon when it comes to the origin of the universe and the existence or absence of final explanations, I read it.
Holt’s book is essentially a summary of thoughts on the question why there isn’t nothing, covering the history of the question as well as the opinions of currently living thinkers. The narrative of the book is Holt’s own quest for understanding that lead him to visit and talk to several philosophers, physicists and other intellectuals, including Steven Weinberg, Alan Guth and David Deutsch. Many others are mentioned or cited, such as Stephen Hawking, Max Tegmark and Roger Penrose.
The book is very well written, though Holt has a tendency to list exactly what he ate and drank when and where which takes up more space than it deserves. There are more bottles of wine and more deaths on the pages of his book than I had expected, though that is balanced with a good sense of humor. Since Holt arranges his narrative along his travel rather than by topic, the book is sometimes repetitive when he reminds the reader of something (eg the “landscape”) that was already introduced earlier.
I am very impressed by Holt’s interviews. He has clearly done a lot of own thinking about the question. His explanations are open-minded and radiate well-meaning, but he is sharp and often critical. In many cases what he says is much more insightful than what his interview partners have to offer.
Holt’s book is good summary of just how bizarre the world is. The only person quoted in this book who made perfect sense to me is Woody Allen. On the very opposite end is a philosopher named Derek Parfit who hates the “scientizing” of philosophy, and some of his colleagues who believe in “panpsychism”, undeterred by the total lack of scientific evidence.
The reader of the book is also confronted with John Updike who belabors the miserable state of string theory “This whole string theory business… There’s never any evidence, right? There are men spending their whole careers working on a theory of something that might not even exist”, and Alex Vilenkin who has his own definition of “nothing,” which, if you ask me, is a good way to answer the question.
Towards the end of the book Jim Holt also puts forward his own solution to the problem of why there is something rather than nothing. Let me give you a flavor of that proof:
“Reality cannot be perfectly full and perfectly empty at the same time. Nor can it be ethically the best and causally the most orderly at the same time (since the occasional miracle could make reality better). And it certainly can’t be the ethically best and the most evil at the same time.”
Where to even begin? Every second word in this “proof” is undefined. How can one attempt to make an argument along these lines without explaining “ethically best” in terms that are not taken out of the universe whose existence is supposed to be explained? Not to mention that all along his travels, nobody seems to have told Holt that, shockingly, there isn’t only one system of logic, but a whole selection of them.
This book has been very educational for me indeed. Now I know the names of many ism’s that I do not want to know more about. I hate the idea that I’d have missed this book if it hadn't been for the free copy in my mail box. That having been said, to get anything out of this book you need to come with an interest in the question already. Do not expect the book to create this interest. But if you come with this interest, you’ll almost surely enjoy reading it.
Wednesday, August 15, 2012
"Rapid streamlined peer-review" and its results
Contains 0% Quantum Gravity.
"Scientific Reports" is a new open access journal from the Nature Publishing Group, which advertises its "rapid peer review and publication of research... with the support of an external Editorial Board and a streamlined peer-review system." In this journal I recently found this article
"Testing quantum mechanics in non-Minkowski space-time with high power lasers and 4th generation light sources"
B. J. B. Crowley et al
Scientific Reports 2, Article number: 491
Note the small volume number, all fresh and innocent.
It's a quite interesting article that calculates the cross-section of photons scattering off electrons that are collectively accelerated by a high intensity laser. The possibility to maybe test Unruh radiation in a similar fashion has lately drawn some attention, see eg this paper. But this is explicitly not the setup that the authors of the present paper are after, as they write themselves in the text.
What is remarkable about this paper is the amount of misleading and wrong statements about exactly what it is they are testing and what not. In the title it says they are testing "quantum mechanics in non-Minkowski space-time." What might that mean, I was wondering?
Initially I thought it's another test of space-time non-commutativity, which is why I read the paper in the first place. The first sentence of the abstract reads "A common misperception of quantum gravity is that it requires accessing energies up to the Planck scale of 10^19 GeV, which is unattainable for any conceivable particle collider." Two sentences later, the authors no longer speak of quantum gravity but "a semiclassical extension of quantum mechanics ... under the assumption of weak gravity." So what's non-Minkowski then? And where's quantum gravity?
What they do in fact in the paper is that they calculate the effect of the acceleration on the electrons and argue that via the equivalence principle this should be equivalent to testing the influence of gravity. (At least locally, though there's not much elaboration on this point in the paper.) Now, strictly speaking we do of course never make any experiment in Minkowski space - after all we sit in a gravitational field. In the same sense we have countless tests of the semi-classical limit of Einstein's field equations. So I read and I am still wondering, what is it that they test?
In the first paragraph then the reader learns that the Newton-Schrödinger equation (which we discussed here) is necessary "to obtain a consistent description of experimental findings" with a reference to Carlip's paper and a paper by Penrose on state reduction. Clearly a misunderstanding, or maybe they didn't actually read the papers they cite. They also don't actually use the Schrödinger-Newton equation however - as I said, there isn't actually a gravitational field in their setup. "We do not concern ourselves with the quantized nature of the gravitational field itself." Fine, no need to quantize what's not there.
Then on page two the reader learns "Our goal is to design an experiment where it may be possible to test some aspects of general relativity..." Okay, so now they're testing neither quantum mechanics nor quantum gravity, nor the Schrödinger-Newton equation, nor semi-classical gravity, but general relativity? Though, since there's no curvature involved, it would be more like testing the equivalence principle, no?
But let's move on. We come across the following sentence: "[T]he most prominent manifestation of quantum gravity is that black holes radiate energy at the universal temperature - the Hawking temperature." Leaving aside that one can debate how "prominent" an effect black hole evaporation is, it's also manifestly wrong. Black hole evaporation is an effect of quantum field theory in curved spacetime. It's not a quantum gravitational effect, that's the exact reason why it's been dissected since decades. The authors then go on to talk about Unruh radiation and make an estimate showing that they are not testing this regime.
Then follows the actual calculation, which, as I said, is in principle interesting. But at the end of the calculation we are then informed that this "provid[es], for the first time, a direct way to determine the validity of the models of quantum mechanics in curved space-time, and the specific details of the coupling between classical and quantized fields." Except that there isn't actually any curved space-time in this experiment, unless they mean the gravitational field of the Earth. And the coupling to this has been tested for example in this experiment (and in some follow-up experiments to this), which the authors don't seem to be aware of or at least don't cite. Again, at the very best I think they're proposing to test the equivalence principle.
In the closing paragraph they then completely discard the important qualifier that the space-time is not actually curved and that it's in the best case an indirect test by claiming that, on the contrary, "[T]he scientific case described in this letter is very compelling and our estimates indicate that a direct test of the semiclassical theory of quantum mechanics in curved space-time will become possible." Emphasis mine.
So, let's see what we have. We started with a test of quantum mechanics in non-Minkowski space, came across some irrelevant mentions of quantum gravity, a misplaced reference to the Schrödinger-Newton equation, testing general relativity in the lab, and further irrelevant and also wrong comments about quantum gravity, and ended up with direct tests of quantum mechanics in curved space-time. All by looking at a bunch of electrons accelerated in a laser beam. Misleading doesn't even begin to capture it. I can't say I'm very convinced by the quality standard of this new journal.
Sunday, August 12, 2012
What is transformative research and why do we need it?
Since 2007, the US-American National Science Foundation (NSF) has an explicit call for “transformative research” in their funding criteria. Transformative research, according to the NSF, is the type of research that can “radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education.” The European Research Council (ERC) calls it “frontier research” and explains that this frontier research is “at the forefront of creating new knowledge[. It] is an intrinsically risky endeavour that involves the pursuit of questions without regard for established disciplinary boundaries or national borders.”
The best way to understand this type of research is that it’s of high risk with a potential high payoff. It’s the type of blue-sky research that is very unlikely to be pursued in for-profit organizations because it might have no tangible outcome for decades. Since one doesn’t actually know if some research has a high payoff before it’s been done, one should better call it “Potentially Transformative Research.”
Why do we need it?
If you think of science as an incremental, slow push on the boundaries of knowledge, then transformative research is a jump across the border in the hope of landing on safe ground. Most likely, you’ll jump and drown, or be eaten by dragons. But if you’re lucky and, let’s not forget about that, smart, you might discover a whole new field of science and noticeably redefine the boundaries of knowledge.
The difficulty is of course to find out if the potential benefit justifies the risk. So there needs to be an assessment of both, and a weighting of them against each other.
Most of science is not transformative. Science is, by function, conservative. It conserves the accumulated knowledge and defends it. We need some transformative research to overcome this conservatism, otherwise we’ll get stuck. That’s why the NSF and ERC acknowledge the necessity of high-risk, high-payoff research.
But while it is clear that we need some of it, it’s not a priori clear we need more of it than we already have. Not all research should aspire to be transformative. How do we know we’re too conservative?
The only way to reliably know is to take lots of data over a long time and try to understand where the optimal balance lies. Unfortunately, the type of payoff that we’re talking about might take decades to centuries to appear, so that is, at present, not very feasible.
In lack of this the only thing we can do is to find a good argument for how to move towards the optimal balance.
One way you can do this is with measures for scientific success. I think this is the wrong approach. It’s like setting prices in a market economy by calculating them from the product’s properties and future plans. It’s not a good way to aggregate information and there’s no reason to trust whoever comes up with the formula for the success measure knows what they’re doing.
The other way is to enable a natural optimization process, much like the free market prices goods. Just that in science the goal isn’t to price goods but to distribute researchers over research projects. How many people should optimally work on which research so their skills are used efficiently and progress is as fast as possible? Most scientists have the aspiration to make good use of their skills and to contribute to progress, so the only thing we need to do is to let them follow their interests.
Yes, that’s right. I’m saying the best we can do is trust the experts to find out themselves where their skills are of best use. Of course one needs to provide a useful infrastructure for this to work. Note that this does not mean everybody necessarily works on the topic they’re most interested in, because the more people work on a topic the smaller the chances become that there are significant discoveries for each of them to be made.
The tragedy is of course that this is nowhere like science is organized today. Scientists are not free to choose on which problem to use their skills. Instead, they are subject to all sorts of pressures which prevent the optimal distribution of researchers over projects.
The most obvious pressures are financial and time pressure. Short-term contracts put a large incentive on short-term thinking. Another problem is the difficulty for researchers to change topics, which has the effect that there is a large (generational) time-lag in the population of research fields. Both of these problems cause a trend towards conservative rather than transformative research. Worse: They cause a trend towards conservative rather than transformative thinking and, by selection, a too small ratio of transformative to conservative researchers. This is why we have reason to believe the fraction of transformative research and researchers is presently smaller than optimal.
How can we support potentially transformative research?
The right way to solve this problem is to reduce external pressure on researchers and to ensure the system can self-optimize efficiently. But this is difficult to realize. If that is not possible, one can still try to promote transformative research by other means in the hope of coming closer to the optimal balance. How can one do this?
The first thing that comes to mind is to write transformative research explicitly into the goals of the funding agencies, encourage researchers to propose such projects, and peers to review them favorably. This most likely will not work very well because it doesn’t change anything about the too conservative communities. If you randomly sample a peer review group for a project, you’re more likely to get conservative opinions just because they’re more common. As a result, transformative research projects are unlikely to be reviewed favorably. It doesn’t matter if you tell people that transformative research is desirable, because they still have to evaluate if the high risk justifies the potential high payoff. And assessment of tolerable risk is subjective.
So what can be done?
One thing that can be done is to take a very small sample of reviewers, because the smaller the sample the larger the chance of a statistical fluctuation. Unfortunately, this also increases the risk that nonsense will go through because the reviewers just weren’t in the mood to actually read the proposal. The other thing you can do is to pre-select researchers so you have a subsample with a higher ratio of transformative to conservative researchers.
This is essentially what FQXi is doing. And, in their research area, they’re doing remarkably well actually. That is to say, if I look at the projects that they fund, I think most of it won’t lead anywhere. And that’s how it should be. On the downside, it’s all short-term projects. The NSF is also trying to exploit preselection in a different form in their new EAGER and CREATIV funding mechanism that are not at all assessed by peers but exclusively by NSF staff. In this case the NSF staff is the preselected group. However, I am afraid that the group might be too small to be able to accurately assess the scientific risk. Time will tell.
Putting a focus on transformative research is very difficult for institutions with a local presence. That’s because when it comes to hire colleagues who you have to get along with, people naturally tend to select those who fit in, both in type of research and in type of personality. This isn’t necessarily a bad thing as it benefits collaborations, but it can promote homogeneity and lead to “more of the same” research. It takes a constant effort to avoid this trend. It also takes courage and a long-term vision to go for the high-risk, high payoff research(er), and not many institutions can afford this courage. So here is again the financial pressure that hinders leaps of progress just because of lacking institutional funding.
It doesn’t help that during the last weeks I had to read that my colleagues in basic research in Canada, the UK and also the USA are looking forward to severe budget cuts:
“Of paramount concern for basic scientists [in Canada] is the elimination of the Can$25-million (US$24.6-million) RTI, administered by the Natural Sciences and Engineering Research Council of Canada (NSERC), which funds equipment purchases of Can$7,000–150,000. An accompanying Can$36-million Major Resources Support Program, which funds operations at dozens of experimental-research facilities, will also be axed.” [Source: Nature]
“Hanging over the effective decrease in support proposed by the House of Representatives last week is the ‘sequester’, a pre-programmed budget cut that research advocates say would starve US science-funding agencies.” [Source: Nature]
“[The] Engineering and Physical Sciences Research Council (EPSRC) [is] the government body that holds the biggest public purse for physics, mathematics and engineering research in the United Kingdom. Facing a growing cash squeeze and pressure from the government to demonstrate the economic benefits of research, in 2009 the council's chief executive, David Delpy, embarked on a series of controversial reforms… The changes incensed many physical scientists, who protested that the policy to blacklist grant applicants was draconian. They complained that the EPSRC's decision to exert more control over the fields it funds risked sidelining peer review and would favour short-term, applied research over curiosity-driven, blue-skies work in a way that would be detrimental to British science.” [Source:Nature]
So now more than ever we should make sure that investments in basic research are used efficiently. And one of the most promising ways to do this is presently to enable more potentially transformative research.
Thursday, August 09, 2012
Thinking, Fast and Slow
By Daniel Kahneman
Farrar, Straus and Giroux (October 25, 2011)
The book is well written, reads smoothly, is well organized, and thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman’s two most influential papers that contain somewhat more details than the summary in the text. He narrates along the story of his own research projects and how they came into being which I found a little tiresome after he elaborated on the third dramatic insight that he had about his own cognitive bias. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come by that way.
In summary, it’s a well-written and thoroughly useful book that is interesting for everybody with an interest in human decision-making and its shortcomings. I'd give this book four out of five stars.
Tuesday, August 07, 2012
Why does the baby cry? Fact sheet.
Gloria at 2 months, crying.
Sunday, August 05, 2012
Erdös and amphetamines: check
Some weeks ago I wrote a review on Jonah Lehrer's book "Imagine," in which I complained about missing references. Now that it turns out Lehrer fabricated quotes and facts on various occasions (see eg here and here), I recalled that I meant to look up a reference on an interesting story he told, that the famous mathematician Paul Erdös kept up his productivity by taking benzedrine. Benzedrine belongs to the amphetamines, also known as speed. Lehrer did not quote any source for this story.
So I did look it up, and it turns out it's true. In Paul Hoffman's biography of Erdös one finds:
Erdös first did mathematics at the age of three, but for the last twenty-five years of his life, since the death of his mother, he put in nineteen-hour days, keeping himself fortified with 10 to 20 milligrams of Benzedrine or Ritalin, strong espresso, and caffeine tablets. "A mathematician," Erdös was fond of saying, "is a machine for turning coffee into theorems." When friends urged him to slow down, he always had the same response: "There'll be plenty of time to rest in the grave."
(You can read chapter 1 from the book, which contains this paragraph, here).
Benzedrine was available on prescription in the USA during this time. Erdös lived to the age of 83. During his lifetime, he wrote or co-authored 1,475 academic papers.
Lehrer also relates the following story in his book
Ron Graham, a friend and fellow mathematician, once bet Erdos five hundred dollars that he couldn't abstain from amphetamines for thirty days. Erdos won the wager but complained that the progress of mathematicians had been set back by a month: "Before, when I looked at a piece of blank paper, my mind was filled with ideas," he complained. "Now all I see is a blank piece of paper."
(Omitted umlauts are Lehrer's, not mine.) Lehrer does not mention Erdös was originally prescribed benzedrine to treat depression after his mother's death. I'm not sure exactly what the origin of this story is. It is mentioned in a slightly different wording in this PDF by Joshua Hill:
Erdős's friends worried about his drug use, and in 1979 Graham bet Erdős $500 that he couldn't stop taking amphetamines for a month. Erdős accepted, and went cold turkey for a complete month. Erdős's comment at the end of the month was "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month." He then immediately started taking amphetamines again.
Hill's article is not quoted by Lehrer, and there's no reference in Hill's article. It also seems to go back to Paul Hoffman's book (same chapter).
(Note added: I revised the above paragraph, because I hadn't originally seen it in Hoffman's book.)
Partly related: Calculate your Erdős number here, mine is 4.
Friday, August 03, 2012
Lara and Gloria are presently very difficult. They have learned to climb the chairs and upwards from there; I constantly have to pick them off the furniture. Yesterday, I turned my back on them for a second, and when I looked again Lara was sitting on the table, happily pulling a string of Kleenex out of the box, while Gloria was moving away the chair Lara had used to climb up.
During the last month, the girls have added a few more words to their vocabulary. The one that's most obvious to understand is "lallelalle," which is supposed to mean "empty", and usually a message to me to refill the apple juice. Gloria also has found a liking in the word "Haar" (hair), and she's been saying "Goya" for a while, which I believe means "Gloria". Or maybe yogurt. They both can identify most body parts if you name them. Saying "feet" will make them grab their feet, "nose" will have them point at their nose, and so on. If Gloria wants to make a joke, she'll go and grab her sister's nose instead. Gloria also announces that she needs a new diaper by padding her behind, alas after the fact.
I meanwhile am stuck in proposal writing again. The organization for the conference in October and the program in November is going nicely, and I'm very much looking forward to both events. My recent paper was accepted for publication in Foundations of Physics, and I've wrapped up another project that had been in my drawer for a while. Besides this, I've spent some time reading up the history of Nordita, which is quite interesting actually, maybe I'll have a post on this at some point.
I finally said good bye to my BlackBerry and now have an iPhone, which works so amazingly smoothly I'm deeply impressed.
Below a little video of the girls that I took the other day. YouTube is offering a fix for shaky videos, which is why you might see the borders moving around.
I hope your summer is going nicely and that you have some time to relax!
Wednesday, August 01, 2012
Letter of recommendation 2.0
I am currently reading Daniel Kahneman’s book “Thinking, fast and slow,” which summarizes a truly amazing amount of studies. Among many other cognitive biases, Kahneman explains that it is difficult for people to accept that often algorithms based on statistical data produce better predictions than experts. This is difficult to accept even when one is shown evidence that the algorithm is better. He cites many examples for that, among them forecasting the future success of military personnel, quality of wine, or treatment of patients.
The reason, Kahneman explains, is that humans are not as efficient at screening and aggregating data as software. Humans are prone to miss details, especially if the data is noisy, they get tired or fall for various cognitive biases in their interpretation of data. Generally, the human brain does not effortlessly engage in Bayesian inference. In combination with it trying to save energy and effort, this leads to mistakes. Humans are especially bad at making summary judgements of complex information, Kahneman writes, while at the same time being overly confident about the accuracy of their judgement. One of his examples is: “Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions.”
Interestingly however, Kahneman also cites evidence that expert intuition can be very valuable, provided the expert’s judgement is about a situation where learning from experience is possible. (Expert judgement is an illusion when a data series is entirely uncorrelated.) He thus suggests that judgements should be based on an analysis of statistical data from past performance, combined with expert intuition. We should overcome our disliking of statistical measures, he writes “to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments” (when prediction is difficult due to a large amount of relevant factors).
This made me question my own objections to using measures for scientific success, as scientific success is of the type of prediction that is very difficult to make because luck plays a big role. Part of my disliking arguably stems from a general unease of leaving decisions about people’s future to a computer. While that is the case, and probably part of the reason I don’t like the idea, it’s not the actual problem I have belabored in my earlier blogposts. For me the main problem with using measures for scientific success is that I’d like to see evidence they are actually working, and do not adversely affect research. I am worried particularly that a widely used measure for scientific success would literally redefine what we mean by success in the first place. A small mistake, implemented and streamlined globally, could in this way dramatically slow down progress.
But I am wondering now whether, based on what Kahneman writes, I have to conclude that in addition to asking for letters of recommendation (the “expert’s intuition”) it would be valuable to judge researchers’ past performance on a point scale. Consider that you’d be asked to fill out a questionnaire for each of your students and postdocs, ranking him or her from 0 to 5 for those characteristics typically named in letters: technical skills, independence, creativity, and so on, and also add your confidence on these judgements. You could update your scores if your opinion changes. What a hiring committee would do with these scores is a different question entirely.
The benefit of this would be the assembly of a data base needed to discover predictors for future performance, if they exist. The difficulty is that the experts in question are rarely offering a neutral judgement; many have a personal interest in seeing their students succeed, so there needs to be some incentive for accuracy. The risk would be that such a predictor might become a self-fulfilling prophecy. At least until a reality check documents that actually, despite all the honors, prices and awards, very little has happened in terms of actual progress.
Either way, now that I think about it, such a ranking would be temptingly useful for hiring committees to sort through large numbers of applicants quickly. I wouldn’t be surprised if somebody tries this rather sooner than later. Would you welcome it? |
452048081fd4d6d7 | 2017 Vol. 41, No. 4
Particles and fields
Chiral corrections to the 1^{-+} exotic meson mass
Bin Zhou, Zhi-Feng Sun, Xiang Liu, Shi-Lin Zhu
2017, 41(4): 043101. doi: 10.1088/1674-1137/41/4/043101
We first construct the effective chiral Lagrangians for the 1^{-+} exotic mesons. With the infrared regularization scheme, we derive the one-loop infrared singular chiral corrections to the π_1(1600) mass explicitly. We investigate the variation of the different chiral corrections with the pion mass under two schemes. Hopefully, the explicit non-analytical chiral structures will be helpful for the chiral extrapolation of lattice data from the dynamical lattice QCD simulation of either the exotic light hybrid meson or the tetraquark state.
Implications of fermionic dark matter on recent neutrino oscillation data
Shivaramakrishna Singirala
2017, 41(4): 043102. doi: 10.1088/1674-1137/41/4/043102
We investigate flavor phenomenology and dark matter in the context of the scotogenic model. In this model, the neutrino masses are generated through radiative corrections at the one-loop level. Considering the neutrino mixing matrix to be of tri-bimaximal form with additional perturbations to accommodate the recently observed non-zero value of the reactor mixing angle θ_{13}, we obtain the relation between various neutrino oscillation parameters and the model parameters. Working in a degenerate heavy neutrino mass spectrum, we obtain light neutrino masses obeying the normal hierarchy and also study the relic abundance of fermionic dark matter candidates, including coannihilation effects. A viable parameter space is thus obtained, consistent with neutrino oscillation data, relic abundance and various lepton flavor violating decays such as l_α → l_β γ and l_α → 3l_β.
Rare Higgs three body decay induced by top-Higgs FCNC coupling in the littlest Higgs model with T-parity
Bing-Fang Yang, Zhi-Yong Liu, Ning Liu
2017, 41(4): 043103. doi: 10.1088/1674-1137/41/4/043103
Motivated by the search for flavor-changing neutral current (FCNC) top quark decays at the LHC, we calculate the rare Higgs three body decay H→Wbc induced by top-Higgs FCNC coupling in the littlest Higgs model with T-parity (LHT). We find that the branching ratio of H→Wbc in the LHT model can reach O(10^{-7}) in the allowed parameter space.
Some new symmetric relations and prediction of left- and right-handed neutrino masses using Koide's relation
Yong-Chang Huang, Syeda Tehreem Iqbal, Zhen Lei, Wen-Yu Wang
2017, 41(4): 043104. doi: 10.1088/1674-1137/41/4/043104
The masses of the three generations of charged leptons are known to completely satisfy Koide's mass relation, but the question remains of whether such a relation exists for neutrinos. In this paper, by considering the seesaw mechanism as the mechanism generating tiny neutrino masses, we show how neutrinos satisfy Koide's mass relation, on the basis of which we systematically give exact values of both left- and right-handed neutrino masses.
Analytic solutions in the acoustic black hole analogue of the conical Kerr metric
H. S. Vieira
2017, 41(4): 043105. doi: 10.1088/1674-1137/41/4/043105
We study the sound perturbation of a rotating acoustic black hole in the presence of a disclination. The radial part of the massless Klein-Gordon equation is written into a Heun form, and its analytical solution is obtained. These solutions have an explicit dependence on the parameter of the disclination. We obtain the exact Hawking-Unruh radiation spectrum.
125 GeV Higgs decay with lepton flavor violation in the μνSSM
Hai-Bin Zhang, Tai-Fu Feng, Shu-Min Zhao, Yu-Li Yan, Fei Sun
2017, 41(4): 043106. doi: 10.1088/1674-1137/41/4/043106
Recently, the CMS and ATLAS collaborations have reported direct searches for the 125 GeV Higgs decay with lepton flavor violation, h→μτ. In this work, we analyze the signal of the lepton flavour violating (LFV) Higgs decay h→μτ in the μ from ν Supersymmetric Standard Model (μνSSM) with slepton flavor mixing. Simultaneously, we consider the constraints from the LFV decay τ →μ γ, the muon anomalous magnetic dipole moment and the lightest Higgs mass around 125 GeV.
Nuclear physics
Symmetry energy and experimentally observed cold fragments in intermediate heavy-ion collisions
Su-Ya-La-Tu Zhang, Mei-Rong Huang, R. Wada, Xing-Quan Liu, Wei-Ping Lin, Jian-Song Wang
2017, 41(4): 044001. doi: 10.1088/1674-1137/41/4/044001
An attempt is made to study the symmetry energy at the time of primary fragment formation from the experimentally observed cold fragments for a neutron-rich system of 64Ni+9Be at 140 MeV/nucleon, utilizing the recent finding that the excitation energy becomes lower for more neutron-rich isotopes with a given Z value. The extracted asym/T values from the cold fragments, based on the Modified Fisher Model (MFM), are compared to those from the primary fragments of the antisymmetrized molecular dynamics (AMD) simulation and become consistent with the simulation when the I=N-Z value becomes larger, indicating that the excitation energy of these neutron-rich isotopes is indeed lower.
Determination of neutron capture cross sections of 232Th at 14.1 MeV and 14.8 MeV using the neutron activation method
Chang-Lin Lan, Yi Zhang, Tao Lv, Bao-Lin Xie, Meng Peng, Ze-En Yao, Jin-Gen Chen, Xiang-Zhong Kong
2017, 41(4): 044002. doi: 10.1088/1674-1137/41/4/044002
The 232Th(n,γ)233Th neutron capture reaction cross sections were measured at average neutron energies of 14.1 MeV and 14.8 MeV using the activation method. The neutron flux was determined using the monitor reaction 27Al(n,α)24Na. The induced gamma-ray activities were measured using a low background gamma ray spectrometer equipped with a high resolution HPGe detector. The experimentally determined cross sections were compared with the data in the literature, and the evaluated data of ENDF/B-VII.1, JENDL-4.0u+, and CENDL-3.1. The excitation functions of the 232Th(n,γ)233Th reaction were also calculated theoretically using the TALYS-1.6 computer code.
Collective states and shape competition in 126Te
Liu-Chun He, Yun Zheng, Li-Hua Zhu, Hai-Liang Ma, Xiao-Guang Wu, Chuang-Ye He, Guang-Sheng Li, Lie-Lin Wang, Xin Hao, Ying Liu, Xue-Qin Li, Bo Pan, Zhong-Yu Li, Huai-Bo Ding
2017, 41(4): 044003. doi: 10.1088/1674-1137/41/4/044003
High-spin states in 126Te have been investigated by using in-beam γ ray spectroscopy with the 124Sn(7Li, 1p4n)126Te reaction at a beam energy of 48 MeV. The previously known level scheme has been enriched, and a new negative-parity sequence has been established. The yrast positive-parity band shows a shape change between triaxial shape and collective oblate shape as a function of spin. In particular, three competitive minima appear in the potential energy surface for the Iπ=8+ states, with one aligned state at γ=-120° and two triaxial states at γ~30° and -45°, respectively. The signature splitting behavior of the negative-parity band is discussed. The shape change with increasing angular momentum and the signature splitting can be interpreted well in terms of the Cranked Nilsson-Strutinsky-Bogoliubov and Cranked Nilsson-Strutinsky model calculations.
Simulation of the fission dynamics of the excited compound nuclei 206Po and 168Yb produced in the reactions 12C+194Pt and 18O+150Sm
H. Eslamizadeh, F. Bagheri
2017, 41(4): 044101. doi: 10.1088/1674-1137/41/4/044101
A two-dimensional dynamical model based on the Langevin equation was used to study the fission dynamics of the compound nuclei 206Po and 168Yb produced in the reactions 12C+194Pt and 18O+150Sm, respectively. The fission cross section and average pre-scission neutron multiplicity were calculated for the compound nuclei 206Po and 168Yb, and the results of the calculations were compared with the experimental data. The elongation coordinate was used as the first dimension and the projection of the total spin of the compound nucleus onto the symmetry axis, K, was considered as the second dimension in the Langevin dynamical calculations. In the two-dimensional calculations, a constant dissipation coefficient of K and a non-constant dissipation coefficient have been used to reproduce the above-mentioned experimental data. It is shown that the two-dimensional Langevin equation can satisfactorily reproduce the fission cross section and average pre-scission neutron multiplicity for the compound nuclei 206Po and 168Yb by using constant values of the dissipation coefficient of K equal to γK=0.18(MeV zs)-1/2 and γK=0.20(MeV zs)-1/2 for the compound nuclei 206Po and 168Yb, respectively.
Yield ratios and directed flows of light fragments from reactions induced by neutron-rich nuclei at intermediate energy
2017, 41(4): 044102. doi: 10.1088/1674-1137/41/4/044102
The yield ratios of neutron/proton and 3H/3He and the directed flow per nucleon for these projectile-like fragments at large impact parameters are studied for 50Ca+40Ca and 50Cr+40Ca for comparison at 50 MeV/u using the isospin-dependent quantum molecular dynamics (IQMD) model. It is found that the yield ratios and the directed flows per nucleon are different for reactions induced by the neutron-rich nucleus 50Ca and the stable isobaric nucleus 50Cr, and depend on the hardness of the EOS. The ratios of neutron/proton and 3H/3He and the difference of directed flow per nucleon of neutron-proton are suggested to be possible observables to investigate the isospin effects.
Correlation between quarter-point angle and nuclear radius
Wei-Hu Ma, Jian-Song Wang, S. Mukherjee, Qi Wang, D. Patel, Yan-Yun Yang, Jun-Bing Ma, Peng Ma, Shi-Lun Jin, Zhen Bai, Xing-Quan Liu
2017, 41(4): 044103. doi: 10.1088/1674-1137/41/4/044103
The correlation between quarter-point angle of elastic scattering and nuclear matter radius is studied systematically. Various phenomenological formulae with parameters for nuclear radius are adopted and compared by fitting the experimental data of quarter point angle extracted from nuclear elastic scattering reaction systems. A parameterized formula related to binding energy is recommended, which gives a good reproduction of nuclear matter radii of halo nuclei. It indicates that the quarter-point angle of elastic scattering is quite sensitive to the nuclear matter radius and can be used to extract the nuclear matter radius.
Exploration of resonances by using complex momentum representation
Ya-Juan Tian, Tai-Hua Heng, Zhong-Ming Niu, Quan Liu, Jian-You Guo
2017, 41(4): 044104. doi: 10.1088/1674-1137/41/4/044104
Resonance research is a hot topic in nuclear physics, and many methods have been developed for resonances. In this paper, we explore resonances by solving the Schrödinger equation in complex momentum representation, in which the bound states and resonant states are separated completely from the continuum and exposed clearly in the complex momentum plane. We have checked the convergence of the calculations on the grid numbers of the Gauss-Hermite quadrature and the Gauss-Legendre quadrature, and the dependence on the contour of momentum integration. Satisfactory results are obtained. 17O is chosen as an example, and we have calculated the bound and resonant states to be in excellent agreement with those calculated in the coordinate representation.
New empirical formula for (γ, n) reaction cross section near GDR peak for elements with Z≥60
Rajnikant Makwana, S. Mukherjee, Jian-Song Wang, Zhi-Qiang Chen
2017, 41(4): 044105. doi: 10.1088/1674-1137/41/4/044105
A new empirical formula has been developed that describes the (γ, n) nuclear reaction cross sections for isotopes with Z≥60. The results were supported by calculations using the TALYS-1.6 and EMPIRE-3.2.2 nuclear modular codes. The energy region for incident photon energy has been selected near the giant dipole resonance (GDR) peak energy. The evaluated empirical data were compared with available data in the experimental data library EXFOR. The data produced using TALYS-1.6 and EMPIRE-3.2.2 are in good agreement with experimental data. We have tested and presented the reproducibility of the present new empirical formula. We observe that the reproducibility of the new empirical formula near the GDR peak energy is in good agreement with the experimental data and shows a remarkable dependency on key nuclear properties: the neutron, proton and atomic number of the nuclei. The behavior of nuclei near the GDR peak energy and the dependency of the GDR peak on the isotopic nature are predicted. An effort has been made to explain the deformation of the GDR peak in (γ, n) nuclear reaction cross sections for some isotopes, which could not be reproduced with TALYS-1.6 and EMPIRE-3.2.2. The evaluated data have been presented for the isotopes 180W, 183W, 202Pb, 203Pb, 204Pb, 205Pb, 231Pa, 232U, 237U and 239Pu, for which there are no previous measurements.
Particle and Nuclear Astrophysics and Cosmology
Polarization of gamma-ray burst afterglows in the synchrotron self-Compton process from a highly relativistic jet
Hai-Nan Lin, Xin Li, Zhe Chang
2017, 41(4): 045101. doi: 10.1088/1674-1137/41/4/045101
Linear polarization has been observed in both the prompt phase and afterglow of some bright gamma-ray bursts (GRBs). Polarization in the prompt phase spans a wide range, and may be as high as ≳50%. In the afterglow phase, however, it is usually below 10%. According to the standard fireball model, GRBs are produced by synchrotron radiation and Compton scattering process in a highly relativistic jet ejected from the central engine. It is widely accepted that prompt emissions occur in the internal shock when shells with different velocities collide with each other, and the magnetic field advected by the jet from the central engine can be ordered on a large scale. On the other hand, afterglows are often assumed to occur in the external shock when the jet collides with interstellar medium, and the magnetic field produced by the shock through, for example, Weibel instability, is possibly random. In this paper, we calculate the polarization properties of the synchrotron self-Compton process from a highly relativistic jet, in which the magnetic field is randomly distributed in the shock plane. We also consider the generalized situation where a uniform magnetic component perpendicular to the shock plane is superposed on the random magnetic component. We show that it is difficult for the polarization to be larger than 10% if the seed electrons are isotropic in the jet frame. This may account for the observed upper limit of polarization in the afterglow phase of GRBs. In addition, if the random and uniform magnetic components decay with time at different speeds, then the polarization angle may change 90° during the temporal evolution.
Neutron stars including the effects of chaotic magnetic fields and anomalous magnetic moments
Fei Wu, Chen Wu, Zhong-Zhou Ren
2017, 41(4): 045102. doi: 10.1088/1674-1137/41/4/045102
The relativistic mean field (RMF) FSUGold model extended to include hyperons is employed to study the properties of neutron stars with strong magnetic fields. The chaotic magnetic field approximation is utilized. The effect of anomalous magnetic moments (AMMs) is also investigated. It is shown that the equation of state (EOS) of neutron star matter is stiffened by the presence of the magnetic field, which increases the maximum mass of a neutron star by around 6%. The AMMs only have a small influence on the EOS of neutron star matter, and increase the maximum mass of a neutron star by 0.02Msun. Neutral particles are spin polarized due to the presence of the AMMs.
Hawking radiation and entropy of a black hole in Lovelock-Born-Infeld gravity from the quantum tunneling approach
Gu-Qiang Li
2017, 41(4): 045103. doi: 10.1088/1674-1137/41/4/045103
Constraints on dark matter annihilation and decay from the isotropic gamma-ray background
2017, 41(4): 045104. doi: 10.1088/1674-1137/41/4/045104
We study the constraints on dark matter (DM) annihilation/decay from the Fermi-LAT Isotropic Gamma-Ray Background (IGRB) observation. We consider the contributions from both extragalactic and galactic DM components. For DM annihilation, the evolution of extragalactic DM halos is taken into account. We find that the IGRB annihilation constraints under some DM subhalo models can be comparable to those derived from the observations of dwarf spheroidal galaxies and CMB. We also use the IGRB results to constrain the parameter regions accounting for the latest AMS-02 electron-positron anomaly. We find that the majority of DM annihilation/decay channels are strongly disfavored by the latest Fermi-LAT IGRB observation; only DM decays to μ+μ- and 4μ channels may be valid.
Detectors, Related Electronics and Experimental Methods
Study of a sealed high gas pressure THGEM detector and response of alpha particle spectra
2017, 41(4): 046001. doi: 10.1088/1674-1137/41/4/046001
Study of CdMoO4 crystal for a neutrinoless double beta decay experiment with 116Cd and 100Mo nuclides
2017, 41(4): 046002. doi: 10.1088/1674-1137/41/4/046002
The scintillation properties of a CdMoO4 crystal have been investigated experimentally. The fluorescence yields and decay times measured from 22 K to 300 K demonstrate that CdMoO4 crystal is a good candidate for an absorber for a bolometer readout, for both heat and scintillation signals. The results from Monte Carlo studies, taking the backgrounds from 2ν2β decay of 100Mo (116Cd) and the internal trace nuclides 214Bi and 208Tl into account, show that the expected sensitivity of a CdMoO4 bolometer for neutrinoless double beta decay experiments with an exposure of 100 kg·years is one order of magnitude higher than the current limits on the 0νββ half-lives T1/2 of 100Mo and 116Cd.
A highly pixelated CdZnTe detector based on Topmetal-II- sensor
2017, 41(4): 046003. doi: 10.1088/1674-1137/41/4/046003
Topmetal-II- is a low noise CMOS pixel direct charge sensor with a pitch of 83 μm. CdZnTe is an excellent semiconductor material for radiation detection. The combination of CdZnTe and the sensor makes it possible to build a detector with high spatial resolution. In our experiments, an epoxy adhesive is used as the conductive medium to connect the sensor and cadmium zinc telluride (CdZnTe). The diffusion coefficient and charge efficiency of electrons are measured at a low bias voltage of -2 V, and the image of a single alpha particle is clear with a reasonable spatial resolution. A detector with such a structure has the potential to be applied in X-ray imaging systems with further improvements of the sensor.
Measurement of the transfer function for a spoke cavity of C-ADS Injector I
Xue-Fang Huang, Yi Sun, Guang-Wei Wang, Shao-Zhe Wang, Xiang Zheng, Qun-Yao Wang, Rong Liu, Hai-Ying Lin, Mu-Yuan Wang
2017, 41(4): 047001. doi: 10.1088/1674-1137/41/4/047001
The spoke cavities mounted in the China Accelerator Driven sub-critical System (C-ADS) have high quality factor (Q) and very small bandwidth, making them very sensitive to mechanical perturbations, whether external or self-induced. The transfer function is used to characterize the response of the cavity eigenfrequency to the perturbations. This paper describes a method to measure the transfer function of a spoke cavity. The measured Lorentz transfer function shows there are 206 Hz and 311 Hz mechanical eigenmodes excited by Lorentz force in the cavity of C-ADS, and the measured piezo fast tuner transfer function shows there are 12 mechanical eigenmodes from 0 to 500 Hz. According to these results, some effective measures have been taken to weaken the influence of helium pressure fluctuation, avoid mechanical resonances and improve the reliability of the RF system.
Electron bunch train excited higher-order modes in a superconducting RF cavity
2017, 41(4): 047002. doi: 10.1088/1674-1137/41/4/047002
Higher-order mode (HOM) based intra-cavity beam diagnostics has been proved effective and convenient in superconducting radio-frequency (SRF) accelerators. Our recent research shows that the beam harmonics in the bunch train excited HOM spectrum, which have much higher signal-to-noise ratio than the intrinsic HOM peaks, may also be useful for beam diagnostics. In this paper, we will present our study on bunch train excited HOMs, including a theoretical model and recent experiments carried out based on the DC-SRF photoinjector and SRF linac at Peking University.
Microfocus small-angle X-ray scattering at SSRF BL16B1
Wen-Qiang Hua, Yu-Zhu Wang, Ping Zhou, Tao Hu, Xiu-Hong Li, Feng-Gang Bian, Jie Wang
2017, 41(4): 048001. doi: 10.1088/1674-1137/41/4/048001
Fast and accurate generation method of PSF-based system matrix for PET reconstruction
2017, 41(4): 048201. doi: 10.1088/1674-1137/41/4/048201
This work investigates the positional single photon incidence response (P-SPIR) to provide an accurate point spread function (PSF)-contained system matrix and its incorporation within the image reconstruction framework. Based on the Geant4 Application for Emission Tomography (GATE) simulation, P-SPIR theory takes both incidence angle and incidence position of the gamma photon into account during crystal subdivision, instead of only taking the former into account, as in single photon incidence response (SPIR). The response distribution obtained in this fashion was validated using Monte Carlo simulations. In addition, two-block penetration and normalization of the response probability are introduced to improve the accuracy of the PSF. With the incorporation of the PSF, the homogenization model is then analyzed to calculate the spread distribution of each line-of-response (LOR). A primate PET scanner, Eplus-260, developed by the Institute of High Energy Physics, Chinese Academy of Sciences (IHEP), was employed to evaluate the proposed method. The reconstructed images indicate that the P-SPIR method can effectively mitigate the depth-of-interaction (DOI) effect, especially at the peripheral area of field-of-view (FOV). Furthermore, the method can be applied to PET scanners with any other structures and list-mode data format with high flexibility and efficiency. |
34b734194fc39b45 | Linear Algebra and Higher Dimensions
Nowadays, governments all try to maximize their GDPs. But if you read Estève’s article on the Human Development Index, you’ll realize that GDP is not all that matters. Too often, people and organizations are judged on a single measure that makes them lose sight of the diversity of the goals they must achieve. Other examples include the Shanghai ranking for universities (which led to huge merging projects in France!), calories in diets or “good and evil”. Here’s one last example, explained by Barney Stinson on How I Met Your Mother, about how hotness must not be the only criterion to judge a woman’s attractiveness:
I’m going to use Barney’s graph all along, and I hope you won’t get offended. The point of the hot-crazy scale is just to make the article more fun! Also, note that in Barney’s example, Vicky Mendoza’s trajectory should actually rather be a sort of staircase going up…
In all these examples, mathematicians would say that a multidimensional space gets projected on a single dimension, as we turn vectors into scalars.
What??? The multi-what?
Hehe… Let’s enter the breathtaking world of linear algebra to see what I mean!
Vectors and Scalars
In linear algebra, a vector is a fancy word to talk about all the dimensions at once. For instance, according to Barney Stinson, a girl is a combination of craziness and hotness. Now, I know I’m not supposed to do that, but we can give values to these dimensions, ranging from 0 to 10. So, for instance, a girl who is not very crazy and quite hot might be 2-crazy and 8-hot. This information is compactly denoted (2,8). Each of the numbers is then called a scalar.
So, a vector is just 2 scalars?
Not necessarily 2! A vector can be made of more scalars. For instance, we may be interested in knowing whether the girl cooks well. And, once again, we can give a value between 0 and 10 describing how well she cooks. In this setting, 3 scalars are necessary. A girl may then be a 2-crazy, 8-hot and 6-cooking girl, which we would denote (2,8,6). More generally, the number of scalars we need to describe a girl is called the dimension of the set of girls.
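In code, such a vector is nothing more than an ordered list of scalars, and the dimension is simply the number of entries. Here is a minimal sketch (the variable names are purely illustrative):

```python
import numpy as np

girl_a = np.array([2, 8])      # (crazy, hot): a 2-dimensional description
girl_b = np.array([2, 8, 6])   # (crazy, hot, cooking): a 3-dimensional description

print(len(girl_a), len(girl_b))  # 2 3
```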
The dimension? Like space dimensions?
Weirdly enough, yes. Exactly like space dimensions. This is something that was remarkably noticed by Frenchman René Descartes in the 1600s, as he unified the two major branches of mathematics of that time, geometry and algebra.
What do you mean? What are you talking about?
Descartes noticed that a single scalar can be associated to a point on an infinite line. And, just like Barney did it, a 2-scalar vector can be placed on a 2-dimensional graph. And, as you’ve guessed, a 3-scalar vector can be placed in our 3-dimension space. Thus, basic geometry can be boiled down to scalars! This is what’s displayed below:
So linear algebra is just about studying several scalars all at once rather than a single one?
Basically, yes. But there’s so much more…
Let’s get back to Barney’s crazy-hot scale. Vectors here are made of two scalars, one for craziness, the other for hotness. But as Barney pointed out, what really matters is the location of a girl with regard to the Vicky-Mendoza diagonal. To quote Barney, “a girl is allowed to be crazy as long as she is equally hot”. Now, what’s disappointing with the measures of hotness and craziness is that we need to know both to find out whether a girl is attractive or not. So, let’s introduce two other measures we call attractiveness and extremeness.
How do you define these?
Well, according to Barney, the more a girl is above the Vicky-Mendoza diagonal, the more attractive she is. Thus, attractiveness can be defined as how far away from the diagonal a girl is. Meanwhile, a girl is extreme if she’s near the top right corner. Let’s illustrate with figures:
Hot-Crazy and Attractive-Extreme
By now using the extreme-attractive scale, a 2-crazy and 8-hot girl can be described as a 5-extreme and 3-attractive girl. In more mathematical terms, we say that this girl has coordinates (2,8) on the crazy-hot scale, and (5,3) on the extreme-attractive scale. Now here’s the tricky part of linear algebra: While a girl can be defined as 5-extreme and 3-attractive, it makes no sense to only say that she is 3-attractive!
More precisely, the attractiveness of a girl can only be defined within a coordinate system. So, you can talk about her attractiveness in the extreme-attractiveness scale, but you cannot define attractiveness alone! And this holds for craziness, hotness and extremeness as well!
What the hell? Why?
Let’s consider our 2-crazy and 8-hot girl, and let’s look at her coordinates in the hot-attractive scale.
Aren’t these (8,3)?
Well, let’s draw a graph to figure it out!
Hot-Attractive Scale
As you can see, on a hot-attractive scale, the coordinates of the girl are not (8,3). They are (10,-2). In particular, the attractiveness is now… negative!
I’m completely lost here… What the hell is happening?
In essence, what we see here is that the concept of coordinates strongly depends on the scales we choose. These scales are called coordinate systems, or bases. But, more importantly, what we have shown here is that there’s nothing fundamental in the concept of coordinates. Any modification of the axes changes these coordinates. Another phrasing of this remark is that choosing a first and a second dimension is completely artificial and not fundamental. This is what’s brilliantly explained by Henry Reich on Minute Physics:
To understand what Henry means by “different kinds of dimension of spacetime”, check my articles on special and general relativity. Beware, this is trickier than it sounds!
To fully understand what’s going on here, we actually need to study spaces independently from a coordinate system. That’s where linear algebra becomes awesome!
Addition and Scalar Multiplication
The key concept for a geometrical description of linear algebra is to think of vectors as motions. Such motions can typically be represented by arrows. For instance, the following figure displays the motion of Barney’s face from bottom left to top right:
Crucially, the arrows of the figure all represent the same motion. And since a vector is a motion (not an arrow!), they all represent the same vector. In other words, you can move an arrow wherever you want, it will still stand for the same vector.
But what’s the point in considering vectors as motions?
We’ll now get to do some cool algebra of vectors, independently from any basis! At its core, algebra is the study of operations. And there are two very natural operations we can do with motions. First, we can combine motions. This is called addition. Here’s how it’s done:
Addition of Vectors
The reason why we define this combination of motions as an addition is that it has strong similarities with the classical addition of numbers. Namely, the addition of vectors has a zero element (the motion which consists in not moving!), is commutative and associative (the order in which we combine several motions does not matter) and can be inverted (by the opposite motion). In pure algebra terms, this means that a vector space is a commutative group under addition.
Another important operation which can be done with a motion is its rescaling. Algebraically, this stretching corresponds to multiplying the motion by a scalar. For instance, we can double a motion (multiplication by 2), or invert it (multiplication by -1). This operation is known as scalar multiplication.
More precisely, the set of scalars must be a field, which means that an addition, subtraction, multiplication and division (except by zero) must all be well-defined. In practice, this set of scalars is often one of the following sets: $\mathbb Q$, $\mathbb R$, $\mathbb C$, $\mathbb Z/p\mathbb Z$. If the set of scalars is only a ring (which means division is not defined in general) like $\mathbb Z$, then we talk about modules rather than vector spaces.
So, in essence, a vector space is simply a set of vectors which can be added and scalar-multiplied. In fact, everything in linear algebra has to be based on merely additions and scalar-multiplications!
Really? Even coordinate systems?
Yes! So far, a basis has been defined by two graduated axes. But we should rather think of bases in terms of motions too!
Really? How?
Let’s get back to the crazy-hot scale. In this setting, we can define the crazy unit vector as the motion of 1 towards craziness, and the hot unit vector as the motion of 1 towards hotness.
OK… So scales are defined by the unit vectors, but this doesn’t say much about coordinates!
There’s one last ingredient we need for this: The origin. A girl can now be located by the motion from the origin to her location in the crazy-hot scale. And, amazingly, this motion can be decomposed by certain amounts of unit motions towards craziness and towards hotness. For instance, a 2-crazy and 8-hot girl can be described as a motion from the origin by 2 unit vectors of craziness and 8 unit vectors of hotness! This is what’s pictured below:
Decomposition in Crazy-Hot Basis
In fact, all girls are associated to vectors which can be decomposed as a sum of the crazy and hot unit vectors! And this holds for other bases too:
Different Vector Decompositions
So, what’s really meant by coordinates in a coordinate system is how a vector can be decomposed into a sum of the unit vectors of the basis, right?
Yes! And now, this definition doesn’t require an a priori on the coordinate system! It’s solely based on additions and scalar multiplications! Isn’t it amazing?
Hummm… I guess…
OK, let me show you how else algebra can blow your mind now then! So far, to decompose the 2-crazy and 8-hot vector into other bases, I had to draw the figure on the left. This technique is great for our intuition, but it’s not very fast…
Are you going to show how to decompose a vector quickly in any basis?
Yes! For instance, let’s decompose the 2-crazy and 8-hot vector in the extreme-attractive scale without geometry! The key is to notice that a crazy unit vector is half an extreme unit vector minus half an attractive unit vector. Meanwhile, a hot unit vector is the sum of half an extreme unit vector and half an attractive unit vector. Let’s replace them in our first equality above and invoke the power of algebra:
Decomposition with Algebra
As often done in algebra, for the simplicity of formulas, I’ve removed all signs of scalar multiplications.
Can such a decomposition as a sum of unit vectors always be done?
No. The family of unit vectors must satisfy 2 criteria. First, the decomposition must exist, in which case we say that the unit vectors are spanning. And second, the decomposition must be unique. When this is the case, the family is called linearly independent. When the family of unit vectors is both spanning and linearly independent, it forms a basis and defines a proper coordinate system.
Formally, a family $(e_1, …, e_n)$ of vectors is spanning if, for any vector $x$, there exist scalar coordinates $(\lambda_1, …, \lambda_n)$ such that $x = \lambda_1 e_1 + … + \lambda_n e_n$. A family $(e_1, …, e_n)$ is linearly independent if the only coordinates $(\lambda_1, …, \lambda_n)$ such that $\lambda_1 e_1 + … + \lambda_n e_n = 0$ are the zero coordinates $(0, …, 0)$.
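These two criteria are easy to test numerically: stack the candidate unit vectors as the columns of a matrix and look at its rank. In an n-dimensional space, n vectors form a basis exactly when that rank equals n. Here is a minimal sketch with illustrative vectors written in crazy-hot coordinates:

```python
import numpy as np

e_crazy = np.array([1, 0])
e_hot = np.array([0, 1])
e_diag = np.array([1, 1])          # crazy + hot, hence redundant

basis_ok = np.column_stack([e_crazy, e_hot])
print(np.linalg.matrix_rank(basis_ok) == 2)   # True: spanning and linearly independent

family_bad = np.column_stack([e_crazy, e_hot, e_diag])
print(np.linalg.matrix_rank(family_bad))      # still 2: these three vectors are not independent
```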
Amazingly, for a given set of vectors, the number of vectors in a basis is always the same. This invariant is called the dimension of the vector space. Typically, the crazy-hot scale is of dimension 2 because any coordinate system requires 2 scalars. But, as we have seen, the crazy-hot-cooking scale has dimension 3, and we can easily go on defining higher dimension vector spaces! In fact, dimensions can even be infinite!
Infinite dimensions? Really?
Yes! That’s the case of infinite sequences. Indeed, these form a vector space, as the addition of two infinite sequences (add them term by term!) and the multiplication by a scalar (multiply all terms by the scalar) are naturally defined. Yet, there is no way of decomposing all sequences into a basis made of a finite number of unit vectors!
We can still guarantee that infinite dimension spaces have bases. In this case, a basis is a family of vectors such that any vector $v$ can be decomposed as a finite sum of basis vectors. BUT, weirdly enough, proving the existence of these bases requires the very controversial and not-always-accepted axiom of choice, as the proof involves considering minimal spanning families and maximal linearly independent ones. In fact, bases of infinite sequences are never explicit!
As you’ve probably guessed it, the method we used to perform algebraically a basis change was perfectly generic. So let’s really generalize it! To do so, instead of studying the 2-crazy and 8-hot girl, we’ll focus on all girls at once!
Can we do that?
Yes! Let’s just call them $x$. More precisely, all girls can be described as $x_{crazy}$-crazy and $x_{hot}$-hot. And, as we do the exact same manipulation as earlier, we obtain the following formulas:
Basis Change
Hummm… OK… What now?
Now, we know that the terms before the unit vectors are the coordinates! Thus, we can write the coordinates of a girl in the extreme-attractive scales as a function of their coordinates in the crazy-hot scale. Here’s what we get:
Basis Change Formulas
As you can see, each coordinate in the extreme-attractive scale is actually a simple sum of the coordinates in crazy-hot scales multiplied by constant coefficients. This means that the relation between the two scales is perfectly described by these coefficients. To keep track of these coefficients, mathematicians have decided to put them in tables, called matrices.
Matrices? Like the movie matrix?
Exactly! Although, I’m not sure why they named the movie like that…
Basically, a matrix is just a table. But, interestingly, there are plenty of operations which can be done with matrices. In particular, we can do addition, scalar multiplication (which means matrices are vectors!), and multiplication (which means that they are more than just vectors!). These operations are beautifully described by Bill Shillito in the great TED-Ed video:
Careful though, matrix multiplication is not commutative, which means that $MN \neq NM$ in general. In fact, if matrices $M$ and $N$ aren’t of the right sizes, it might be that $MN$ is well-defined but $NM$ is not.
Let’s apply that to basis change! By arranging the coefficients of the formulas we got in a natural 2-lines and 2-columns matrix, and the 2-scalar coordinates in 2×1 matrices, we get the following equality:
Basis Change with Matrix
Pretty cool, isn’t it?
Hummm… So, when we do linear algebra, we always have to write huge matrices?
Hehe… That’s where we use the full potential of algebra! A huge matrix can be referred to by any letter you want! So, for instance, if we call $P_{CH}^{EA}$ our 2×2 matrix, $X_{CH}$ the 2×1 matrix of crazy-hot coordinates and $X_{EA}$ the 2×1 matrix of the extreme-attractive coordinates, then the formula above is simply denoted $X_{EA} = P_{CH}^{EA} X_{CH}$. So, once you know $P_{CH}^{EA}$, a mere matrix multiplication yields the basis change! Now, this matrix multiplication may still sound hard to compute, but don’t forget we have computers to do that now!
Well, you still need to compute $P_{CH}^{EA}$…
Haven’t you noticed what $P_{CH}^{EA}$ stood for? The two columns are in fact the coordinates of the crazy and hot unit vectors in the extreme-attractive scale! More generally, the coordinates of the former basis in the latter one are the only information needed to compute $P_{CH}^{EA}$! Isn’t that awesomely simple?
I wouldn’t say that… But it’s pretty cool indeed!
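Before generalizing, here is a minimal numerical sketch of that basis change, using the coefficients stated earlier (a crazy unit vector is half an extreme unit vector minus half an attractive one, and a hot unit vector is half of each); the matrix and variable names are just illustrative:

```python
import numpy as np

# Columns of P are the crazy and hot unit vectors written in the extreme-attractive basis
P_CH_to_EA = np.array([[0.5, 0.5],
                       [-0.5, 0.5]])

x_CH = np.array([2, 8])            # the 2-crazy, 8-hot girl
x_EA = P_CH_to_EA @ x_CH
print(x_EA)                        # [5. 3.] -> 5-extreme and 3-attractive, as in the text
```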
Let’s now generalize matrix multiplications.
Linear Transformation
Once we’ve chosen a basis of a vector space, all vectors can be represented by their coordinates, which can be gathered in a column matrix $X$, simply called a column. Operations can then be done to that column, like the multiplication $MX$ by a matrix $M$. We then obtain a new column $Y = MX$, whose coordinates in turn represent some vector in the chosen basis. Now, that kind of going back-and-forth between vectors and their corresponding columns is not something I appreciate…
Why not?
Because all this reasoning presupposes that we have bases to work with. This may not be the case for some vector spaces. And even if these do have bases, it may not be natural to work with them! So, what mathematicians have done is extract the essence of the operation of multiplying by $M$: linearity.
After all, we’re doing linear algebra, right?
I’ve read that, but I’m still not sure what that means!
Linearity means that everything is made of additions and scalar multiplications. And everything is compatible with these. Crucially, taking a column $X$ and transforming it into $MX$ is a linear operation. This means that it is compatible with addition and scalar multiplication, which, formally, corresponds to identities $M(X+Y) = MX + MY$ and $M(\lambda X) = \lambda (MX)$ where $\lambda$ is a scalar. But, once again, I don’t want to study columns… I want to study vectors!
So you’re going to define linear transformations of vectors, right?
Precisely. Let’s denote $x$ any vector, and let’s transform it into another vector we call $u(x)$. Then, $u$ is a linear transformation of vectors if $u(x+y) = u(x) + u(y)$ and $u(\lambda x) = \lambda u(x)$.
Can you give an example?
Sure! A girl in the crazy-hot scale is some vector $x$. We can transform this vector into the column of its coordinates in the crazy-hot scale. By decomposing all vectors as a sum of crazy and hot unit vectors, you can show that this transformation is indeed linear. Another example is transforming $x$ into its crazy coordinate in the crazy-hot scale. This time, $u(x)$ would be a scalar, and, once again, you can show that $u$ is linear. In fact, this is an example of the projections I was referring to in the introduction!
Are linear transformations always about retrieving coordinates?
No! Other geometrical examples are homothety (multiplying all vectors by a constant) and rotation, which you can read about in my articles on symmetries and complex numbers.
Rotation and Homothety
One cool fact about linear transformations is that you fully know them once you know how they transform a basis. Indeed, if $(e_1, …, e_n)$ is a basis, any vector $x$ can be decomposed by $x = x_1 e_1 + … + x_n e_n$. By linearity, we then have $u(x) = x_1 u(e_1) + … + x_n u(e_n)$. The linear transformation $u$ is fully represented by the matrix whose columns are the coordinates of the vectors $u(e_1), …, u(e_n)$ in a basis of the output vector space.
Formally, given bases of the input and output vector spaces, the mapping of a linear transformation $u$ to the matrix described above is in fact a fundamental isomorphism of algebra. This means that it is a one-to-one map, which is linear and compatible with the multiplication of matrices, which is associated to the composition of linear functions.
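As an illustration of this idea, here is a minimal sketch that builds the matrix of a rotation from the images of the two unit vectors (the 90° angle and the variable names are arbitrary illustrative choices):

```python
import numpy as np

theta = np.pi / 2                                  # rotation by 90 degrees
u_e1 = np.array([np.cos(theta), np.sin(theta)])    # image of the first unit vector
u_e2 = np.array([-np.sin(theta), np.cos(theta)])   # image of the second unit vector

M = np.column_stack([u_e1, u_e2])   # the matrix of the rotation: its columns are u(e1), u(e2)
x = np.array([2.0, 8.0])
print(M @ x)                         # [-8. 2.] (up to floating-point rounding): the rotated vector
```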
However, even given a corresponding matrix, it’s usually hard to fully understand linear transformations.
Are there other ways?
Yes! Most important are the image and the kernel of a linear transformation. The image is the range of values $u$ can yield. For instance, if $u$ maps vectors to a column whose entries all equal the first coordinate in a certain basis, then $u$ always creates columns with identical entries. Meanwhile, the kernel contains all the solutions of the equation $u(x) = 0$. This is very important for solving equations like $u(x)=y$, which correspond to linear systems of equations. Then, if $x_0$ is a solution and $x$ is in the kernel, then $x+x_0$ is also a solution. Indeed, $u(x_0 + x) = u(x_0) + u(x) = y + 0 = y$.
Images and kernels are in fact vector subspaces. An important theorem of linear algebra then links them. It says that the dimensions of the kernel and of the image add up to the dimension of the input vector space.
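This dimension count is easy to verify numerically: for a matrix with n columns, the rank (the dimension of the image) plus the dimension of the kernel equals n. Here is a minimal sketch with an illustrative matrix:

```python
import numpy as np

# An illustrative 2x3 matrix: it maps 3-dimensional vectors to 2-dimensional ones
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

n = M.shape[1]                       # dimension of the input space
rank = np.linalg.matrix_rank(M)      # dimension of the image
nullity = n - rank                   # dimension of the kernel, by the rank-nullity theorem
print(rank, nullity, rank + nullity == n)   # 2 1 True

# A vector in the kernel: M @ x = 0
x = np.array([1.0, 1.0, -1.0])
print(M @ x)                         # [0. 0.]
```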
Finally, note that the most important kind of linear transformations are those which transform a vector into a vector of the same space. These are called endomorphisms. Then, it’s common to use the same basis for inputs and outputs to write the corresponding matrices. Since each column and each row corresponds to a unit vector of the basis, these matrices are then square matrices.
In particular, endomorphisms can be multiplied with one another without restriction. Equivalently, the multiplication is perfectly well-defined for square matrices of a given size. This means that the set of endomorphisms and the set of square matrices are what mathematicians call algebras.
A powerful way to study endomorphisms and square matrices is the study of eigenvectors and eigenvalues. An eigenvector is a vector such that its image is proportional to itself. So typically, $x$ is an eigenvector if $u(x) = \lambda x$. The scalar $\lambda$ is then an eigenvalue of $u$. The study of eigenvalues is especially important in quantum mechanics, where the stationary states of the Schrödinger equation are eigenvectors of the Hamiltonian.
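Numerically, eigenvalues and eigenvectors of a square matrix are one function call away. Here is a minimal sketch with an illustrative symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # an illustrative endomorphism of the plane

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                   # the eigenvalues (here 3 and 1; the order may vary)

# Check the defining property u(x) = lambda * x for the first eigenvector
x = eigenvectors[:, 0]
print(np.allclose(A @ x, eigenvalues[0] * x))   # True
```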
The study of eigenvalues is a very important area of mathematics. If you can, please write about it!
Let’s Conclude
It is hard to stress how important linear algebra is to mathematics and science. To do so, let me end by listing some of my articles which require linear algebra for a full understanding: linear programming, spacetime of special relativity, spacetime of general relativity, infinite series, model of football games, Fourier analysis, group representation, high-dynamic range, complex numbers, dynamics of the wave function, game theory, evolutionary game theory… In my present research on mechanism design, linear algebra is the essential ground I stand on without even noticing it. That’s how essential linear algebra is!
Because I have tried to give you the intuition behind linear algebra, the introduction may have felt long. Yet the power of algebra can get you much further much quicker. This is why I strongly recommend you keep learning and become more and more familiar with algebra. Once you master it, you’ll be unstoppable in the understanding of complex (linear) systems!
|
9306f51394940207 | Chemistry LibreTexts
Lecture 12: Vibrational Spectroscopy of Diatomic Molecules
Recap of Lecture 11
Last lecture continued the discussion of vibrations into the realm of quantum mechanics. We reviewed the classical picture of vibrations, including the classical potential, bond length, and bond energy. We then introduced the quantum version using the harmonic oscillator as an approximation of the true potential. This involves constructing a Hamiltonian with a parabolic potential. Solving the resulting (time-independent) Schrödinger equation to obtain the eigenstates, energies, and quantum numbers (v) is beyond this course, so they are given. Key aspects of these solutions are the fundamental frequency and the zero-point energy.
The motion of two particles in space can be separated into translational, vibrational, and rotational motions.
Different ways of visualizing the 6 degrees of freedom of a diatomic molecule. (CC BY-NC-SA; anonymous by request)
IR spectroscopy which has become so useful in identification, estimation, and structure determination of compounds draws its strength from being able to identify the various vibrational modes of a molecule. A complete description of these vibrational normal modes, their properties and their relationship with the molecular structure is the subject of this article.
Degree of freedom is the number of variables required to describe the motion of a particle completely. For an atom moving in 3-dimensional space, three coordinates are adequate, so its degree of freedom is three. Its motion is purely translational. If we have a molecule made of N atoms (or ions), the degree of freedom becomes 3N, because each atom has 3 degrees of freedom. Furthermore, since these atoms are bonded together, not all motions are translational; some become rotational and others vibrational.
For a linear molecule, the number of vibrational modes is
\[3N-5 \label{1}\]
while for a nonlinear molecule it is
\[3N-6 \label{2}\]
1. Determine if the molecule is linear or nonlinear (i.e. Draw out molecule using VSEPR). If linear, use Equation \ref{1}. If nonlinear, use Equation \ref{2}
2. Calculate how many atoms are in your molecule. This is your \(N\) value.
3. Plug in your \(N\) value and solve.
| Degrees of freedom | Atoms (very symmetric) | Linear molecules (less symmetric) | Non-linear molecules (most unsymmetric) |
|---|---|---|---|
| Translation (x, y, and z) | 3 | 3 | 3 |
| Rotation (x, y, and z) | 0 | 2 | 3 |
| Vibrations | 0 | 3N − 5 | 3N − 6 |
| Total (including Vibration) | 3 | 3N | 3N |
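As a quick check of the counting procedure above, here is a minimal Python sketch (the function name and the example molecules are purely illustrative):

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of vibrational modes: 3N - 5 for linear molecules, 3N - 6 otherwise."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(2, linear=True))    # diatomic (e.g., HCl): 1
print(vibrational_modes(3, linear=True))    # CO2: 4
print(vibrational_modes(3, linear=False))   # H2O: 3
```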
Example 1: Water
• The Symmetric Stretch (Example shown is an H2O molecule at 3685 cm-1)
• The Asymmetric Stretch (Example shown is an H2O molecule at 3506 cm-1)
• Bend (Example shown is an H2O molecule at 1885 cm-1)
A linear molecule will have another bend in a different plane that is degenerate or has the same energy. This accounts for the extra vibrational mode.
Example 2: Carbon Dioxide
Example 3: The Methylene Group
It is important to note that there are many different kinds of bends, but due to the limits of a 2-dimensional surface it is not possible to show the other ones.
The frequency of these vibrations depends on the interatomic binding energy, which determines the force needed to stretch or compress a bond.
Properties of a Molecular Bond
What do we know about bonds from general chemistry?
1. Breaking a bond always requires energy, and hence making bonds always releases energy.
2. Bond length
3. Bond energy (or enthalpy or strength)
The potential energy of a system of two atoms depends on the distance between them. At large distances the energy is zero, meaning “no interaction”. At distances of several atomic diameters attractive forces dominate, whereas at very close approaches the force is repulsive, causing the energy to rise. The attractive and repulsive effects are balanced at the minimum point in the curve.
The internuclear distance at which the potential energy minimum occurs defines the bond length. This is more correctly known as the equilibrium bond length, because the two atoms will always vibrate about this distance.
Bond lengths depend mainly on the sizes of the atoms, and secondarily on the bond strengths, the stronger bonds tending to be shorter. Bonds involving hydrogen can be quite short; The shortest bond of all, H–H, is only 74 pm. Multiply-bonded atoms are closer together than singly-bonded ones; this is a major criterion for experimentally determining the multiplicity of a bond. This trend is clearly evident in the above plot which depicts the sequence of carbon-carbon single, double, and triple bonds.
Reduced mass (Converting two atoms moving into one)
The internal motions of vibration and rotation for a two-particle system can be described by a single reduced particle with a reduced mass \(μ\) located at \(r\). For a diatomic molecule, in the figure below, the vector \(\vec{r}\) corresponds to the internuclear axis. The magnitude or length of \(r\) is the bond length, and the orientation of \(r\) in space gives the orientation of the internuclear axis in space. Changes in the orientation correspond to rotation of the molecule, and changes in the length correspond to vibration. The change in the bond length from the equilibrium bond length is the vibrational coordinate for a diatomic molecule.
(a) The coordinate system for a reduced particle: \(R_1\) and \(R_2\) are vectors to \(m_1\) and \(m_2\), and \(R\) is the resultant, which points to the center of mass. (b) The center of mass as the origin of the coordinate system. (c) The system expressed as a reduced particle.
The Classical Harmonic Oscillator
Simple harmonic oscillators about a potential energy minimum can be thought of as a ball rolling frictionlessly in a dish (left) or a pendulum swinging frictionlessly back and forth. The restoring forces are precisely the same in either horizontal direction.
Simple image of a ball oscillating in a potential. Image used with permission from Wikipedia.
Recall that the Hamiltonian operator \(\hat{H}\) is the summation of the kinetic and potential energy in a system. There are several ways to approximate the potential function \(V\), but the two main means of approximation are done by using a Taylor series expansion, and the Morse Potential. The vibration of a diatomic is akin to an oscillating mass on a spring.
An undamped spring–mass system undergoes simple harmonic motion. Image used with permission from Wikipedia.
The classical forces in chemical bonds can be described to a good approximation as spring-like or Hooke's law type forces. This is true provided the energy is not too high. Of course, at very high energy, the bond reaches its dissociation limit, and the forces deviate considerably from Hooke's law. It is for this reason that it is useful to consider the quantum mechanics of a harmonic oscillator.
We will start in one dimension. Note that this is a gross simplification of a real chemical bond, which exists in three dimensions, but some important insights can be gained from the one-dimensional case. The Hooke's law force is
\[F = -k(x - x_0)\]
where \(k\) is the spring constant. This force is derived from a potential energy
\[V(x)=\dfrac{1}{2}k(x - x_0)^2\]
Let us define the origin of coordinates such that \(x_0 =0\). Then the potential energy is
\[ \color{red} V(x)=\dfrac{1}{2}kx^2\]
If a particle of mass \(m\) is subject to the Hooke's law force, then its classical energy is
\[\dfrac{p^2}{2m}+\dfrac{1}{2}kx^2 =E\]
Thus, we can set up the Schrödinger equation:
\[\left [ -\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}+\dfrac{1}{2}kx^2 \right ]\psi (x)=E\psi (x)\]
In this case, the Hamiltonian is
\[\hat{H}=-\dfrac{\hbar^2}{2m}\dfrac{d^2}{dx^2}+\dfrac{1}{2}kx^2\]
Since \(x\) now ranges over the entire real line \(x\epsilon (-\infty ,\infty)\), the boundary conditions on \(\psi (x)\) are conditions at \(x=\pm \infty\). At \(x= \pm \infty\), the potential energy becomes infinite. Therefore, it must follow that as \(x \rightarrow \pm \infty\), \(\psi (x)\rightarrow 0\). Hence, we can state the boundary conditions as \(\psi (\pm \infty)=0\).
Solving this differential equation is not an easy task, so we will not attempt to do it. Here, we simply quote the allowed energies and some of the wave functions. The allowed energies are characterized by a single integer \(v\), which can be \(0,1,2,...\) and take the form
\[ \color{red} E_v =\left ( v+\dfrac{1}{2} \right )h\nu_1 \label{BigEq}\]
where \(\nu\) is the frequency of the oscillation (of a single mass on a spring):
\[ \nu_{1} =\dfrac{1}{2\pi}\sqrt{\dfrac{k}{m}}\]
\(\nu_1\) is the fundamental frequency of the mechanical oscillator, which depends on the force constant of the spring and the mass of the single attached body, and is independent of the energy imparted on the system. When there are two masses involved in the system (e.g., a vibrating diatomic), the mass used in Equation \(\ref{BigEq}\) becomes a reduced mass:
\[ \color{red} \mu = \dfrac{m_1 m_2}{m_1+m_2} \label{14}\]
The fundamental vibrational frequency is then rewritten as
\[\nu = \dfrac{1}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{15}\]
Do not confuse \(v\) the quantum number for harmonic oscillator with \(\nu\) the fundamental frequency of the vibration
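To make Equations \(\ref{14}\) and \(\ref{15}\) concrete, here is a minimal numerical sketch for a diatomic such as HCl; the force constant below is an assumed, roughly literature-like value used only for illustration:

```python
import math

amu = 1.66054e-27                      # kg per atomic mass unit
m_H, m_Cl = 1.008 * amu, 34.97 * amu   # approximate atomic masses

mu = m_H * m_Cl / (m_H + m_Cl)         # reduced mass, Eq. (14)

k = 481.0                              # assumed force constant in N/m (illustrative value for HCl)
nu = math.sqrt(k / mu) / (2 * math.pi) # fundamental frequency, Eq. (15), in Hz

c = 2.998e10                           # speed of light in cm/s
print(f"reduced mass = {mu:.3e} kg")
print(f"nu = {nu:.3e} Hz (~{nu / c:.0f} cm^-1)")   # roughly 2.9e3 cm^-1
```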
The natural frequency \(\nu\) can be converted to angular frequency \(\omega\) via
\[\omega = 2\pi \nu\]
Then the energies in Equation \(\ref{BigEq}\) can be rewritten in terms of the fundamental angular frequency as
\[E_v =\left ( v+\dfrac{1}{2} \right )\hbar\omega\]
Now let us turn to the eigenstates.
Now we can define the parameter (for convenience)
\[\alpha =\dfrac{\sqrt{km}}{\hbar}=\dfrac{m\omega}{\hbar}=\dfrac{4\pi ^2m\nu}{h}\]
the first few wave functions are
\[\begin{align*}\psi_0 (x) &= \left ( \dfrac{\alpha}{\pi} \right )^{1/4}e^{-\alpha x^2 /2}\\ \psi_1(x) &= \left ( \dfrac{4\alpha ^3}{\pi} \right )^{1/4}xe^{-\alpha x^2 /2}\\ \psi_2 (x) &= \left ( \dfrac{\alpha}{4\pi} \right )^{1/4}(2\alpha x^2 -1)e^{-\alpha x^2/2}\\ \psi_3 (x) &= \left ( \dfrac{\alpha ^3}{9\pi} \right )^{1/4}(2\alpha x^3 -3x)e^{- \alpha x^2 /2}\end{align*}\]
You should verify that these are in fact solutions of the Schrödinger equation by substituting them back into the equation with their corresponding energies. The figure below shows these wave functions
The harmonic oscillator wavefunctions describing the four lowest energy states.
The figure below shows these wave functions and the corresponding probability densities: \(p_n (x)=\psi_{n}^{2}(x)\):
The probability densities for the four lowest energy states of the harmonic oscillator.
Note that in contrast to a particle in an infinitely high box, here \(x\in (-\infty ,\infty)\), so the normalization condition for each eigenstate is
\[\int_{-\infty}^{\infty}\psi_{n}^{2}(x)\,dx = 1\]
Despite this, because the potential energy rises very steeply, the wave functions decay very rapidly as \(|x|\) increases from 0 unless \(n\) is very large. This is discussed as tunneling elsewhere.
Another way to show solutions to harmonic oscillators. (left) Wavefunction representations for the first eight bound eigenstates, n = 0 to 7. The horizontal axis shows the position x. Note: The graphs are not normalized, and the signs of some of the functions differ from those given in the text. (right) Corresponding probability densities.Images used with permission (CC BY-SA 3.0; AllenMcC.)
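As a sanity check on the claim above that these functions solve the Schrödinger equation, here is a small numerical sketch in dimensionless units (\(\hbar = m = k = 1\), so \(\alpha = 1\) and \(E_v = v + 1/2\)); the grid size and edge trimming are arbitrary choices:

```python
import numpy as np

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]

# First two harmonic oscillator eigenstates with alpha = 1
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)
psi1 = (4 / np.pi)**0.25 * x * np.exp(-x**2 / 2)

for v, psi in enumerate([psi0, psi1]):
    norm = np.sum(psi**2) * dx                         # should be ~1
    d2 = np.gradient(np.gradient(psi, dx), dx)         # finite-difference second derivative
    H_psi = -0.5 * d2 + 0.5 * x**2 * psi               # H psi = -1/2 psi'' + 1/2 x^2 psi
    E = v + 0.5
    err = np.max(np.abs(H_psi - E * psi)[100:-100])    # ignore grid edges
    print(f"v={v}: norm ~ {norm:.4f}, max |H psi - E psi| ~ {err:.1e}")
```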
Recap of Lecture 10
Last lecture addressed three aspects. The first is the introduction of the commutator, which is meant to evaluate whether two operators commute (a property used extensively in basic algebra courses). Not every pair of operators will commute, meaning the order of operations matters. The second aspect is to redefine the Heisenberg Uncertainty Principle now within the context of commutators. Now, we can identify if any two quantum measurements (i.e., eigenvalues of specific operators) will require the Heisenberg Uncertainty Principle to be addressed when simultaneously evaluating them. The third aspect of the lecture was the introduction of vibrations, including how many vibrations a molecule can have (depending on linearity) and the origin of this. The solutions to the harmonic oscillator potential were qualitatively shown (via a Java application) with an emphasis on the differences between this model system and the particle in the box (important).
Quantum Mechanical Vibrations
The simplified potential discussed in general chemistry is the potential energy function used in constructing the Hamiltonian. From solving the Schrödinger equation, we get eigenfunctions, eigenvalues (energies) and quantum numbers. Combining these on the potential, like we did for the particle in a box, gives a more detailed (quantum) picture.
General potential (not an approximation) of a vibration with associated eigenenergies.
Zero-point energy
Zero-point energy is the lowest possible energy that a quantum mechanical system may have, i.e., it is the energy of the system's ground state. The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously.
For the particle in the 1D box of length \(L\), with energies
\[E_n = \dfrac{h^2 n^2 }{8m L^2}\]
the zero-point energy (\(n=1\)) is
\[E_{ZPE} = \dfrac{h^2}{8m L^2}\]
For the Harmonic Oscillator with energies
\[E_v =\left ( v+\dfrac{1}{2} \right )h\nu\]
the zero-point energy (\(v=0\)) is
\[E_{ZPE} = \dfrac{h\nu}{2} = \dfrac{\hbar\omega }{2} \]
Many people have floated kooky ideas of tapping into the zero-point energy to drive our economy. This is a silly idea and impossible, since this energy can never be tapped.
Infrared Spectroscopy
Infrared (IR) spectroscopy is one of the most common and widely used spectroscopic techniques employed mainly by inorganic and organic chemists due to its usefulness in determining structures of compounds and identifying them. Chemical compounds have different chemical properties due to the presence of different functional groups. A molecule composed of N atoms has 3N degrees of freedom, six of which are translations and rotations of the molecule itself (five if the molecule is linear). This leaves 3N-6 degrees of vibrational freedom (3N-5 if the molecule is linear). Vibrational modes are often given descriptive names, such as stretching, bending, scissoring, rocking and twisting. The four-atom molecule of formaldehyde, the gas phase spectrum of which is shown below, provides an example of these terms.
The spectrum of gas phase formaldehyde, is shown below.
Gas Phase Infrared Spectrum of Formaldehyde, \(H_2C=O\)
Characteristic normal modes (vibrations) in formaldehyde
• \(CH_2\) Asymmetric Stretch
• \(CH_2\) Symmetric Stretch
• \(C=O\) Stretch
• \(CH_2\) Scissoring
• \(CH_2\) Rocking
• \(CH_2\) Wagging
Transition Moment Integrals give Selection Rules
Spectroscopy is a matter–light interaction. You first need to know the results of the Schrödinger equation for the specific system, including the eigenstates (wavefunctions), eigenvalues (energies), and quantum numbers, and you need to understand how to couple the eigenstates to electromagnetic radiation. This is done via the transition moment integral
\[\langle \psi_i | \hat{M}| \psi_f \rangle \label{26}\]
The transition moment integral gives information about the probability of a transition occurring. For IR of a single harmonic oscillator, \(\hat{M}\) can be set to \(x\). A more detailed discussion will be presented later. So the probability for a transition in the HO is \[P_{i \rightarrow f} = \langle \psi_i | x | \psi_f \rangle\]
• From Equation \(\ref{26}\) come general rules for absorption. For IR, the transition is allowed only if the molecule has a changing dipole moment.
• From Equation \(\ref{26}\) comes selection rules (what possible transitions are allowed). For IR this results in \(\Delta v = \pm 1\).
The vibration must change the molecular dipole moment to have a non-zero (electric) transition dipole moment. Molecules CAN have a zero net dipole moment, yet STILL UNDERGO transitions when stimulated by infrared light.
Dipole Moments (rehash from gen chem)
\[ \vec{\mu} = \sum_i q_i \, \vec{r}_i \label{d1}\]
The dipole moment points in the direction of the vector quantity. An example of a polar molecule is \(H_2O\). Because of the lone pairs on oxygen, the structure of H2O is bent (via VSEPR theory), which means that the vectors representing the dipole moments of each bond do not cancel each other out. Hence, water is polar.
Dipole moment of water. The convention in chemistry is that the arrow representing the dipole moment goes from positive to negative. In physics, the opposite is used.
Example \(\PageIndex{1}\):
Which molecules absorb IR radiation?
No: The vibration does not change the dipole moment of the molecule due to symmetry.
Yes: The vibration does change the dipole moment of the molecule since there is a difference in electronegativity, so the distance between the two atoms affects the dipole moment (Equation \ref{d1}).
Yes: A vibration does change the dipole moment of the molecule since there is a difference in electronegativity, so the distance between the atoms affects the dipole moment. This is not the symmetric stretch, but the other modes.
Energies of Harmonic Oscillators and IR Transitions
Using the harmonic oscillator and wave equations of quantum mechanics, the energy can be written in terms of the spring constant and reduced mass as
\[E_v = \left(v+\dfrac{1}{2}\right)h \nu, \qquad \nu = \dfrac{1}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{17}\]
where h is Planck's constant and v is the vibrational quantum number, which takes the values 0, 1, 2, 3, … . The spacing between adjacent levels is therefore
\[{\triangle E} = h\nu = \dfrac{h}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{18}\]
At room temperature, the majority of molecules are in the ground state \(v = 0\); from the equation above,
\[E_0 = \dfrac{1}{2}h\nu \label{19}\]
When a molecule absorbs energy, there is a promotion to the first excited state
\[E_1 = \dfrac{3}{2} h\nu \label{20}\]
\[E_1 - E_0 = \left(\dfrac{3}{2} h\nu_m - \dfrac{1}{2} h\nu_m \right) = h\nu_m \label{21}\]
The frequency of radiation \(\nu\) that will bring about this change is identical to the classical vibrational frequency of the bond, and it can be expressed as
\[ \color{red} E_{radiation} = h\nu = {\triangle E} = h\nu_m = \dfrac{h}{2\pi} \sqrt{\dfrac{k}{\mu}} \label{22}\]
The above equation can be modified so that the radiation can be expressed in wavenumbers
\[ \color{red} \widetilde{\nu} = \dfrac{1}{2\pi c} \sqrt{\dfrac{k}{\mu}} \label{23}\]
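As a sanity check of Equation \ref{23}, here is a short Python sketch for the diatomic CO molecule; the force constant of roughly 1860 N/m is an assumed, textbook-level value:

```python
import math

c = 2.998e10            # speed of light, cm/s
amu = 1.6605e-27        # atomic mass unit, kg

k = 1860.0              # assumed force constant for CO, N/m
mu = (12.0 * 16.0) / (12.0 + 16.0) * amu   # reduced mass of 12C-16O

nu_tilde = math.sqrt(k / mu) / (2.0 * math.pi * c)
print(f"{nu_tilde:.0f} cm^-1")   # ~2140 cm^-1, close to the observed CO stretch at 2143 cm^-1
```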
Selection Rules
Photons can be absorbed or emitted, and the harmonic oscillator can go from one vibrational energy state to another. Which transitions between vibrational states are allowed? If we take an infrared spectrum of a molecule, we see numerous absorption bands, and we need to relate these bands to the allowed transitions involving different normal modes of vibration.
The selection rules are determined by the transition moment integral.
\[\mu_T = \int \limits _{-\infty}^{\infty} \Psi_{v'}^* (Q) \hat {\mu} (Q) \Psi _v (Q) dQ = \langle \Psi_{v'} |\hat {\mu} |\Psi _v \rangle \label {6.6.1}\]
To evaluate this integral we need to express the dipole moment operator, \(\hat {\mu}\), in terms of the magnitude of the normal coordinate \(Q\). Evaluating the integral in Equation \(\ref{6.6.1}\) can be difficult depending on the complexity of the wavefunctions used. We can often (although not always) take advantage of the symmetries of the wavefunction (and of \(\hat {\mu}\) too) to make things easier.
Figure: (left) \(f(x) = x^2\) is an example of an even function. (right) \(f(x) = x^3\) is an example of an odd function. Images used with permission from Wikipedia.
While individual functions exhibit this symmetry, products of functions inherit the symmetries of their constituent components via a "product table." The one below is in terms of odd/even symmetry, but as you will learn in other classes (especially group theory), 3D objects have several other symmetries. Product tables constructed to take all such symmetries into account are more complicated.
| Product table | Odd Function (anti-symmetric) | Even Function (symmetric) | No symmetry (neither odd nor even) |
|---|---|---|---|
| Odd Function (anti-symmetric) | Even Function (symmetric) | Odd Function (anti-symmetric) | who knows |
| Even Function (symmetric) | Odd Function (anti-symmetric) | Even Function (symmetric) | who knows |
| No symmetry (neither odd nor even) | who knows | who knows | who knows |
These symmetries are important since the integral (over all space) of an odd integrand is ALWAYS zero, so you do not need to solve it.
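A minimal sympy sketch of this shortcut, using unnormalized, dimensionless harmonic oscillator functions purely for illustration:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Unnormalized, dimensionless harmonic oscillator functions
psi0 = sp.exp(-x**2 / 2)              # even (H_0 = 1)
psi1 = 2 * x * sp.exp(-x**2 / 2)      # odd  (H_1 = 2x)

# even * odd * even = odd integrand -> integral over all space vanishes
print(sp.integrate(psi0 * x * psi0, (x, -sp.oo, sp.oo)))   # 0
# even * odd * odd = even integrand -> integral need not vanish
print(sp.integrate(psi0 * x * psi1, (x, -sp.oo, sp.oo)))   # sqrt(pi)
```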
And now the Harmonic Oscillator Wavefunctions
Because of the association of the wavefunction with a probability density, it is necessary for the wavefunction to include a normalization constant, \(N_v\).
\[N_v = \dfrac {1}{(2^v v! \sqrt {\pi} )^{1/2}} \label {5.6.15}\]
The final form of the harmonic oscillator wavefunctions is the product of three terms:
\[ \color{red} | \Psi _v (x) \rangle = N_v H_v (x) e^{-x^2/2} \label {5.6.16}\]
The first few physicists' Hermite polynomials are:
\[H_0(x) = 1\]
\[H_1(x) = 2x\]
\[H_2(x) = 4x^2 - 2\]
\[H_3(x) = 8x^3 - 12x\]
\[H_4(x) = 16x^4 - 48x^2 + 12\]
The symmetry of the Harmonic Oscillator wavefunctions is dictated by the Hermite Polynomials
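A short numerical sketch (dimensionless coordinates; scipy's physicists' Hermite polynomials and the normalization constant \(N_v\) from above are assumed) that builds \(\Psi_v(x)\) and reports its parity, which bears on the exercise questions below:

```python
import numpy as np
from scipy.special import hermite, factorial

def psi(v, x):
    """Psi_v(x) = N_v * H_v(x) * exp(-x^2/2) in dimensionless coordinates."""
    N_v = 1.0 / np.sqrt(2.0**v * factorial(v) * np.sqrt(np.pi))
    return N_v * hermite(v)(x) * np.exp(-x**2 / 2)

x = np.linspace(-5, 5, 2001)
for v in range(4):
    p = psi(v, x)
    parity = "even" if np.allclose(p, p[::-1]) else "odd"
    print(f"v={v}: parity={parity}, norm≈{np.trapz(p**2, x):.4f}")
```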
• Which levels are even functions and which are odd functions?
• Without calculating an integral, what is \(\langle x\rangle\)? |
6c12391fd4b3eb20 | Guide to Vector Version (sspropv, sspropvc)
The vector version of the SSPROP solves the coupled nonlinear Schrödinger equations for propagation in a birefringent fiber. The code can model birefringence, differential group delay (PMD), polarization-dependent dispersion, and polarization dependent loss, all in the context of nonlinear propagation.
The user may choose from two different algorithms, depending on whether the birefringent beat length is shorter or longer than the nonlinear length.
In general, the birefringent axes of an optical fiber may not be oriented in the x- and y- directions, but in some other arbitrary direction ψ. Moreover, the two orthogonal eigenstates of the fiber may not even be linearly polarized — they could be circularly or even elliptically polarized. This would be the case, for example, in fiber that is twisted or spun during or after fabrication. To handle the most general case, SSPROP allows the user to separately specify not only the dispersion β(ω) and loss (α) for each of the two eigenstates, but also the exact polarization states to which these coefficients apply.
The most general elliptical polarization state can be described by two angular parameters, ψ and χ. As depicted in the figure below, ψ describes the angle that the polarization ellipse makes with the x-axis and χ is an angular quantity that describes the degree of ellipticity.
Positive values of χ correspond to right-handed polarization states while negative values of χ are left-handed polarization states. χ = 0 corresponds to linear polarization while χ = π/4 is circularly polarized. On the Poincaré sphere, 2ψ and 2χ describe the longitude and latitude of the principal eigenstate, respectively.
When specifying the eigenstates of the fiber, it is sufficient to give ψ and χ for one eigenstate because the second eigenstate is known to be orthogonal to the first.
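To make the (ψ, χ) parametrization concrete, here is a small Python sketch using one common convention for building a Jones vector from these angles; SSPROP's own sign and normalization conventions are defined by the equations in this guide, so treat this only as an illustration:

```python
import numpy as np

def jones_from_angles(psi, chi):
    """Unit Jones vector for ellipse orientation psi and ellipticity angle chi.

    Convention assumed here: rotate the state (cos(chi), i*sin(chi)) by psi.
    """
    rot = np.array([[np.cos(psi), -np.sin(psi)],
                    [np.sin(psi),  np.cos(psi)]])
    return rot @ np.array([np.cos(chi), 1j * np.sin(chi)])

print(jones_from_angles(0.0, 0.0))          # linear along x
print(jones_from_angles(np.pi / 2, 0.0))    # linear along y
print(jones_from_angles(0.0, np.pi / 4))    # circular (chi = pi/4)
```

With this convention, the default psp = [0, 0] described later reproduces eigenstates linearly polarized along the x- and y- directions.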
Elliptical Basis Method
Any polarization state may be decomposed into a linear combination of the two orthogonal eigenstates of the fiber, which we label “a” and “b”. If ux and uy represent the two components of the electric field vector in the x-y basis (i.e., the Jones vector), then the corresponding components ua and ub can be calculated using the unitary transformation:
where ψ and χ describe the principal eigenstate (“a”) of the fiber. In this new basis, the linear portion of the wave equations for ua and ub are decoupled. The linear portion of the propagation can therefore be performed separately on ua and ub in the spectral domain, using a technique analogous to that used in the scalar case.
where ha and hb are given by:
When performing the nonlinear part of the propagation, the appropriate coupled nonlinear equations (with linear terms omitted) are [Menyuk, JQE 1989]:
where χ quantifies the degree of ellipticity of the eigenstates as described above. The (…) terms in the above expression denote additional nonlinear terms that average to zero when the birefringent beat length is much shorter than the nonlinear length. These additional terms are also identically zero when the eigenstates are circularly polarized.
After propagating through the desired number of steps, the final solution can be rotated from the elliptical basis (ua, ub) back into the Jones basis (ux, uy) by using the inverse transformation:
Circular Basis Method
When the fiber birefringence is small, i.e., when the beat length is comparable to or larger than the nonlinear length, the additional terms in the nonlinear equations cannot be neglected. In this case, it is necessary to decompose the field into left- and right-hand circular polarization components before computing the nonlinear propagation. The circular components u+ and u– can be computed from ux and uy using the following unitary transformation:
With this transformation, the coupled nonlinear equations for u+ and u– become (again omitting linear terms):
where in this case no additional nonlinear terms have been neglected. Because the eigenstates of the fiber are not in general circularly polarized, the linear portion of the propagation is not as simple in the circular basis. After some algebra, one finds that the linear propagation can be computed in the spectral domain, using the following matrix multiplication:
where the matrix elements hnm are given by:
and ha and hb are the same quantities given earlier in the context of the elliptical basis method.
After propagating through the desired number of steps, the final solution can be rotated from the circular basis (u+, u–) back into the Jones basis (ux, uy) by using the inverse transformation:
The circular basis method is more accurate than the elliptical basis method because it does not neglect any nonlinear terms. The disadvantage of the circular method is that the stepsize dz must always be much smaller than the beat length in order to produce meaningful results. If the beat length is smaller than the nonlinear length, this requirement forces one to use a stepsize that is much smaller than the nonlinearity would otherwise dictate.
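The trade-off can be made concrete with a small helper; this is not part of SSPROP, the beat-length and nonlinear-length formulas are the standard textbook definitions, and the numbers are assumed:

```python
import math

def suggest_stepsize(dbeta0, gamma, P0, safety=20.0):
    """Pick dz well below both the beat length and the nonlinear length.

    L_beat = 2*pi / |dbeta0|   (birefringent beat length)
    L_NL   = 1 / (gamma * P0)  (nonlinear length)
    """
    L_beat = 2.0 * math.pi / abs(dbeta0)
    L_nl = 1.0 / (gamma * P0)
    return min(L_beat, L_nl) / safety, L_beat, L_nl

# Assumed illustrative numbers: dbeta0 = 6.3 rad/m, gamma = 2e-3 /(W m), P0 = 1 W
dz, L_beat, L_nl = suggest_stepsize(6.3, 2e-3, 1.0)
print(f"L_beat = {L_beat:.2f} m, L_NL = {L_nl:.0f} m, suggested dz = {dz:.3f} m")
```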
A summary of the syntax and usage can be obtained from Matlab by typing “help sspropv” or “help sspropvc“.
The compiled mex file (sspropvc) can be invoked from Matlab using one of the following forms:
u1 = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma);
u1 = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma,psp,method);
u1 = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma,psp,method,maxiter);
The last four arguments assume a default value if they are left unspecified. The corresponding Matlab m-file can be invoked using a similar syntax by replacing sspropvc with sspropv.
sspropvc may also be invoked with a single input argument, to specify options specific to the FFTW routines (discussed below):
sspropvc -option
Input Arguments
u0x, u0y
vector (N) Input optical field, specified by two length-N vector time sequences. u0x represents the x-component of the complex, slowly-varying envelope of the optical field, and u0y represents the corresponding y-component. The fields should be normalized so that |u0x|^2 + |u0y|^2 is the optical power.
dt scalar The time increment between adjacent points in the vector u0.
dz scalar The step-size to use for propagation
nz scalar (int) The number of steps to take. The total distance propagated is therefore L = nz*dz
alphaa, alphab scalar or vector (N) The linear power attenuation coefficients for the two eigenstates of the fiber. Here we use the labels “a” and “b” to denote the two eigenstates, which need not coincide with the x-y axes. Polarization dependent loss is modeled by using different numbers for alphaa and alphab. The loss coefficient may optionally be specified as a vector of the same length as u0x, in which case it will be treated as a vector that describes a wavelength-dependent loss coefficient α(ω) in the frequency domain. (The function wspace.m in the tools subdirectory can be used to construct a vector with the corresponding frequencies.)
betapa, betapb vector Real-valued vectors that specify the dispersion for each eigenstate (a, b) of the fiber. The dispersion can be specified to any polynomial order by using a betap vector of the appropriate length. Birefringence is accommodated by making the first elements betapa(1) and betapb(1) unequal. Differential group delay, or polarization mode dispersion, is likewise treated by making the second elements betapa(2) and betapb(2) different. (See note below for a more complete discussion.) The propagation constant can also be specified directly by replacing the polynomial argument betap with a vector of the same length as u0x. In this case, the argument betap is treated as a vector describing the propagation constant β(ω) in the frequency domain. (The function wspace.m in the tools subdirectory can be used to construct a vector with the corresponding frequencies.)
gamma scalar A real number that describes the nonlinear coefficient of the fiber, which is related to the mode effective area and the nonlinear refractive index n2.
psp scalar or vector (2) Principal eigenstate of the fiber, specified as a 2-vector containing the angles ψ and χ (see discussion above), psp = [ψ, χ]. If psp is a scalar, it is interpreted to be ψ, and χ is then taken to be zero. This corresponds to a linearly-birefringent fiber whose axes are oriented at an angle ψ with respect to the x-y axes. If psp is left completely unspecified, it assumes a default value of [0,0], which means that the fiber eigenstates are linearly polarized along the x- and y- directions.
method string String that specifies which method to use when performing the split-step calculations. The following methods are recognized: “elliptical” or “circular”. When method = “elliptical”, sspropv will solve the equations by decomposing the input field into the (in general) elliptical eigenstates of the fiber. This method is appropriate only in fibers where the birefringent beat length is much shorter than the nonlinear length. When method = “circular”, sspropv will instead solve the equations by decomposing the input field into a right- and left-circular basis. This method is more accurate, but requires that the step size be small compared to the beat length.
maxiter scalar (int) The maximum number of iterations to make per step. If the solution does not converge to the desired tolerance within this number of iterations, a warning message will be generated. Usually this means that the chosen stepsize was too large. (default = 4)
tol scalar Convergence tolerance: controls to what level the solution must converge when performing the symmetrized split-step iterations in each step. (default = 10^–5)
Output Arguments
u1x, u1y
vector (N) Output optical field, specified as two length-N vectors.
Several internal options of the routine can be controlled by separately invoking sspropvc with a single argument:
sspropvc -savewisdom
sspropvc -forgetwisdom
sspropvc -loadwisdom
The first command will save the accumulated FFTW wisdom to a file that can be later used. The second command causes sspropvc to forget all of the accumulated wisdom. The last command forces FFTW to load the wisdom file from the current directory. The wisdom file (if it exists) is automatically loaded the first time sspropvc is executed. The name of the wisdom file is “fftw-wisdom.dat” for the double-precision version of the program and “fftwf-wisdom.dat” for the single-precision version. This can be changed by recompiling the code. The wisdom files can be and are shared between the vector and scalar versions of SSPROP. Note that the wisdom files are platform- and machine-specific. You should not expect optimal performance if you use wisdom files that were generated on a different computer.
The following four commands can be used to designate the planner method used by the FFTW routines in subsequent calls to sspropvc.
sspropvc -estimate
sspropvc -measure
sspropvc -patient
sspropvc -exhaustive
The default method is patient. These settings are reset when the function is cleared or when Matlab is restarted.
These options are only available in the compiled version of the routine.
Slowly-varying Envelope: In the scalar version of SSPROP, it is customary to factor out the rapidly oscillating terms exp(i(β0z – ωt)) from the field in order to obtain an equation for the slowly-varying envelope. In SSPROP, this is achieved by setting the first argument of the dispersion polynomial betap(1) equal to 0. In a fiber that has birefringence, it is no longer clear how to factor out these rapid oscillations: should we use β0x or β0y? One approach is to factor out exp(iβ0xz) from the x-component of the field and exp(iβ0yz) from the y-component of the field. However, with this definition we can no longer regard u0x and u0y as a Jones vector that describes the polarization state. Therefore, we instead choose to factor out a common phase exp(iβ0avgz) variation from both components of the field. Provided we choose β0avg to be the average of β0x and β0y, the resulting fields ux and uy will still be slowly-varying envelopes that describe the instantaneous Jones vector of the optical signal. In SSPROP, this is accomplished numerically by choosing betapa(1) and betapb(1) to be equal and opposite such that betapa(1) – betapb(1) = Δβ0.
Moving Reference Frames: A similar consideration applies to the difference in group velocity. In a birefringent fiber, the group velocities can be different for the x- and y- polarizations. Therefore we solve the nonlinear Schrödinger equations in a reference frame that is moving at a velocity in between vx and vy. This amounts to making a change of variables T = t – β1avgz, where β1avg is the average value of β1. In SSPROP, this is accomplished numerically by choosing betapa(2) and betapb(2) to be equal and opposite such that betapa(2) – betapb(2) = Δβ1.
Units and Dimensions: The dimensions of the input and output quantities are arbitrary, as long as they are self-consistent. For example, if |u0|^2 has dimensions of Watts and dz has dimensions of meters, then the nonlinearity parameter gamma should be specified in W^-1 m^-1. Similarly, if dt is given in picoseconds, and dz is given in meters, then the dispersion polynomial coefficients betap(n) should have dimensions of ps^(n-1)/m. It is of course possible to solve the normalized dimensionless nonlinear Schrödinger equation by setting some of the input terms to 1 or –1 as appropriate.
Periodicity: SSPROP uses the FFT (DFT) to calculate the spectrum, which implies that the input and output signals are periodic in time. The periodicity is determined by the time increment and the length of the input vector, T = dt*length(u0x). Because of the periodic boundary conditions used by the DFT, care must be taken to ensure that if the optical field at the edges of the window is not negligible it must be continuous in both magnitude and phase.
Iterations and Tolerance: The last two optional parameters, maxiter and tol, are related to the symmetrized split-step iteration algorithm. The algorithm uses a trapezoidal integration rule to approximate the effect of the nonlinearity over a distance dz, but this approximation requires knowledge of the field at the subsequent distance-step. This problem is solved by using an iterative approach. maxiter represents the maximum number of iterations performed per step, and tol is a positive dimensionless number that tells the algorithm what level of convergence is required before the iteration stops. |
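For readers unfamiliar with the algorithm, the following scalar Python sketch (not the SSPROP code itself, birefringence ignored, and with the propagation equation and sign conventions in the docstring assumed) illustrates one symmetrized step and the trapezoidal iteration that maxiter and tol control:

```python
import numpy as np

def split_step(u, dt, dz, beta2, gamma, maxiter=4, tol=1e-5):
    """One symmetrized split-step for i*u_z = (beta2/2)*u_tt - gamma*|u|^2*u (scalar sketch).

    A linear half-step is applied in the spectral domain, the nonlinear phase over dz
    is estimated with a trapezoidal rule and refined iteratively, then a second
    linear half-step completes the step.
    """
    w = 2 * np.pi * np.fft.fftfreq(len(u), d=dt)           # angular frequency grid
    half_linear = np.exp(0.5j * (beta2 / 2) * w**2 * dz)   # half-step linear operator

    u_half = np.fft.ifft(half_linear * np.fft.fft(u))      # first linear half-step
    u_new = u_half
    for _ in range(maxiter):
        # trapezoidal estimate of the nonlinear phase accumulated over dz
        phase = 0.5j * gamma * (np.abs(u_half)**2 + np.abs(u_new)**2) * dz
        u_next = np.fft.ifft(half_linear * np.fft.fft(u_half * np.exp(phase)))
        if np.linalg.norm(u_next - u_new) <= tol * np.linalg.norm(u_new):
            return u_next
        u_new = u_next
    print("warning: step did not converge; consider reducing dz")
    return u_new
```

In the real vector code, the linear operators ha and hb differ for the two eigenstates and the full nonlinear coupling between the two polarization components is retained.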
f2e1466989e5bb25 | Wednesday, 10 December 2014
The Radiating Atom 7: Quantum Electro Dynamics Without Infinities?
The interaction between matter in the form of an atom and light as an electromagnetic wave is supposedly described by Quantum Electrodynamics (QED), a generalization of quantum mechanics into the "jewel of physics" according to Feynman as its main creator. However, QED was from the start loaded with infinities requiring "renormalization", which made the value of the jewel as a "strange theory" questionable according to Feynman himself.
Let us see what we can say from the experience of the present series of posts on The Radiating Atom leading to the following Schrödinger equation for a radiating Hydrogen atom subject to exterior forcing:
• $\dot\psi + H\phi -\gamma\dddot\phi = f$, (1)
• $-\dot\phi + H\psi -\gamma\dddot\psi = g$, (2)
where $\psi = \psi (x,t)$ and $\phi = \phi (x,t)$ are real-valued functions of space-time coordinates $(x,t)$ (as the real and imaginary parts of Schrödinger's complex-valued electronic wave function $\psi +i\phi$), $\dot\psi =\frac{\partial\psi}{\partial t}$,
• $H=-\frac{h^2}{2m}\Delta + V(x)$
is the Hamiltonian with $\Delta$ the Laplacian with respect to $x$, $V(x)=-\frac{1}{\vert x\vert}$ the kernel potential, $m$ the electron mass and $h$ Planck's constant, $-\gamma\dddot\phi$ is an Abraham-Lorentz radiation recoil force with corresponding radiation energy $\gamma\ddot\phi^2$ with $\gamma$ a small positive radiation coefficient, and $f=f(x,t)$ and $g=g(x,t)$ express exterior forcing. Note that here the electron wave function is coupled to radiation and forcing through a radiative damping modeled by $(-\gamma\dddot\phi ,-\gamma\dddot\psi )$ and the right hand side $(f,g)$, and not through a time-dependent potential connecting an incoming electric field to an electronic dipole moment, which is a common alternative. An advantage of the above more phenomenological model is simpler mathematical analysis since the potential is kept independent of time.
The system (1)-(2) can be viewed as a generalized harmonic oscillator with small radiative damping subject to exterior forcing similar to the system analyzed in Mathematical Physics of Black Body Radiation. The essence of this analysis is a balance of forcing and radiation (cf. PS5 below):
• $R \equiv\int\gamma (\ddot\psi^2 +\ddot\phi^2)dxdt\approx \int (f^2 + g^2)dxdt$,
which can be viewed to express that $output \approx input$.
A radiating atom with wave function $(\psi ,\phi )$ can be viewed to interact with an electromagnetic field $(E,B)$ through the charge density
• $\rho (x,t) =\psi^2(x,t) + \phi^2(x,t)$,
according to Maxwell's equations:
• $\dot B + \nabla\times E = 0$, $\nabla\cdot B =0$,
• $-\dot E + \nabla\times B = J$, $\nabla\cdot E =\rho$,
with $J$ a corresponding current. For a superposition of two pure eigen-states with eigenvalues $E_1$ and $E_2$ the charge density varies in time with frequency $\omega =(E_2 -E_1)/h$ and then as an electrical dipole generates outgoing radiation
• $P\sim\omega^4$,
which is balanced by the radiation damping in Schrödinger's equation
• $R=\int\gamma (\ddot\psi^2 +\ddot\phi^2)dxdt\sim\omega^4$.
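A small numerical check of this beating, sketched below with two arbitrary 1D functions and assumed energies ($\hbar = 1$) standing in for the eigenstates: the dipole moment of the superposition oscillates at the difference frequency.

```python
import numpy as np

# Two arbitrary normalized 1D functions standing in for eigenstates (illustrative only)
x = np.linspace(-10, 10, 401)
psi1 = np.exp(-x**2 / 2);       psi1 /= np.sqrt(np.trapz(psi1**2, x))
psi2 = x * np.exp(-x**2 / 2);   psi2 /= np.sqrt(np.trapz(psi2**2, x))
E1, E2 = 1.0, 3.0               # assumed energies, hbar = 1

t = np.linspace(0.0, 200.0, 20001)
dipole = np.array([np.trapz(x * np.abs(psi1 * np.exp(-1j * E1 * tk)
                                       + psi2 * np.exp(-1j * E2 * tk))**2, x) / 2
                   for tk in t])

# Dominant oscillation frequency of the dipole moment
freqs = 2 * np.pi * np.fft.rfftfreq(len(t), d=t[1] - t[0])
peak = freqs[np.argmax(np.abs(np.fft.rfft(dipole - dipole.mean())))]
print(f"dipole oscillates at omega ≈ {peak:.2f}, expected E2 - E1 = {E2 - E1:.2f}")
```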
The above QED model combining Schrödinger's equation for an atom with Maxwell's equations for an electro-magnetic field, thus explains the physics of
1. an electron configuration as a superposition of two pure eigen-states of different energies,
2. which generates a time variable charge/electrical dipole,
3. which generates an electro-magnetic field,
4. which generates outgoing radiation,
5. under exterior forcing.
The analysis in Mathematical Physics of Black Body Radiation shows that in this system
• $P \approx R\approx \int (f^2 + g^2)dxdt$, that is,
• outgoing radiation $\approx$ radiative damping $\approx$ exterior forcing.
The fact that outgoing radiation $\approx$ exterior forcing makes it possible to reverse the physics (1) from an atom generating outgoing radiation as an electromagnetic field (emission) into (2) a model of the reaction of an atom subject to an incoming electro-magnetic field (absorption). This is the same reversal that can be made to use a loudspeaker as a microphone (or that an antenna reradiates about half what it absorbs, allowing Swedish Television agents to detect individual watchers and check if the TV-license has been paid).
Note that the physics of (1) may be easier to explain/understand than (2), since outgoing radiation/emission can be observed, while atomic absorption of incoming electro-magnetic waves is hidden to inspection. On the other hand if (2) is just the other side of (1), then explaining/understanding (1) may be sufficient.
The analysis thus offers an explanation of self-interaction without a catastrophe of acoustic feedback between loudspeaker and microphone, which may be at the origin of the infinities troubling Feynman's jewel of physics QED, with photons being emitted and possibly directly being reabsorbed in a form of catastrophic photonic feedback.
PS1 The radiation damping $-\gamma\dddot\psi$ may alternatively take the form
$\gamma \vert\dot\rho\vert^2\dot\psi$, with again $R\sim \omega^4$ for a superposition of eigen-states, and $R=0$ for a pure eigen-state with $\dot\rho =0$. Compare PS5 below.
PS2 The basic conservation laws built into (1)-(2) with $f=g=0$ are (with PS1)
• $\frac{d}{dt}\int\rho (x,t)dx =0$ (conservation of charge),
• $\frac{d}{dt}\int (\psi H\psi +\phi H\phi)dx = -\int\gamma\vert\dot\rho\vert^2(\dot\psi^2+\dot\phi^2)dx$ (radiative damping of energy).
PS3 Feynman states in the above book:
• It is very important to know that light behaves like particles, especially for those of you who have gone to school, where you were probably told something about light behaving like waves. I am telling you the way it does behave - like particles. ...every instrument (photomultiplier) that has been designed to be sensitive enough to detect weak light has always ended up discovering the same thing: light is made of particles.
We read that Feynman concludes that because the output of a light detector/photo-multiplier under decreasingly weak light input changes from a continuous signal to an intermittent signal to no signal, light must also be intermittent, as if composed of a stream of isolated particles. But this is a weak argument because it draws a general conclusion about the normal nature of light from an extreme situation, where blips on a screen or sound clicks are taken as evidence that what causes the blips also must be blip-like, that is, must be particles. But to draw conclusions about normality by only observing extremity or non-normality is to stretch normal scientific methodology beyond reason. In particular, the infinities troubling QED seem to originate from particle self-interaction. With light and atom instead in the form of waves, and their interaction consisting of interference of waves, self-interaction does not seem to be an issue.
PS4 The book Atoms and Light Interactions presents what its author, J. D. Dodd, refers to as a semi-classical view of the interaction of electromagnetic radiation and atoms, thus as waves and not particles (which is also my view):
• It may well be that the semiclassical view falls down at some stage and is unable to predict correctly certain phenomena; my own view is that it succeeds much more widely than it is given credit for. Even if it is not justified from the point of view of many physicists, it is still useful for another reason. Even if the quantum nature of radiation (QED) is required, the underlying physics needs a firm understanding of its classical basis.
Yes, it may well be that also atomistic physics is a form of wave mechanics and thus a form of classical continuum physics, as expressed by Zeh:
• There are no quantum jumps and nor are there any particles.
PS5 The analysis of Mathematical Physics of Black Body Radiation is more readily applicable if (1)-(2) is formulated as a second order in time wave equation of the form
• $\ddot\psi +H^2\psi + \gamma\dot\rho^2\dot\psi = F$,
with the following tentative main result as an extension of the analysis from radiative damping $-\gamma\dddot\psi$ to $\gamma\dot\rho^2\dot\psi$ (with $\gamma >0$ constant):
• $\int\gamma\dot\rho^2\dot\psi^2dxdt\approx\int F^2dxdt$.
Here $\gamma$ may have a dependence on $\psi$ to guarantee charge conservation under forcing.
21 comments:
1. Are there any attempts to calculate the hydrogen spectrum? If so, are you then using gamma as a fitting parameter, or how do you else calculate it?
2. The spectrum, as the difference of eigenvalues of the Hamiltonian divided by h, does not depend on gamma, but the presence of a positive gamma with corresponding radiative damping is what makes it possible to observe the spectrum as outgoing radiation. The precise value of gamma is insignificant.
3. Regarding your objections against Feynmans argument under PS3.
Although Feynman uses an example of a photomultiplier, that is not the only argument for the point he is making. And I'm quite positive Feynman knew that. But remember that the book is an adaption of a set of lectures he gave, aimed at the GENERAL PUBLIC (in the late 70s). There is no need to give any deeper arguments than that to give the big picture.
Starting in the 70s, and continued until today, there is a tremendous amount of different kinds of experiments that really nails the coffin for any kind of semiclassical (that is a theory of only waves) description of light, since it would be astronomically improbable for such a theory to work, regarding the experimental data.
I urge you to read the published paper
Observing the quantum behavior of light in an undergraduate laboratory
for a concrete example.
This, and similar experiments, is what you need to argue against if you want to refute the photon model.
Any objections?
4. This article states that the photoelectric and Compton effects do not give evidence of the existence of photons, and I agree. The experimental evidence is very difficult to understand, and certainly beyond undergraduate level. To say that something is impossible, like saying that a complicated experiment cannot possibly be explained by wave theory, is easy to do but probably impossible to prove. To show that something is possible is possible by just doing it, while showing impossibility is virtually impossible.
5. I strongly disagree with you.
The experiment is not at all difficult to understand. The theory behind it is highly accessible after finishing a modern introductory quantum mechanical course. And on the master level this is not hard at all.
Coincidence techniques and beam splitters are not rocket science, and the element of field theory is pretty straight forward.
So I need to ask you, difficulties aside (I don't think the universe cares if undergraduates find the theory difficult or not ;-) ), you don't seem to object to the experimental result? Any theory containing classical waves predicts a g^(2) of 1 or larger, so the experimental evidence says that a classical theory can't account for the data. If you disagree, why so?
6. The evidence is negative: It is claimed that a certain phenomenon is not readily explained by wave theory, but how can we be so sure that an explanation is impossible? Photoelectricity used to be evidence of particle nature of light, but the authors of the article do not buy that argument. To claim that something is impossible favors ignorance and inability, and that is dangerous.
7. It is claimed that a certain phenomenon is not readily explained by wave theory
Not really. From any theory with a classical EM-field, there is a prediction on the correlation function in question. Since the observed effect is lower than this prediction (by 377 standard deviations!), the classical theory must be seen as falsified. This doesn't mean that a semi-classical description can't be used. The question then is, when is it justified to use a semi-classical description as an approximation? There is no generality in a semi-classical description; whether it is justified in a given regime, and to what precision, must be judged from experimental data.
There are no quantum jumps and nor are there any particles.
I would say that you misjudge this title. We do know from quantum mechanics that there are no particles. That doesn't mean that there are waves. It is a false dichotomy. The general quantity in quantum mechanics is the quantum field, that is neither a wave nor a particle.
I don't know if you read Zeh's paper in detail, either way, you should really re/read the paragraph in detail.
It thus appears becoming evident that our classical concepts describe mere shadows on the wall of Plato's cave in which we are living. Using them for describing reality must lead to 'paradoxes'.
8. Ok, so we agree that there are no particles and then what remains is waves. To say neither particle nor wave but something else without telling what is not constructive, to me at least.
9. I did write what, an excitation in a quantum field.
10. And what is it then? In physical terms?
11. That is kind of a silly question.
Classically speaking, what is an electro-magnetic field? In physical terms?
12. A distributed function of space and time satisfying Maxwell's wave equation that is a wave, not a particle.
13. A distributed function [...]
But that doesn't say what the electromagnetic field really is, physically.
You say here how to describe the field mathematically. From a classical point of view the classical electromagnetic field is fundamental, so you can't describe it in much more detail than you do here.
It is the same with a quantum field. If you want the mathematical formalism, see for instance
An Introduction To Quantum Field Theory
If you accept the classical method of defining an electromagnetic field (I assert that you do, since you just used it that way), I can't see how you wouldn't accept the exact same method for the quantum field.
At the same time, the quantum field must be a more fundamental description, for two (at least) reasons.
First, it contains the classical fields and equations as a subset.
Second, it can be used to account for more experimental data (as discussed above).
14. We are talking about waves or particles. I say electromagnetic fields are waves as real-valued functions of space and time, not particles, which satisfy Maxwell's equations and thus can be understood by many. QFT is loaded with infinities and not understood by many, if any.
15. We are talking about waves or particles.
Not really, you are talking about waves OR particles. I tried to mention earlier that doing so is a false dichotomy, false dilemma, false duality or what you want to call it. None the less, it is a logical fallacy since you miss the situation where we have something that is more fundamental than our notions of particles and waves. Heck, you even linked to a paper that had that as a main theme (the Zeh paper and reference to a wall in Plato's cave in the conclusions).
It is a great misfortune that the name wave-particle duality still remains.
[...] which satisfy Maxwell's equations and thus can be understood by many.
I would call this a fallacy as well. Humans' ability to understand a theory can of course not have any impact on how well that theory describes reality. Maybe I misunderstand you here. If so, what do you really mean?
16. Yes, a theory which cannot be understood is not useful, like the special and general theories of relativity. I have the impression that QFT falls in the same category, but in this case I may be wrong. Anyway, I am seeking a continuum model for the radiating atom which can make sense and thus be understood.
17. In what way cannot special and general relativity be understood? The theoretical frameworks are not that complicated, especially in the case of special relativity. And I don't know of any inconsistencies either. So I must admit that I don't understand what you mean here.
18. I got a bit curious about what you think cannot be understood about the theory of relativity, looked around on this blog and found your other blog, The World As Computation. And more specifically the post
Questioning Relativity 2: Unphysical Lorentz Transformation
And I do see a little where your confusion about the theory originates.
You write there
However, the figure is misleading: The x'-axis defined by t' =0 is not parallel to the x-axis, since it is given by the line t=vx which is tilted with respect to the x-axis.
And you then conclude that the transformation must be unphysical.
This looks really strange, and I cannot see what this has to do with relativity at all. In relativity, the primed coordinate system is not given by an equation, it is just another inertial system, it is predefined, or given if you so wish.
The Lorentz transformation then, is just the passive (no physical change) transformation that relates the physical event (x,t) in one coordinate system with another coordinate system that describes the SAME PHYSICAL event as (x',t'). Given the constraint that both systems should agree numerically on a measurement of the light speed.
To be honest, this is really simple stuff. General relativity gets trickier, but is not impossible.
19. If you think that special relativity is a physical theory, then you are fooling yourself. Yes, as a mathematical theory it is simple/trivial because it is just a simple linear transformation, but it has no physical meaning and thus the simplicity you perceive is just an illusion. A meaningless theory cannot be viewed as simple.
20. but it has no physical meaning
I must ask you to be more specific in what you mean. What criteria do you use to call a theory physical and with meaning?
21. I explain this in detail in Many Minds Relativity. Take it or leave it. |
fc77411361ea4269 | All Issues
Discrete & Continuous Dynamical Systems - B
August 2017 , Volume 22 , Issue 6
Stabilization of difference equations with noisy proportional feedback control
Elena Braverman and Alexandra Rodkina
2017, 22(6): 2067-2088 doi: 10.3934/dcdsb.2017085
Given a deterministic difference equation $x_{n+1}= f(x_n)$ with a continuous $f$ increasing on $[0, b]$, $f(0) \geq 0$, we would like to stabilize any point $x^{\ast}\in (f(0), f(b))$, by introducing the proportional feedback (PF) control. We assume that PF control contains either a multiplicative $x_{n+1}= f\left((\nu + \ell\chi_{n+1})x_n \right)$ or an additive noise $x_{n+1}=f(\lambda x_n) +\ell\chi_{n+1}$. We study conditions under which the solution eventually enters some interval, treated as a stochastic (blurred) equilibrium. In addition, we prove that, for each $\varepsilon>0$, when the noise level $\ell$ is sufficiently small, all solutions eventually belong to the interval $(x^{\ast}-\varepsilon, x^{\ast}+\varepsilon)$.
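A toy simulation sketch of the multiplicative-noise scheme $x_{n+1}=f((\nu+\ell\chi_{n+1})x_n)$; the logistic-type map, gain, and noise level below are assumptions chosen only for illustration, not the authors' setup, yet trajectories visibly settle into a narrow band around a blurred equilibrium:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # illustrative map (chaotic without control); not the authors' example
    return 3.9 * x * (1.0 - x)

nu, ell, n_steps = 0.6, 0.02, 2000    # assumed control gain and noise level
x, traj = 0.3, []
for _ in range(n_steps):
    chi = rng.uniform(-1.0, 1.0)      # bounded noise
    x = f((nu + ell * chi) * x)       # multiplicative-noise PF control
    traj.append(x)

tail = np.array(traj[-500:])
print(f"late-time band: [{tail.min():.3f}, {tail.max():.3f}]")
```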
An analysis of functional curability on HIV infection models with Michaelis-Menten-type immune response and its generalization
Jeng-Huei Chen
2017, 22(6): 2089-2120 doi: 10.3934/dcdsb.2017086
Let HIV infection be modeled by a dynamical system with a Michaelis-Menten-type immune response. A functional cure refers to driving the system from a stable high-viral-load state to a stable low-viral-load state. This may occur only when at least two stable equilibrium states coexist in the system. This paper analyzes how the number of biologically meaningful equilibrium states varies with system parameters. Meanwhile, it investigates how patients' profiles of immune responses determine their clinical outcomes, with focus on functional curability. The analysis provides a criterion that a functional cure is possible only if the capability of immune stimulation starts to attenuate when the density of infected cells is below a threshold. From treatment viewpoints, such a criterion is crucial because it identifies which patients cannot use a low-viral-load state as a treatment endpoint. The deriving process also provides a method to study functional curability problems with a wider class of immune response functions and functional curability problems of similar virus infections such as chronic hepatitis B virus infection.
Synchronising and non-synchronising dynamics for a two-species aggregation model
Casimir Emako-Kazianou, Jie Liao and Nicolas Vauchelet
2017, 22(6): 2121-2146 doi: 10.3934/dcdsb.2017088
This paper deals with analysis and numerical simulations of a one-dimensional two-species hyperbolic aggregation model. This model is formed by a system of transport equations with nonlocal velocities, which describes the aggregate dynamics of a two-species population in interaction, appearing for instance in bacterial chemotaxis. Blow-up of classical solutions occurs in finite time. This raises the question of how to define measure-valued solutions for this system. To this aim, we use the duality method developed for transport equations with discontinuous velocity to prove the existence and uniqueness of measure-valued solutions. The proof relies on a stability result. In addition, this approach allows us to study the hyperbolic limit of a kinetic chemotaxis model. Moreover, we propose a finite volume numerical scheme whose convergence towards measure-valued solutions is proved. It allows for numerical simulations capturing the behaviour after blow up. Finally, numerical simulations illustrate the complex dynamics of aggregates until the formation of a single aggregate: after blow-up of classical solutions, aggregates of different species are synchronising or non-synchronising when they collide, that is, they move together or separately, depending on the parameters of the model and the masses of the species involved.
Averaging principle for the Schrödinger equations
Peng Gao and Yong Li
2017, 22(6): 2147-2168 doi: 10.3934/dcdsb.2017089
The averaging principle for the cubic nonlinear Schrödinger equations with rapidly oscillating potential and rapidly oscillating force is obtained, both on finite but large time intervals and on the entire time axis. This includes a comparison estimate, a stability estimate, and a convergence result between the nonlinear Schrödinger equation and its averaged equation. Furthermore, the existence of almost periodic solutions for cubic nonlinear Schrödinger equations is also investigated.
Domain control of nonlinear networked systems and applications to complex disease networks
Suoqin Jin, Fang-Xiang Wu and Xiufen Zou
2017, 22(6): 2169-2206 doi: 10.3934/dcdsb.2017091
The control of complex nonlinear dynamical networks is an ongoing challenge in diverse contexts ranging from biology to social sciences. To explore a practical framework for controlling nonlinear dynamical networks based on meaningful physical and experimental considerations, we propose a new concept of the domain control for nonlinear dynamical networks, i.e., the control of a nonlinear network in transition from the domain of attraction of an undesired state (attractor) to the domain of attraction of a desired state. We theoretically prove the existence of a domain control. In particular, we offer an approach for identifying the driver nodes that need to be controlled and design a general form of control functions for realizing domain controllability. In addition, we demonstrate the effectiveness of our theory and approaches in three realistic disease-related networks: the epithelial-mesenchymal transition (EMT) core network, the T helper (Th) differentiation cellular network and the cancer network. Moreover, we reveal certain genes that are critical to phenotype transitions of these systems. Therefore, the approach described here not only offers a practical control scheme for nonlinear dynamical networks but also helps the development of new strategies for the prevention and treatment of complex diseases.
Stability and traveling wavefronts for a convolution model of mistletoes and birds with nonlocal diffusion
Huimin Liang, Peixuan Weng and Yanling Tian
2017, 22(6): 2207-2231 doi: 10.3934/dcdsb.2017093
A convolution model of mistletoes and birds with nonlocal diffusion is considered in this paper. We first consider the stability of the constant steady states of the model by the linearized method, and then the existence of traveling wave solutions. The main aim of this article is to overcome the difficulty lying in the construction of upper-lower solutions for the wave profile system. With the help of an additional condition, we at last obtain a pair of upper-lower solutions. A constant $c_{*}>0$ is obtained such that traveling wavefronts exist for $c\geq c_{*}$. In the construction, we take advantage of the relation between the two components of the principal eigenvector for the linearized system to control the two components of the upper solution. The method seems novel. Some simulations and discussions are given to illustrate the applications of our main results and the effect of parameters on $c_{*}$. A comparison for $c_{*}$ is also given with two different kernel functions.
Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source
Ke Lin and Chunlai Mu
2017, 22(6): 2233-2260 doi: 10.3934/dcdsb.2017094
In this paper, we consider a system of three parabolic equations in high-dimensional smoothly bounded domain
which describes the mutual competition between two populations on account of the Lotka-Volterra dynamics.
For any cross-diffusivities $\chi_1>0$ and $\chi_2>0$ and the rates $a_1>0$ and $a_2>0$, it is proved that the global classical bounded solutions exist for sufficiently regular initial data when the parameters $\mu_1$ and $\mu_2$ are sufficiently large. In deriving the convergence of solutions to this system, we need to distinguish two cases, $a_1, a_2\in[0, 1)$ and $a_1>1$, $0\leq a_2 < 1$, to prove global asymptotic stability.
Seasonal forcing and exponential threshold incidence in cholera dynamics
Jinhuo Luo, Jin Wang and Hao Wang
2017, 22(6): 2261-2290 doi: 10.3934/dcdsb.2017095
We propose a seasonal forcing iSIR (indirectly transmitted SIR) model with a modified incidence function, due to the fact that the seasonal fluctuations can be the main culprit for cholera outbreaks. For this nonautonomous system, we provide a sufficient condition for the persistence and the existence of a periodic solution. Furthermore, we provide a sufficient condition for the global stability of the periodic solution. Finally, we present some simulation examples for both autonomous and nonautonomous systems. Simulation results exhibit dynamical complexities, including the bistability of the autonomous system, an unexpected outbreak of cholera for the nonautonomous system, and possible outcomes induced by sudden weather events. Comparatively the nonautonomous system is more realistic in describing the indirect transmission of cholera. Our study reveals that the relative difference between the value of immunological threshold and the peak value of bacterial biomass is critical in determining the dynamical behaviors of the system.
Blow-up phenomena for nonlinear pseudo-parabolic equations with gradient term
Monica Marras, Stella Vernier-Piro and Giuseppe Viglialoro
2017, 22(6): 2291-2300 doi: 10.3934/dcdsb.2017096
This paper is concerned with the pseudo-parabolic problem
where $\Omega$ is a bounded domain in $\mathbb{R}^n, \ n\geq 2$, with smooth boundary $\partial \Omega$, and $k$ is a positive constant or, in general, a positive differentiable function of $t$. The solution $u(x,t)$ may or may not blow up in finite time. Under suitable conditions on the data, a lower bound for $t^*$ is derived, where $[0,t^*)$ is the time interval of existence of $u(x,t).$ We indicate how some of our results can be extended to a class of nonlinear pseudo-parabolic systems.
Boundedness and asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity
Masaaki Mizukami
2017, 22(6): 2301-2319 doi: 10.3934/dcdsb.2017097
This paper deals with the two-species chemotaxis-competition system
where $\Omega$ is a bounded domain in $\mathbb{R}^n$ with smooth boundary $\partial \Omega$, $n\in \mathbb{N}$; $h$, $\chi_i$ are functions satisfying some conditions. In the case that $\chi_i(w)=\chi_i$, Bai–Winkler [1] proved asymptotic behavior of solutions to the above system under some conditions which roughly mean largeness of $\mu_1, \mu_2$. The main purpose of this paper is to extend the previous method for obtaining asymptotic stability. As a result, the present paper improves the conditions assumed in [1], i.e., the ranges of $\mu_1, \mu_2$ are extended.
Asymptotic behaviors of Green-Sch potentials at infinity and its applications
Lei Qiao
2017, 22(6): 2321-2338 doi: 10.3934/dcdsb.2017099
The first aim in this paper is to deal with asymptotic behaviors of Green-Sch potentials in a cylinder. As an application we prove the integral representation of nonnegative weak solutions of the stationary Schrödinger equation in a cylinder. Next we give asymptotic behaviors of them outside an exceptional set. Finally we obtain a quantitative property of rarefied sets with respect to the stationary Schrödinger operator at $+\infty$ in a cylinder. Meanwhile we show that the reverse of this property is not true.
On the Kolmogorov entropy of the weak global attractor of 3D Navier-Stokes equations: I
Yong Yang and Bingsheng Zhang
2017, 22(6): 2339-2350 doi: 10.3934/dcdsb.2017101
One particular metric that generates the weak topology on the weak global attractor $\mathcal{A}_w$ of three dimensional incompressible Navier-Stokes equations is introduced and used to obtain an upper bound for the Kolmogorov entropy of $\mathcal{A}_w$. This bound is expressed explicitly in terms of the physical parameters of the fluid flow.
Oscillation theorems for impulsive parabolic differential system of neutral type
Min Zou, An-Ping Liu and Zhimin Zhang
2017, 22(6): 2351-2363 doi: 10.3934/dcdsb.2017103
In this paper, oscillatory properties of solutions to a nonlinear impulsive parabolic differential system of neutral type are investigated. A series of sufficient conditions are established for problems with Robin and Dirichlet boundary conditions. Examples are provided to confirm the validity of the analysis.
Stability and Hopf bifurcation of an HIV infection model with saturation incidence and two delays
Hui Miao, Zhidong Teng and Chengjun Kang
2017, 22(6): 2365-2387 doi: 10.3934/dcdsb.2017121
In this paper, the dynamical behaviors of a viral infection model with cytotoxic T-lymphocyte (CTL) immune response, immune response delay and production delay are investigated. The threshold values for virus infection and immune response are established. By means of Lyapunov functionals methods and LaSalle's invariance principle, sufficient conditions for the global stability of the infection-free and CTL-absent equilibria are established. Global stability of the CTL-present infection equilibrium is also studied when there is no immune delay in the model. Furthermore, to deal with the local stability of the CTL-present infection equilibrium in a general case with two delays being positive, we extend an existing geometric method to treat the associated characteristic equation. When the two delays are positive, we show some conditions for Hopf bifurcation at the CTL-present infection equilibrium by using the immune delay as a bifurcation parameter. Numerical simulations are performed in order to illustrate the dynamical behaviors of the model.
Numerical solutions of viscoelastic bending wave equations with two term time kernels by Runge-Kutta convolution quadrature
Da Xu
2017, 22(6): 2389-2416 doi: 10.3934/dcdsb.2017122
In this paper, we study the numerical solutions of viscoelastic bending wave equations
for $ 0<x<1,~ 0<t\leq T $, with self-adjoint boundary and initial value conditions, in which the functions $ \beta_{1}(t) $ and $ \beta_{2}(t) $ are completely monotonic on $ (0,~\infty) $ and locally integrable, but not constant. The equations are discretised in space by the finite difference method and in time by the Runge-Kutta convolution quadrature. The stability and convergence of the schemes are analyzed by the frequency domain and energy methods. Numerical experiments are provided to illustrate the accuracy and efficiency of the proposed schemes.
Limit cycle bifurcations for piecewise smooth integrable differential systems
Jihua Yang and Liqin Zhao
2017, 22(6): 2417-2425 doi: 10.3934/dcdsb.2017123
In this paper, we study a class of piecewise smooth integrable non-Hamiltonian systems, which has a center. By using the first order Melnikov function, we give an exact number of limit cycles which bifurcate from the above periodic annulus under the polynomial perturbation of degree n.
Invariant measures for complex-valued dissipative dynamical systems and applications
Xin Li, Wenxian Shen and Chunyou Sun
2017, 22(6): 2427-2446 doi: 10.3934/dcdsb.2017124
In this work, we extend the classical real-valued framework to deal with complex-valued dissipative dynamical systems. With our new complex-valued framework and using generalized complex Banach limits, we construct invariant measures for continuous complex semigroups possessing global attractors. In particular, for any given complex Banach limit and initial data $u_{0}$, we construct a unique complex invariant measure $\mu$ on a metric space which is acted on by a continuous semigroup $\{S(t)\}_{t\geq 0}$ possessing a global attractor $\mathcal{A}$. Moreover, it is shown that the support of $\mu$ is not only contained in the global attractor $\mathcal{A}$ but also in $\omega(u_{0})$. Next, the structure of the measure $\mu$ is studied. It is shown that both the real and imaginary parts of a complex invariant measure are invariant signed measures and that both the positive and negative variations of a signed measure are invariant measures. Finally, we illustrate the main results of this article on the model examples of a complex Ginzburg-Landau equation and a nonlinear Schrödinger equation and construct complex invariant measures for these two complex-valued equations.
Nonsmooth frameworks for an extended Budyko model
Anna M. Barry, Esther Widiasih and Richard McGehee
2017, 22(6): 2447-2463. doi: 10.3934/dcdsb.2017125
In latitude-dependent energy balance models, ice-free and ice-covered conditions form physical boundaries of the system. With carbon dioxide treated as a bifurcation parameter, the resulting bifurcation diagram is nonsmooth with curves of equilibria and boundaries forming corners at points of intersection. Over long time scales, atmospheric carbon dioxide varies dynamically and the nonsmooth diagram becomes a set of quasi-equilibria. However, when introducing carbon dynamics, care must be taken with the physical boundaries and appropriate boundary motion specified. In this article, we extend an energy balance model to include slowly varying carbon dioxide and develop nonsmooth frameworks based on physically relevant boundary dynamics. Within these frameworks, we prove existence and uniqueness of solutions, as well as invariance of the region of phase space bounded by ice-free and ice-covered states.
Chiellini integrability condition, planar isochronous systems and Hamiltonian structures of Liénard equation
A. Ghose Choudhury and Partha Guha
2017, 22(6): 2465-2478. doi: 10.3934/dcdsb.2017126
Using a novel transformation involving the Jacobi Last Multiplier (JLM) we derive an old integrability criterion due to Chiellini for the Liénard equation. By combining the Chiellini condition for integrability and Jacobi's Last Multiplier, the Lagrangian and Hamiltonian of the Liénard equation are derived. We also show that the Kukles equation is the only equation in the Liénard family which satisfies both the Chiellini integrability condition and the Sabatini criterion for isochronicity. In addition we examine this result by mapping the Liénard equation to a harmonic oscillator equation, tacitly using Chiellini's condition. Finally we provide a metriplectic and complex Hamiltonian formulation of the Liénard equation through the use of the Chiellini condition for integrability.
Stationarity and periodicity of positive solutions to stochastic SEIR epidemic models with distributed delay
Qun Liu, Daqing Jiang, Ningzhong Shi, Tasawar Hayat and Ahmed Alsaedi
2017, 22(6): 2479-2500. doi: 10.3934/dcdsb.2017127
In this paper, we consider two SEIR epidemic models with distributed delay in random environments. First of all, by constructing a suitable stochastic Lyapunov function, we obtain the existence of stationarity of the positive solution to the stochastic autonomous system. Then we establish sufficient conditions for extinction of the disease. Finally, by using Khasminskii's theory of periodic solutions, we prove that the stochastic nonautonomous epidemic model admits at least one nontrivial positive T-periodic solution under a simple condition.
Quasi-periodic solutions of generalized Boussinesq equation with quasi-periodic forcing
Yanling Shi, Junxiang Xu and Xindong Xu
2017, 22(6): 2501-2519. doi: 10.3934/dcdsb.2017104
In this paper, one-dimensional quasi-periodically forced generalized Boussinesq equation
with hinged boundary conditions is considered, where $\varepsilon$ is a small positive parameter and $\phi(t)$ is a real analytic quasi-periodic function in $t$ with frequency vector $\omega=(\omega_1,\omega_2,\cdots,\omega_m)$. It is proved that, under a suitable hypothesis on $\phi(t)$, there are many quasi-periodic solutions for the above equation via KAM theory.
Schrödinger's Cat
Erwin Schrödinger's intention for his infamous cat-killing box was to discredit certain non-intuitive implications of quantum mechanics, of which his wave mechanics was the second formulation. Schrödinger's wave mechanics is more continuous mathematically, and apparently more deterministic, than Werner Heisenberg's matrix mechanics.
Schrödinger did not like Niels Bohr's idea of "quantum jumps" between Bohr's "stationary states" - the different "energy levels" in an atom. Bohr's "quantum postulate" said that the jumps between discrete states emitted (or absorbed) energy in the amount hν = E2 - E1.
Bohr did not accept Albert Einstein's 1905 hypothesis that the radiation was a spatially localized quantum of energy hν. Until well into the 1920's, Bohr (and Max Planck, the inventor of the quantum hypothesis himself) believed radiation was a continuous wave. This was the question of wave-particle duality, which Einstein saw as early as 1909.
It was Einstein who originated the suggestion that the superposition of Schrödinger's wave functions implied that two different physical states could exist at the same time. This was a serious interpretational error that plagues the foundation of quantum physics to this day.
This error is found frequently in discussions of so-called "entangled" states (see the Einstein-Podolsky-Rosen experiment).
Entanglement occurs only for atomic level phenomena and over limited distances that preserve the coherence of two-particle wave functions by isolating the systems (and their eigenfunctions) from interactions with the environment.
We never actually "see" or measure any system (whether a microscopic electron or a macroscopic cat) in two distinct states. Quantum mechanics simply predicts a significant probability of the system being found in these different states. And these probability predictions are borne out by the statistics of large numbers of identical experiments.
The Pauli Exclusion Principle says (correctly) that two identical indistinguishable (fermion) particles cannot be in the same place at the same time. Entanglement is often interpreted (incorrectly) as saying that a single particle can be in two places at the same time. Dirac's Principle of Superposition does not say that a particle is in two states at the same time, only that there is a non-zero probability of finding it in either state should it be measured.
Max Born described the somewhat paradoxical result:
The motion of the particle follows the laws of probability, but the probability itself propagates in accord with causal laws.
Einstein wrote to Schrödinger with the idea that the decay of a radioactive nucleus could be arranged to set off a large explosion. Since the moment of decay is unknown, Einstein argued that the superposition of decayed and undecayed nuclear states implies the superposition of an explosion and no explosion. It does not. In both the microscopic and macroscopic cases, quantum mechanics simply estimates the probability amplitudes for the two cases.
Many years later, Richard Feynman made Einstein's suggestion into a nuclear explosion! (What is it about some scientists?)
Einstein and Schrödinger did not like the fundamental randomness implied by quantum mechanics. They wanted to restore determinism to physics. Indeed Schrödinger's wave equation predicts a perfectly deterministic time evolution of the wave function. But what is evolving deterministically is only abstract probabilities. And these probabilities are confirmed only in the statistics of large numbers of identically prepared experiments. Randomness enters only when a measurement is made and the wave function "collapses" into one of the possible states of the system.
Schrödinger devised a variation in which the random radioactive decay would kill a cat. Observers cannot know what happened until the box is opened.
The details of the tasteless experiment include:
• a Geiger counter which produces an avalanche of electrons when an alpha particle passes through it
• a bit of radioactive material whose half-life makes it likely to emit an alpha particle in the direction of the Geiger counter during a time T
• an electrical circuit energized by the electrons which drops a hammer
• a flask of a deadly hydrocyanic acid gas, smashed open by the hammer.
The gas will kill the cat, but the exact time of death is unpredictable and random because of the irreducible quantum indeterminacy in the time of decay (and the direction of the decay particle, which might miss the Geiger counter!).
This thought experiment is widely misunderstood. It was meant (by both Einstein and Schrödinger) to suggest that quantum mechanics describes the simultaneous (and obviously contradictory) existence of a live and dead cat. Here is the famous diagram with a cat both dead and alive.
What's wrong with this picture?
Quantum mechanics claims only that the time evolution of the Schrödinger wave functions for the probability amplitudes of nuclear decay accurately predict the proportion of nuclear decays that will occur in a given time interval.
(Classical) probabilities (no interference between terms) simply predict the number of live and dead cats that will be observed in a large number of identical experiments.
Quantum "probability amplitudes" do allow interference between the possible states of a quantum object, but not between macroscopic objects like live and dead cats
More specifically, quantum mechanics provides us with the accurate prediction that if this experiment is repeated many times (the SPCA would disapprove), half of the experiments will result in dead cats.
Note that this is a problem in epistemology. What knowledge is it that quantum physics provides?
If we open the box at the time T when there is a 50% probability of an alpha particle emission, the most a physicist can know is that there is a 50% chance that the radioactive decay will have occurred and the cat will be observed as dead or dying.
If the box were opened earlier, say at T/2, there is only about a 29% chance (since the survival probability is 2^(-1/2) ≈ 0.71 under exponential decay with half-life T) that the cat has died. Schrödinger's superposition of live and dead cats would look like this.
If the box were opened later, say at 2T, there is only a 25% chance that the cat is still alive. Quantum mechanics is giving us only statistical information - knowledge about probabilities.
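To make the arithmetic in the last few paragraphs concrete, here is a minimal sketch (added here for illustration, not part of the original article) that computes the decay probabilities at T/2, T, and 2T under the assumption of simple exponential decay with half-life T; the variable names are illustrative.

```python
import math

def p_decayed(t, half_life):
    """Probability that the nucleus has decayed by time t,
    assuming simple exponential decay with the given half-life."""
    return 1.0 - 0.5 ** (t / half_life)

T = 1.0  # take the half-life as the unit of time
for t in (T / 2, T, 2 * T):
    print(f"t = {t:>4} T: P(decayed) = {p_decayed(t, T):.3f}, "
          f"P(cat alive) = {1 - p_decayed(t, T):.3f}")

# t = 0.5 T: P(decayed) ≈ 0.293, P(cat alive) ≈ 0.707
# t = 1.0 T: P(decayed) = 0.500, P(cat alive) = 0.500
# t = 2.0 T: P(decayed) = 0.750, P(cat alive) = 0.250
```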
Schrödinger is simply wrong that the mixture of nuclear wave functions that accurately describes decay can be magnified to the macroscopic world to describe a similar mixture of live cat and dead cat wave functions and the simultaneous existence of live and dead cats.
The kind of coherent superposition of states needed to describe an atomic system as in a linear combination of states (see Paul Dirac's explanation of superposition using three polarizers) does not describe macroscopic systems.
Instead of a linear combination of pure quantum states, with quantum interference between the states, i.e.,
| Cat > = ( 1/√2) | Live > + ( 1/√2) | Dead >,
quantum mechanics tells us only that there is 50% chance of finding the cat in either the live or dead state, i.e.,
Cats = (1/2) Live + (1/2) Dead.
Just as in the quantum case, this probability prediction is confirmed by the statistics of repeated identical experiments, but no interference between these states is seen.
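The difference between the two expressions above can be made concrete with a small numerical sketch (an illustration added here, not part of the original article). It uses a generic two-state quantum system, with |0⟩ and |1⟩ standing in for the two branches: a coherent superposition and a 50/50 classical mixture give the same 50/50 statistics when the branches are measured directly, but only the superposition shows interference when probed in a rotated basis.

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)      # coherent superposition of the two branches

# Density matrices: coherent superposition vs. 50/50 classical mixture.
rho_sup = np.outer(plus, plus.conj())
rho_mix = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

proj_0 = np.outer(ket0, ket0.conj())      # measurement in the original basis
proj_plus = np.outer(plus, plus.conj())   # measurement that is sensitive to interference

for name, rho in [("superposition", rho_sup), ("mixture", rho_mix)]:
    p0 = np.trace(rho @ proj_0).real
    p_plus = np.trace(rho @ proj_plus).real
    print(f"{name:13s}: P(0) = {p0:.2f}, P(+) = {p_plus:.2f}")

# superposition: P(0) = 0.50, P(+) = 1.00   <- the interference term shows up
# mixture      : P(0) = 0.50, P(+) = 0.50   <- no interference, classical odds
```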
What do exist simultaneously in the macroscopic world are genuine alternative possibilities for future events. There is the real possibility of a live or dead cat in any particular experiment. Which one is found is irreducibly random, unpredictable, and a matter of pure chance.
The existence of genuine alternative possibilities is what bothered physicists like Einstein, Schrödinger, and Max Planck, who wanted a return to deterministic physics. It also bothers determinist and compatibilist philosophers who have what William James calls an "antipathy to chance." Ironically, it was Einstein himself, in 1916, who discovered the existence of irreducible chance, in the elementary interactions of matter and radiation.
Until the information comes into existence, the future is indeterministic. Once information is macroscopically encoded, the past is determined.
How does information physics resolve the paradox?
As soon as the alpha particle sets off the avalanche of electrons in the Geiger counter (an irreversible event with a significant entropy increase), new information is created in the world.
For example, a simple pen-chart recorder attached to the Geiger counter could record the time of decay, which a human observer could read at any later time. Notice that, as usual in information creation, the energy expended by a recorder increases the entropy more than the increased information decreases it, thus satisfying the second law of thermodynamics.
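A rough back-of-the-envelope check of that claim, assuming Landauer's bound for the entropy equivalent of one recorded bit and an assumed, order-of-magnitude figure of one microjoule dissipated by a Geiger-counter avalanche (both the one-microjoule figure and the room-temperature value are illustrative assumptions, not from the original text):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T_room = 300.0          # assumed ambient temperature, K

# Minimum entropy decrease associated with one recorded bit (Landauer bound).
s_one_bit = k_B * math.log(2)            # ~9.6e-24 J/K

# Illustrative (assumed) energy dissipated by the Geiger avalanche.
E_avalanche = 1e-6                        # J, order-of-magnitude guess
s_avalanche = E_avalanche / T_room        # ~3.3e-9 J/K

print(f"Entropy equivalent of one bit : {s_one_bit:.2e} J/K")
print(f"Entropy of the avalanche      : {s_avalanche:.2e} J/K")
print(f"Ratio                         : {s_avalanche / s_one_bit:.1e}")
# The dissipation exceeds the one-bit figure by roughly fourteen orders of
# magnitude, so the second law is comfortably satisfied by the recording.
```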
Even without a mechanical recorder, the cat's death sets in motion biological processes that constitute an equivalent, if gruesome, recording. When a dead cat is the result, a sophisticated autopsy can provide an approximate time of death, because the cat's body is acting as an event recorder. There never is a superposition (in the sense of the simultaneous existence) of live and dead cats.
The paradox points clearly to the Information Philosophy solution to the problem of measurement. Human observers are not required to make measurements. In this case, the cat is the observer.
In most physics measurements, the new information is captured by apparatus well before any physicist has a chance to read any dials or pointers that indicate what happened. Indeed, in today's high-energy particle interaction experiments, the data may be captured but not fully analyzed until many days or even months of computer processing establishes what was observed. In this case, the experimental apparatus is the observer.
And, in general, the universe is its own observer, able to record (and sometimes preserve) the information created.
The basic assumption made in Schrödinger's cat thought experiments is that the deterministic Schrödinger equation describing a microscopic superposition of decayed and non-decayed radioactive nuclei evolves deterministically into a macroscopic superposition of live and dead cats.
But since the essence of a "measurement" is an interaction with another system (quantum or classical) that creates information to be seen (later) by an observer, the interaction between the nucleus and the cat is more than enough to collapse the wave function. Calculating the probabilities for that collapse allows us to estimate the probabilities of live and dead cats. These are probabilities, not probability amplitudes. They do not interfere with one another.
After the interaction, they are not in a superposition of states. We always have either a live cat or a dead cat, just as we always observe a complete photon after a polarization measurement and not a superposition of photon states, as P. A. M. Dirac explains so simply and clearly.
According to quantum mechanics the result of this experiment will be that sometimes one will find a whole photon, of energy equal to the energy of the incident photon, on the back side and other times one will find nothing. When one finds a whole photon, it will be polarized perpendicular to the optic axis. One will never find only a part of a photon on the back side. If one repeats the experiment a large number of times, one will find the photon on the back side in a fraction sin²α of the total number of times.
Quantum mechanics similarly gives us only the probability of finding live cats (or dead cats) in a large number of identically prepared experiments (pace the SPCA).
Thus we may say that the photon has a probability sin²α of passing through the tourmaline and appearing on the back side polarized perpendicular to the axis and a probability cos²α of being absorbed. These values for the probabilities lead to the correct classical results for an incident beam containing a large number of photons.
In this way we preserve the individuality of the photon in all cases. We are able to do this, however, only because we abandon the determinacy of the classical theory. The result of an experiment is not determined, as it would be according to classical ideas, by the conditions under the control of the experimenter. The most that can be predicted is a set of possible results, with a probability of occurrence for each...
When we make the photon meet a tourmaline crystal, we are subjecting it to an observation. We are observing whether it is polarized parallel or perpendicular to the optic axis. The effect of making this observation is to force the photon entirely into the state of parallel or entirely into the state of perpendicular polarization. It has to make a sudden jump from being partly in each of these two states to being entirely in one or other of them. Which of the two states it will jump into cannot be predicted, but is governed only by probability laws. If it jumps into the parallel state it gets absorbed and if it jumps into the perpendicular state it passes through the crystal and appears on the other side preserving this state of polarization.
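A small numerical sketch (added here for illustration; the angle α = 30° is an arbitrary choice, not from Dirac's text) of the bookkeeping in Dirac's example: each photon either passes whole or is absorbed, and the fraction sin²α only emerges from the statistics of many photons.

```python
import math
import random

alpha = math.radians(30)            # illustrative angle to the optic axis
p_pass = math.sin(alpha) ** 2       # Dirac: probability the photon passes
n = 100_000

# Each trial yields a whole photon on the back side or nothing at all.
passed = sum(random.random() < p_pass for _ in range(n))

print(f"predicted fraction sin^2(alpha)     = {p_pass:.4f}")
print(f"observed fraction over {n} photons = {passed / n:.4f}")
# sin^2 + cos^2 = 1, so "pass" and "absorbed" exhaust the possibilities.
print(f"sin^2 + cos^2                       = {p_pass + math.cos(alpha)**2:.4f}")
```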
Superposition and Indeterminacy
The non-classical nature of the superposition process is brought out clearly if we consider the superposition of two states, A and B, such that there exists an observation which, when made on the system in state A, is certain to lead to one particular result, a say, and when made on the system in state B is certain to lead to some different result, b say. What will be the result of the observation when made on the system in the superposed state? The answer is that the result will be sometimes a and sometimes b, according to a probability law depending on the relative weights of A and B in the superposition process. It will never be different from both a and b.
There is no justification for assuming an intermediate (and absurd) condition of simultaneous live and dead cats. The thing that is "intermediate" is the probability, not the outcome.
The intermediate character of the state formed by superposition thus expresses itself through the probability of a particular result for an observation being intermediate between the corresponding probabilities for the original states,† not through the result itself being intermediate between the corresponding results for the original states.
In this way we see that such a drastic departure from ordinary ideas as the assumption of superposition relationships between the states is possible only on account of the recognition of the importance of the disturbance accompanying an observation and of the consequent indeterminacy in the result of the observation. When an observation is made on any atomic system that is in a given state, in general the result will not be determinate, i.e., if the experiment is repeated several times under identical conditions several different results may be obtained. It is a law of nature, though, that if the experiment is repeated a large number of times, each particular result will be obtained in a definite fraction of the total number of times, so that there is a definite probability of its being obtained. This probability is what the theory sets out to calculate.
Decoherence and the Lack of Macroscopic Superpositions
Despite the claims of decoherence theorists, microscopic superpositions of quantum states do not allow us to "see" a system in two different states. Quantum mechanics simply predicts a significant probability of the system being found in these different states. Thus it is no surprise that we do not see macroscopic "superpositions of live and dead cats" at the same time. What does exist at any given time is the probabilities of the two states (in the macroscopic world) and the probability amplitude of the two states (which can coherently interfere with one another) in the microscopic world.
Decoherence theorists claim that they explain the "mysterious" non-appearance of macroscopic superpositions of states. But quantum mechanics does not predict such states, despite the popular idea of macroscopic superposition of live and dead cats.
I am reading an introduction to quantum mechanics right now. There is a part about the time evolution operator:
\begin{align*} i\hbar \partial_t \,\psi(\vec r, t) = \hat H(t)\, \psi(\vec r,t) \end{align*} is the time-dependent Schrödinger equation. If we assume that for each $\psi(\vec r, t_0)$ there is a unique solution $\psi(\vec r, t)$, then we can define an operator
$$U(t,t_0): \mathcal H \to \mathcal H,\, \psi(\vec r, t_0) \mapsto \psi(\vec r, t)$$
This operator is linear, since the Schrödinger equation is linear, and it is unitary, since $\partial_t \langle\psi(\vec r, t)| \psi(\vec r, t)\rangle = 0$. I am totally happy with that. I can also accept that $U(t,t_0) = e^{-i(t-t_0)\hat H/ \hbar}$ if $\hat H$ is time independent, where $e^{-i(t-t_0)\hat H/ \hbar}$ is defined by how it acts on the eigenvectors of $\hat H$.
But I have no idea what the next sentence in my book means, and there is no good explanation. It says:
The differential equation, together with the initial condition ($U(t_0,t_0) = Id$) is equivalent to the integral equation: \begin{align*} U(t,t_0) = 1 - \frac{i}{\hbar } \int_{t_0}^t ds\, \hat H(s) U(s,t_0) \end{align*}
So my problem is basically that I don't understand this at all. How can I integrate operators, and what does that even mean? Are there any good examples where this integral makes sense? This is probably a really stupid question, but I would be happy if someone could spare two minutes to help me.
• $\begingroup$ The lhs and the rhs of your last equation are, formally, allowed to act on wavefunctions, by means of $U(t,t_0) \psi(t_0) = \psi(t_0) - \frac{i}{\hbar} \int_{t_0}^t ds \hat{H}(s)U(s,t_0) \psi(t_0)$. Now the rhs is just the usual integral. $\endgroup$ – Creo Oct 15 '18 at 20:28
This is a very good question, but a mathematical one. The expression you quoted from the book is a partial resummation of the Dyson expansion of the unitary evolution operator. To quote from Reed and Simon, Theorem X.69 (Vol. II, p. 282):
Let $t\mapsto H(t)$ be a strongly continuous map of $\mathbb{R}$ into the bounded self-adjoint operators on a Hilbert space $\mathcal{H}$. Then there is a unitary propagator on $\mathcal{H}$ so that, for all $\psi\in\mathcal{H}$, $$\phi_s (t) = U(t,s) \psi $$ satisfies $$ \frac{d}{dt} \phi_s (t) = -i H(t) \phi_s (t) \ ; \ \phi_s (s) = \psi$$
The proof starts by explicitly exhibiting the unitary propagator as
$$ U(t,s)\,\phi = \phi + \sum_{n=1}^{\infty} (-i)^n \int_{s}^{t} \int_{s}^{t_1} \cdots \int_{s}^{t_{n-1}} H(t_1)\cdots H(t_n)\,\phi \ dt_n \cdots dt_1 $$
What the book did is just resum the infinite expression to the right of $ H(t_1) $ into another U (the minus vs. plus sign after the identity comes from the different sign convention for the evolution). Now we no longer have an integral of a product of operators, but of Hilbert-space-valued functions. This is just an iteration of a Bochner-type integral.
• 1
$\begingroup$ Thanks a lot. I didn't know this book existed. It's really cool. I want to make sure that I understand it right... The proof is roughly: 1.) Show that the series is absolutely convergent with respect to the operator norm for each t, thus U(t,s) exists, since operator spaces are complete. 2.) Check all of its properties. The fact that U(t,s) satisfies the differential equation comes from the fact that I am allowed to differentiate a series term by term inside its radius of convergence. $\endgroup$ – N.Beck Oct 16 '18 at 7:32
• $\begingroup$ @N.Beck That is correct. $\endgroup$ – DanielC Oct 16 '18 at 16:20
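To make the integral equation concrete, here is a minimal numerical sketch (added here for illustration, not from the original answer): it iterates the discretised equation U(t,t₀) = 1 − i ∫ H(s) U(s,t₀) ds for an arbitrary illustrative 2×2 time-dependent Hamiltonian, with ħ set to 1, and compares the result with a product of short-time propagators.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative time-dependent 2x2 Hamiltonian (hbar set to 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: sz + 0.5 * np.cos(t) * sx

t0, t1, N = 0.0, 2.0, 2000
ts = np.linspace(t0, t1, N + 1)
dt = ts[1] - ts[0]

# Picard iteration of U(t, t0) = 1 - i * int_{t0}^{t} H(s) U(s, t0) ds,
# discretised with a simple left-endpoint rule.
U = [np.eye(2, dtype=complex) for _ in ts]      # initial guess: U = identity
for _ in range(30):                             # iterate towards the fixed point
    U_new = [np.eye(2, dtype=complex)]
    acc = np.zeros((2, 2), dtype=complex)
    for k in range(N):
        acc = acc + H(ts[k]) @ U[k] * dt
        U_new.append(np.eye(2, dtype=complex) - 1j * acc)
    U = U_new

# Reference: time-ordered product of short-time propagators exp(-i H dt).
U_ref = np.eye(2, dtype=complex)
for k in range(N):
    U_ref = expm(-1j * H(ts[k]) * dt) @ U_ref

print("difference to product formula:", np.max(np.abs(U[-1] - U_ref)))  # small, O(dt)
print("unitarity of the propagator  :", np.max(np.abs(U_ref.conj().T @ U_ref - np.eye(2))))
```

The point of the sketch is only that the right-hand side is an ordinary (vector- or operator-valued) integral once it is discretised, and that iterating it reproduces the same propagator as stepping the Schrödinger equation directly.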
In physics, hidden variable theories are held by some physicists who argue that the state of a physical system, as formulated by quantum mechanics, does not give a complete description for the system; i.e., that quantum mechanics is ultimately incomplete, and that a complete theory would provide descriptive categories to account for all observable behavior and thus avoid any indeterminism. The existence of indeterminacy for some measurements is a characteristic of prevalent interpretations of quantum mechanics; moreover, bounds for indeterminacy can be expressed in a quantitative form by the Heisenberg uncertainty principle.
Albert Einstein, the most famous proponent of hidden variables, objected to the fundamentally probabilistic nature of quantum mechanics,[1] and famously declared "I am convinced God does not play dice".[2] Einstein, Podolsky, and Rosen argued that "elements of reality" (hidden variables) must be added to quantum mechanics to explain entanglement without action at a distance.[3][4] Later, Bell's theorem suggested that local hidden variables of certain types are impossible, or that they evolve non-locally. A famous non-local theory is De Broglie–Bohm theory.
Under the Copenhagen interpretation, quantum mechanics is non-deterministic, meaning that it generally does not predict the outcome of any measurement with certainty. Instead, it indicates what the probabilities of the outcomes are, with the indeterminism of observable quantities constrained by the uncertainty principle. The question arises whether there might be some deeper reality hidden beneath quantum mechanics, to be described by a more fundamental theory that can always predict the outcome of each measurement with certainty: if the exact properties of every subatomic particle were known the entire system could be modeled exactly using deterministic physics similar to classical physics.
In other words, it is conceivable that the standard interpretation of quantum mechanics is an incomplete description of nature. The designation of variables as underlying "hidden" variables depends on the level of physical description (so, for example, "if a gas is described in terms of temperature, pressure, and volume, then the velocities of the individual atoms in the gas would be hidden variables"[5]). Physicists supporting De Broglie–Bohm theory maintain that underlying the observed probabilistic nature of the universe is a deterministic objective foundation/property—the hidden variable. Others, however, believe that there is no deeper deterministic reality in quantum mechanics.[citation needed]
A lack of a kind of realism (understood here as asserting independent existence and evolution of physical quantities, such as position or momentum, without the process of measurement) is crucial in the Copenhagen interpretation. Realistic interpretations (which were already incorporated, to an extent, into the physics of Feynman[6]), on the other hand, assume that particles have certain trajectories. Under such a view, these trajectories will almost always be continuous, which follows both from the finitude of the perceived speed of light ("leaps" should rather be precluded) and, more importantly, from the principle of least action, as deduced in quantum physics by Dirac. But continuous movement, in accordance with the mathematical definition, implies deterministic movement for a range of time arguments;[7] and thus realism is, under modern physics, one more reason for seeking (at least certain limited) determinism and thus a hidden variable theory (especially since such a theory exists: see the De Broglie–Bohm interpretation).
Although determinism was initially a major motivation for physicists looking for hidden variable theories, non-deterministic theories trying to explain what the supposed reality underlying the quantum mechanics formalism looks like are also considered hidden variable theories; for example Edward Nelson's stochastic mechanics.
"God does not play dice"Edit
In June 1926, Max Born published a paper, "Zur Quantenmechanik der Stoßvorgänge" ("Quantum Mechanics of Collision Phenomena") in the scientific journal Zeitschrift für Physik, in which he was the first to clearly enunciate the probabilistic interpretation of the quantum wavefunction, which had been introduced by Erwin Schrödinger earlier in the year. Born concluded the paper as follows:
Here the whole problem of determinism comes up. From the standpoint of our quantum mechanics there is no quantity which in any individual case causally fixes the consequence of the collision; but also experimentally we have so far no reason to believe that there are some inner properties of the atom which conditions a definite outcome for the collision. Ought we to hope later to discover such properties ... and determine them in individual cases? Or ought we to believe that the agreement of theory and experiment—as to the impossibility of prescribing conditions for a causal evolution—is a pre-established harmony founded on the nonexistence of such conditions? I myself am inclined to give up determinism in the world of atoms. But that is a philosophical question for which physical arguments alone are not decisive.
Born's interpretation of the wavefunction was criticized by Schrödinger, who had previously attempted to interpret it in real physical terms, but Albert Einstein's response became one of the earliest and most famous assertions that quantum mechanics is incomplete:
Niels Bohr reportedly replied to Einstein's later expression of this sentiment by advising him to "stop telling God what to do."[10]
Early attempts at hidden variable theories
Shortly after making his famous "God does not play dice" comment, Einstein attempted to formulate a deterministic counterproposal to quantum mechanics, presenting a paper at a meeting of the Academy of Sciences in Berlin, on 5 May 1927, titled "Bestimmt Schrödinger's Wellenmechanik die Bewegung eines Systems vollständig oder nur im Sinne der Statistik?" ("Does Schrödinger's wave mechanics determine the motion of a system completely or only in the statistical sense?").[11] However, as the paper was being prepared for publication in the academy's journal, Einstein decided to withdraw it, possibly because he discovered that, contrary to his intention, it implied non-separability of entangled systems, which he regarded as absurd.[12]
At the Fifth Solvay Congress, held in Belgium in October 1927 and attended by all the major theoretical physicists of the era, Louis de Broglie presented his own version of a deterministic hidden-variable theory, apparently unaware of Einstein's aborted attempt earlier in the year. In his theory, every particle had an associated, hidden "pilot wave" which served to guide its trajectory through space. The theory was subject to criticism at the Congress, particularly by Wolfgang Pauli, which de Broglie did not adequately answer. De Broglie abandoned the theory shortly thereafter.
Declaration of completeness of quantum mechanics, and the Bohr–Einstein debates
Also at the Fifth Solvay Congress, Max Born and Werner Heisenberg made a presentation summarizing the recent tremendous theoretical development of quantum mechanics. At the conclusion of the presentation, they declared:
[W]hile we consider ... a quantum mechanical treatment of the electromagnetic field ... as not yet finished, we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification.... On the question of the 'validity of the law of causality' we have this opinion: as long as one takes into account only experiments that lie in the domain of our currently acquired physical and quantum mechanical experience, the assumption of indeterminism in principle, here taken as fundamental, agrees with experience.[13]
Although there is no record of Einstein responding to Born and Heisenberg during the technical sessions of the Fifth Solvay Congress, he did challenge the completeness of quantum mechanics during informal discussions over meals, presenting a thought experiment intended to demonstrate that quantum mechanics could not be entirely correct. He did likewise during the Sixth Solvay Congress held in 1930. Both times, Niels Bohr is generally considered to have successfully defended quantum mechanics by discovering errors in Einstein's arguments.
EPR paradox
The debates between Bohr and Einstein essentially concluded in 1935, when Einstein finally expressed what is widely considered his best argument against the completeness of quantum mechanics. Einstein, Podolsky, and Rosen had proposed their definition of a "complete" description as one that uniquely determines the values of all its measurable properties. Einstein later summarized their argument as follows:
Consider a mechanical system consisting of two partial systems A and B which interact with each other only during a limited time. Let the ψ function [i.e., wavefunction ] before their interaction be given. Then the Schrödinger equation will furnish the ψ function after the interaction has taken place. Let us now determine the physical state of the partial system A as completely as possible by measurements. Then quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the physical quantities (observables) of A have been measured (for instance, coordinates or momenta). Since there can be only one physical state of B after the interaction which cannot reasonably be considered to depend on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated to the physical state. This coordination of several ψ functions to the same physical state of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical state of a single system.[14]
Bohr answered Einstein's challenge as follows:
[The argument of] Einstein, Podolsky and Rosen contains an ambiguity as regards the meaning of the expression "without in any way disturbing a system." ... [E]ven at this stage [i.e., the measurement of, for example, a particle that is part of an entangled pair], there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behavior of the system. Since these conditions constitute an inherent element of the description of any phenomenon to which the term "physical reality" can be properly attached, we see that the argumentation of the mentioned authors does not justify their conclusion that quantum-mechanical description is essentially incomplete."[15]
Bohr is here choosing to define a "physical reality" as limited to a phenomenon that is immediately observable by an arbitrarily chosen and explicitly specified technique, using his own special definition of the term 'phenomenon'. He wrote in 1948:
As a more appropriate way of expression, one may strongly advocate limitation of the use of the word phenomenon to refer exclusively to observations obtained under specified circumstances, including an account of the whole experiment."[16][17]
This was, of course, in conflict with the definition used by the EPR paper, as follows:
If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.[3]
Bell's theorem
In 1964, John Bell showed through his famous theorem that if local hidden variables exist, certain experiments could be performed involving quantum entanglement where the result would satisfy a Bell inequality. If, on the other hand, statistical correlations resulting from quantum entanglement could not be explained by local hidden variables, the Bell inequality would be violated. Another no-go theorem concerning hidden variable theories is the Kochen–Specker theorem.
Physicists such as Alain Aspect and Paul Kwiat have performed experiments that have found violations of these inequalities up to 242 standard deviations[18] (excellent scientific certainty). This rules out local hidden variable theories, but does not rule out non-local ones. Theoretically, there could be experimental problems that affect the validity of the experimental findings.
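For a concrete sense of the gap between local hidden variables and quantum mechanics, here is a short sketch (an illustration added here; the angles are the standard textbook CHSH choices, not the settings of the cited experiments). It evaluates the CHSH combination using the singlet-state correlation E(a,b) = −cos(a−b), which exceeds the bound of 2 that any local hidden-variable model must satisfy.

```python
import math

# Singlet-state correlation between spin measurements along directions a and b.
E = lambda a, b: -math.cos(a - b)

# Standard CHSH measurement angles (radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"quantum CHSH value |S|       = {abs(S):.4f}")   # 2*sqrt(2) ≈ 2.8284
print("local hidden-variable bound  =  2")
```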
Gerard 't Hooft has disputed the validity of Bell's theorem on the basis of the superdeterminism loophole and proposed some ideas to construct local deterministic models.[19]
Bohm's hidden variable theory
Assuming the validity of Bell's theorem, any deterministic hidden-variable theory that is consistent with quantum mechanics would have to be non-local, maintaining the existence of instantaneous or faster-than-light relations (correlations) between physically separated entities. The currently best-known hidden-variable theory, the "causal" interpretation of the physicist and philosopher David Bohm, originally published in 1952, is a non-local hidden variable theory. Bohm unknowingly rediscovered (and extended) the idea that Louis de Broglie had proposed in 1927 (and abandoned) – hence this theory is commonly called "de Broglie-Bohm theory". Bohm posited both the quantum particle, e.g. an electron, and a hidden 'guiding wave' that governs its motion. Thus, in this theory electrons are quite clearly particles—when a double-slit experiment is performed, its trajectory goes through one slit rather than the other. Also, the slit passed through is not random but is governed by the (hidden) guiding wave, resulting in the wave pattern that is observed.
Such a view does not contradict the idea of local events that is used in both classical atomism and relativity theory as Bohm's theory (and quantum mechanics) are still locally causal (that is, information travel is still restricted to the speed of light) but allow nonlocal correlations. It points to a view of a more holistic, mutually interpenetrating and interacting world. Indeed, Bohm himself stressed the holistic aspect of quantum theory in his later years, when he became interested in the ideas of Jiddu Krishnamurti.
In Bohm's interpretation, the (nonlocal) quantum potential constitutes an implicate (hidden) order which organizes a particle, and which may itself be the result of yet a further implicate order: a superimplicate order which organizes a field.[20] Nowadays Bohm's theory is considered to be one of many interpretations of quantum mechanics which give a realist interpretation, and not merely a positivistic one, to quantum-mechanical calculations. Some consider it the simplest theory to explain quantum phenomena.[21] Nevertheless, it is a hidden variable theory, and necessarily so.[22] The major reference for Bohm's theory today is his book with Basil Hiley, published posthumously.[23]
A possible weakness of Bohm's theory is that some (including Einstein, Pauli, and Heisenberg) feel that it looks contrived.[24] (Indeed, Bohm thought this of his original formulation of the theory.[25]) It was deliberately designed to give predictions that are in all details identical to conventional quantum mechanics.[25] Bohm's original aim was not to make a serious counterproposal but simply to demonstrate that hidden-variable theories are indeed possible.[25] (It thus provided a supposed counterexample to the famous proof by John von Neumann that was generally believed to demonstrate that no deterministic theory reproducing the statistical predictions of quantum mechanics is possible.) Bohm said he considered his theory to be unacceptable as a physical theory due to the guiding wave's existence in an abstract multi-dimensional configuration space, rather than three-dimensional space.[25] His hope was that the theory would lead to new insights and experiments that would lead ultimately to an acceptable one;[25] his aim was not to set out a deterministic, mechanical viewpoint, but rather to show that it was possible to attribute properties to an underlying reality, in contrast to the conventional approach to quantum mechanics.[26]
Recent developments
In August 2011, Roger Colbeck and Renato Renner published a proof that any extension of quantum mechanical theory, whether using hidden variables or otherwise, cannot provide a more accurate prediction of outcomes, assuming that observers can freely choose the measurement settings.[27] Colbeck and Renner write: "In the present work, we have ... excluded the possibility that any extension of quantum theory (not necessarily in the form of local hidden variables) can help predict the outcomes of any measurement on any quantum state. In this sense, we show the following: under the assumption that measurement settings can be chosen freely, quantum theory really is complete".
In January 2013, GianCarlo Ghirardi and Raffaele Romano described a model which, "under a different free choice assumption [...] violates [the statement by Colbeck and Renner] for almost all states of a bipartite two-level system, in a possibly experimentally testable way".[28]
Classes of Hidden Variables
The general class, Λ, of hidden variables is composed of two subclasses ΛR (recurrent) and ΛN (non-recurrent) such that ΛR∪ΛN=Λ and ΛR∩ΛN={}. The class ΛN is very large and contains random variables whose domain is the continuum, the reals.[29] There are uncountably many reals, and every instance of a real-valued random variable is unique; the probability that two instances are equal is exactly zero.[30] ΛN therefore induces sample independence. All correlations are context dependent, but not in the usual sense; there is no "spooky action at a distance". That fact can be captured by using random variables which, for ΛN, are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard inequalities used to define quantum entanglement. Attempting to derive Bell's inequality when the hidden variables belong to ΛN blocks the step that assumes the value A appearing in the product AB equals the value A appearing in the product AC: those experiments occur at different times and hence share no continuous hidden-variable instances. Since AB and AC are independent, the expected value of their product is the product of their expected values, <(AB)(AC)>=<AB><AC>, which leads to a different inequality from Bell's. The non-recurrent inequality places no constraints on the correlations, so they can violate Bell's inequality even though they arise from a local model.
Examples of ΛN distributions are:
• Normal (Gaussian)
• any piecewise-continuous density
References
1. ^ The Born-Einstein letters: correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born. Macmillan. 1971. p. 158. , (Private letter from Einstein to Max Born, 3 March 1947: "I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognize clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance.... I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently".)
2. ^ private letter to Max Born, 4 December 1926, Albert Einstein Archives reel 8, item 180
3. ^ a b Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?". Physical Review. 47 (10): 777–780. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777.
4. ^ "The debate whether Quantum Mechanics is a complete theory and probabilities have a non-epistemic character (i.e. nature is intrinsically probabilistic) or whether it is a statistical approximation of a deterministic theory and probabilities are due to our ignorance of some parameters (i.e. they are epistemic) dates to the beginning of the theory itself". See: arXiv:quant-ph/0701071v1 12 Jan 2007
5. ^ Senechal M, Cronin J (2001). "Social influences on quantum mechanics?-I". The Mathematical Intelligencer. 23 (4): 15–17. doi:10.1007/BF03024596.
6. ^ Individual diagrams are often split into several parts, which may occur beyond observation; only the diagram as a whole describes an observed event.
7. ^ For every subset of points within a range, a value for every argument from the subset will be determined by the points in the neighbourhood. Thus, as a whole, the evolution in time can be described (for a specific time interval) as a function, e.g. a linear one or an arc. See Continuous function#Definition in terms of limits of functions
8. ^ The Born–Einstein letters: correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born. Macmillan. 1971. p. 91.
9. ^ Cache of the Einstein section of the American Museum of Natural History[permanent dead link]
11. ^ Albert Einstein Archives reel 2, item 100
13. ^ Max Born and Werner Heisenberg, "Quantum mechanics", proceedings of the Fifth Solvay Congress.
14. ^ Einstein A (1936). "Physics and Reality". Journal of the Franklin Institute. 221.
15. ^ Bohr N (1935). "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?". Physical Review. 48 (8): 700. Bibcode:1935PhRv...48..696B. doi:10.1103/physrev.48.696.
16. ^ Bohr N. (1948). "On the notions of causality and complementarity". Dialectica. 2 (3–4): 312–319 [317]. doi:10.1111/j.1746-8361.1948.tb00703.x.
17. ^ Rosenfeld, L. (). 'Niels Bohr's contribution to epistemology', pp. 522–535 in Selected Papers of Léon Rosenfeld, Cohen, R.S., Stachel, J.J. (editors), D. Riedel, Dordrecht, ISBN 978-90-277-0652-2, p. 531: "Moreover, the complete definition of the phenomenon must essentially contain the indication of some permanent mark left upon a recording device which is part of the apparatus; only by thus envisaging the phenomenon as a closed event, terminated by a permanent record, can we do justice to the typical wholeness of the quantal processes."
18. ^ Kwiat P. G.; et al. (1999). "Ultrabright source of polarization-entangled photons". Physical Review A. 60 (2): R773–R776. arXiv:quant-ph/9810003 . Bibcode:1999PhRvA..60..773K. doi:10.1103/physreva.60.r773.
19. ^ G 't Hooft, The Free-Will Postulate in Quantum Mechanics [1]; Entangled quantum states in a local deterministic theory [2]
20. ^ David Pratt: "David Bohm and the Implicate Order". Appeared in Sunrise magazine, February/March 1993, Theosophical University Press
21. ^ Michael K.-H. Kiessling: "Misleading Signposts Along the de Broglie–Bohm Road to Quantum Mechanics", Foundations of Physics, volume 40, number 4, 2010, pp. 418–429 (abstract)
22. ^ "While the testable predictions of Bohmian mechanics are isomorphic to standard Copenhagen quantum mechanics, its underlying hidden variables have to be, in principle, unobservable. If one could observe them, one would be able to take advantage of that and signal faster than light, which – according to the special theory of relativity – leads to physical temporal paradoxes." J. Kofler and A. Zeiliinger, "Quantum Information and Randomness", European Review (2010), Vol. 18, No. 4, 469–480.
23. ^ D. Bohm and B. J. Hiley, The Undivided Universe, Routledge, 1993, ISBN 0-415-06588-7.
24. ^ Wayne C. Myrvold (2003). "On some early objections to Bohm's theory" (PDF). International Studies in the Philosophy of Science. 17 (1): 8–24. doi:10.1080/02698590305233. Archived from the original on 2014-07-02.
25. ^ a b c d e David Bohm (1957). Causality and Chance in Modern Physics. Routledge & Kegan Paul and D. Van Nostrand. p. 110. ISBN 0-8122-1002-6.
27. ^ Roger Colbeck; Renato Renner (2011). "No extension of quantum theory can have improved predictive power". Nature Communications. 2 (8): 411. arXiv:1005.5173 . Bibcode:2011NatCo...2E.411C. doi:10.1038/ncomms1416.
28. ^ Giancarlo Ghirardi; Raffaele Romano (2013). "Onthological models predictively inequivalent to quantum theory". Physical Review Letters. 110 (17): 170404. arXiv:1301.2695 . Bibcode:2013PhRvL.110q0404G. doi:10.1103/PhysRevLett.110.170404. PMID 23679689.
29. ^ W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, p. 22 (1964).
30. ^ W. Feller, An Introduction to Probability Theory and Its Applications, Vol. II, p. 4 (1971).
• Albert Einstein, Boris Podolsky, and Nathan Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review 47, 777–780 (1935).
• John Stewart Bell, "On the Einstein–Podolsky–Rosen paradox", Physics 1, (1964) 195–200. Reprinted in Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 2004.
• Wolfgang Pauli, letter to M. Fierz dated 10 August 1954, reprinted and translated in K. V. Laurikainen, Beyond the Atom: The Philosophical Thought of Wolfgang Pauli, Springer-Verlag, Berlin, 1988, p. 226.
• Werner Heisenberg, Physics and Beyond: Encounters and Conversations, translated by A. J. Pomerans, Harper & Row, New York, 1971, pp. 63–64.
• Claude Cohen-Tannoudji, Bernard Diu and Franck Laloë, Mecanique quantique (see also Quantum Mechanics translated from the French by Susan Hemley, Nicole Ostrowsky, and Dan Ostrowsky; John Wiley & Sons 1982) Hermann, Paris, France. 1977.
• P. S. Hanle, Indeterminacy before Heisenberg: The Case of Franz Exner and Erwin Schrödinger, Historical Studies in the Physical Sciences 10, 225 (1979).
• Asher Peres and Wojciech Zurek, "Is quantum theory universally valid?" American Journal of Physics 50, 807 (1982).
• Wojciech Zurek "Environment-induced superselection rules" Physical Review D 26 1862. 1982.
• Max Jammer, "The EPR Problem in Its Historical Development", in Symposium on the Foundations of Modern Physics: 50 years of the Einstein–Podolsky–Rosen Gedankenexperiment, edited by P. Lahti and P. Mittelstaedt (World Scientific, Singapore, 1985), pp. 129–149.
• Arthur Fine, The Shaky Game: Einstein Realism and the Quantum Theory, University of Chicago Press, Chicago, 1986.
• Thomas Kuhn. Black-Body Theory and the Quantum Discontinuity, 1894–1912 Chicago University Press. 1987.
• Carlton M. Caves and Christopher A. Fuchs, "Quantum Information: How Much Information in a State Vector?", in The Dilemma of Einstein, Podolsky and Rosen – 60 Years Later, edited by A. Mann and M. Revzen, Ann. Israel Physical Society 12, 226–257 (1996).
• Carlo Rovelli. "Relational quantum mechanics" International Journal of Theoretical Physics 35 1637–1678. 1996.
• Roland Omnès, Understanding Quantum Mechanics, Princeton University Press, 1999.
• Roman Jackiw and Daniel Kleppner, "One Hundred Years of Quantum Physics", Science, Vol. 289 Issue 5481, p. 893, August 2000.
• Orly Alter and Yoshihisa Yamamoto (2001). Quantum Measurement of a Single System (PDF). Wiley-Interscience. 136 pp. doi:10.1002/9783527617128. ISBN 9780471283089. Slides. Archived from the original (PDF) on 2014-02-03.
• Erich Joos, et al., Decoherence and the Appearance of a Classical World in Quantum Theory, 2nd ed., Berlin, Springer, 2003.
• Wojciech Zurek (2003). "Decoherence and the transition from quantum to classical — Revisited", arXiv:quant-ph/0306072 (An updated version of Physics Today, 44:36–44 (1991) article)
• Wojciech Zurek, "Decoherence, einselection, and the quantum origins of the classical" in Reviews of Modern Physics, vol.75, (715).
• Asher Peres and Daniel Terno, "Quantum Information and Relativity Theory", Reviews of Modern Physics 76 (2004) 93.
• Roger Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe, Alfred Knopf 2004.
• Maximilian Schlosshauer, "Decoherence, the Measurement Problem, and Interpretations of Quantum Mechanics", in Reviews of Modern Physics, vol.76, pages 1267–1305, 2005.
• Federico Laudisa and Carlo Rovelli. "Relational Quantum Mechanics" The Stanford Encyclopedia of Philosophy (Fall 2005 Edition).
• Marco Genovese, "Research on hidden variable theories: a review of recent progresses", in Physics Reports, vol.413, 2005.
External links |
d8753701ca34f878 | The 17 Equations That Changed The World
We asked Professor Stewart why he decided to do this book:
"Equations definitely CAN be dull, and they CAN seem complicated, but that's because they are often presented in a dull and complicated way. I have an advantage over school math teachers: I'm not trying to show you how to do the sums yourself. You can appreciate the beauty and importance of equations without knowing how to solve them..... The intention is to locate them in their cultural and human context, and pull back the veil on their hidden effects on history. Equations are a vital part of our culture. The stories behind them --- the people who discovered/invented them and the periods in which they lived --- are fascinating."
From an email exchange with Professor Stewart:
"It's actually a fairly simple equation, mathematically speaking. What caused trouble was the complexity of the system the mathematics was intended to model.... You don't really need to be a rocket scientist to understand that lending hundreds of billions of dollars to people who have no prospect of ever paying it back is not a great idea...."
You can buy the full book here.
The Pythagorean Theorem
Modern use: Triangulation is used to this day to pinpoint relative location for GPS navigation.
Source: In Pursuit of the Unknown: 17 Equations That Changed the World
The logarithm and its identities
Importance: Logarithms were revolutionary, making calculation faster and more accurate for engineers and astronomers. That's less important with the advent of computers, but they're still essential to scientists.
Modern use: Logarithms still inform our understanding of radioactive decay.
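As a small illustration of that last point (my own sketch, not from the book; the half-life figure is just the familiar carbon-14 value), radioactive decay follows an exponential law whose exponent is naturally written with logarithms:

import math

def remaining_fraction(elapsed_years, half_life_years):
    # N/N0 = (1/2)^(t / t_half) = exp(-t * ln(2) / t_half)
    return math.exp(-elapsed_years * math.log(2) / half_life_years)

# Assuming the well-known carbon-14 half-life of about 5730 years:
print(remaining_fraction(10000, 5730))   # roughly 0.30 of the original sample remains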
The fundamental theorem of calculus
History: Calculus as we currently know it was described around the same time in the late 17th century by Isaac Newton and Gottfried Leibniz. There was a lengthy debate over plagiarism and priority which may never be resolved. We use the leaps of logic and parts of the notation of both men today.
Importance: According to Stewart, "More than any other mathematical technique, it has created the modern world." Calculus is essential in our understanding of how to measure solids, curves, and areas. It is the foundation of many natural laws, and the source of differential equations.
Modern use: Any mathematical problem where an optimal solution is required. Essential to medicine, economics, and computer science.
Newton's universal law of gravitation
History: Isaac Newton derived his laws with help from earlier work by Johannes Kepler. He also used, and possibly plagiarized, the work of Robert Hooke.
Importance: Used techniques of calculus to describe how the world works. Even though it was later supplanted by Einstein's theory of relativity, it is still essential for practical description of how objects interact with each other. We use it to this day to design orbits for satellites and probes.
Value: When we launch space missions, the equation is used to find optimal gravitational "tubes" or pathways so they can be as energy efficient as possible. Also makes satellite TV possible.
The origin of complex numbers
History: Imaginary numbers were originally posited by famed gambler/mathematician Girolamo Cardano, then expanded by Rafael Bombelli and John Wallis. They remained a peculiar but essential problem in math until William Hamilton described this definition.
Importance: According to Stewart".... most modern technology, from electric lighting to digital cameras could not have been invented without them." Imaginary numbers allow for complex analysis, which allows engineers to solve practical problems working in the plane.
Modern use: Used broadly in electrical engineering and complex mathematical theory.
Euler's formula for polyhedra
What does it mean?: Describes a space's shape or structure regardless of alignment.
Importance: Fundamental to the development of topology, which extends geometry to any continuous surface. An essential tool for engineers and biologists.
Modern use: Topology is used to understand the behavior and function of DNA.
The normal distribution
History: The initial work was by Blaise Pascal, but the distribution came into its own with Bernoulli. The bell curve as we currently know it comes from Belgian mathematician Adolphe Quetelet.
Modern use: Used to determine whether drugs are sufficiently effective relative to negative side effects in clinical trials.
The wave equation
History: The mathematicians Daniel Bernoulli and Jean d'Alembert were the first to describe this relationship in the 18th century, albeit in slightly different ways.
Modern use: Oil companies set off explosives, then read data from the ensuing sound waves to predict geological formations.
The Fourier transform
Modern use: Used to compress information for the JPEG image format and discover the structure of molecules.
The Navier-Stokes equations
Importance: Once computers became powerful enough to solve this equation, it opened up a complex and very useful field of physics. It is particularly useful in making vehicles more aerodynamic.
Modern use: Among other things, allowed for the development of modern passenger jets.
Maxwell's equations
Modern use: Radar, television, and modern communications.
Second law of thermodynamics
Einstein's theory of relativity
History: The less known (among non-physicists) genesis of Einstein's equation was an experiment by Albert Michelson and Edward Morley that proved light did not move in a Newtonian manner in comparison to changing frames of reference. Einstein followed up on this insight with his famous papers on special relativity (1905) and general relativity (1915).
Modern use: Helped lead to nuclear weapons, and if GPS didn't account for it, your directions would be off by thousands of yards.
The Schrödinger equation
Shannon's information theory
Importance: According to Stewart, "It is the equation that ushered in the information age." By stopping engineers from seeking codes that were too efficient, it established the boundaries that made everything from CDs to digital communication possible.
The logistic model for population growth
Modern use: Used to model earthquakes and forecast the weather.
The Black–Scholes model
|
8a84299ecd432179 | Correspondence principle
From Wikipedia, the free encyclopedia
The term codifies the idea that a new theory should reproduce under some conditions the results of older well-established theories in those domains where the old theories work. This concept is somewhat different from the requirement of a formal limit under which the new theory reduces to the older, thanks to the existence of a deformation parameter.
Quantum mechanics
The rules of quantum mechanics are highly successful in describing microscopic objects, atoms and elementary particles. But macroscopic systems,[4] like springs and capacitors, are accurately described by classical theories like classical mechanics and classical electrodynamics. If quantum mechanics were to be applicable to macroscopic objects, there must be some limit in which quantum mechanics reduces to classical mechanics. Bohr's correspondence principle demands that classical physics and quantum physics give the same answer when the systems become large.[5] A. Sommerfeld (1924) referred to the principle as "Bohrs Zauberstab" (Bohr's magic wand).
The conditions under which quantum and classical physics agree are referred to as the correspondence limit, or the classical limit. Bohr provided a rough prescription for the correspondence limit: it occurs when the quantum numbers describing the system are large. A more elaborated analysis of quantum-classical correspondence (QCC) in wavepacket spreading leads to the distinction between robust "restricted QCC" and fragile "detailed QCC".[6] "Restricted QCC" refers to the first two moments of the probability distribution and is true even when the wave packets diffract, while "detailed QCC" requires smooth potentials which vary over scales much larger than the wavelength, which is what Bohr considered.
The post-1925 new quantum theory came in two different formulations. In matrix mechanics, the correspondence principle was built in and was used to construct the theory. In the Schrödinger approach classical behavior is not clear because the waves spread out as they move. Once the Schrödinger equation was given a probabilistic interpretation, Ehrenfest showed that Newton's laws hold on average: the quantum statistical expectation value of the position and momentum obey Newton's laws.
The correspondence principle is one of the tools available to physicists for selecting quantum theories corresponding to reality. The principles of quantum mechanics are broad: states of a physical system form a complex vector space and physical observables are identified with Hermitian operators that act on this Hilbert space. The correspondence principle limits the choices to those that reproduce classical mechanics in the correspondence limit.
Because quantum mechanics only reproduces classical mechanics in a statistical interpretation, and because the statistical interpretation only gives the probabilities of different classical outcomes, Bohr argued that quantum physics does not reduce to classical mechanics in the way that classical mechanics emerges as an approximation of special relativity at small velocities. He argued that classical physics exists independently of quantum theory and cannot be derived from it. His position is that it is inappropriate to understand the experiences of observers using purely quantum mechanical notions such as wavefunctions because the different states of experience of an observer are defined classically, and do not have a quantum mechanical analog. The relative state interpretation of quantum mechanics is an attempt to understand the experience of observers using only quantum mechanical notions. Niels Bohr was an early opponent of such interpretations.
Many of these conceptual problems, however, resolve in the phase-space formulation of quantum mechanics, where the same variables with the same interpretation are utilized to describe both quantum and classical mechanics.
Other scientific theories
The term "correspondence principle" is used in a more general sense to mean the reduction of a new scientific theory to an earlier scientific theory in appropriate circumstances. This requires that the new theory explain all the phenomena under circumstances for which the preceding theory was known to be valid, the "correspondence limit".
For example,
• Einstein's special relativity satisfies the correspondence principle, because it reduces to classical mechanics in the limit of velocities small compared to the speed of light (example below);
• General relativity reduces to Newtonian gravity in the limit of weak gravitational fields;
• Laplace's theory of celestial mechanics reduces to Kepler's when interplanetary interactions are ignored, and Kepler's reproduces Ptolemy's equant in a coordinate system where the Earth is stationary;
• Statistical mechanics reproduces thermodynamics when the number of particles is large;
• In biology, chromosome inheritance theory reproduces Mendel's laws of inheritance, in the domain that the inherited factors are protein coding genes.
In order for there to be a correspondence, the earlier theory has to have a domain of validity—it must work under some conditions. Not all theories have a domain of validity. For example, there is no limit where Newton's mechanics reduces to Aristotle's mechanics because Aristotle's mechanics, although academically dominant for 18 centuries, does not have any domain of validity.
Bohr model
The angular momentum L of the circular orbit scales as $\sqrt{r}$. The energy in terms of the angular momentum is then $E = -\frac{me^4}{2L^2}$ (in Gaussian units).
This is how Bohr arrived at his model. Since only the level spacing is determined heuristically by the correspondence principle, one could always add a small fixed offset to the quantum number— L could just as well have been (n+.338) ħ.
Bohr used his physical intuition to decide which quantities were best to quantize. It is a testimony to his skill that he was able to get so much from what is only the leading order approximation. A less heuristic treatment accounts for needed offsets in the ground state $L^2$, cf. Wigner–Weyl transform.
One-dimensional potential
Bohr's correspondence condition can be solved for the level energies in a general one-dimensional potential. Define a quantity J(E) which is a function only of the energy, and has the property that
This is the analog of the angular momentum in the case of the circular orbits. The orbits selected by the correspondence principle are the ones that obey J = nh for n integer, since
This quantity J is canonically conjugate to a variable θ which, by the Hamilton equations of motion, changes with time as the gradient of the energy with respect to J. Since this is equal to the inverse period at all times, the variable θ increases steadily from 0 to 1 over one period.
The angle variable comes back to itself after 1 unit of increase, so the geometry of phase space in J,θ coordinates is that of a half-cylinder, capped off at J = 0, which is the motionless orbit at the lowest value of the energy. These coordinates are just as canonical as x,p, but the orbits are now lines of constant J instead of nested ovoids in x-p space.
The area enclosed by an orbit is invariant under canonical transformations, so it is the same in x-p space as in J-θ. But in the J-θ coordinates, this area is the area of a cylinder of unit circumference between 0 and J, or just J. So J is equal to the area enclosed by the orbit in x-p coordinates too,
The quantization rule is that the action variable J is an integer multiple of h.
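As a quick worked example (added here for concreteness, not part of the original article), consider a one-dimensional harmonic oscillator of mass $m$, angular frequency $\omega$ and frequency $\nu = \omega/2\pi$. Its orbit of energy $E$ in the x-p plane is the ellipse $\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2=E$, with semi-axes $p_{\max}=\sqrt{2mE}$ and $x_{\max}=\sqrt{2E/m\omega^2}$, so the enclosed area is
$J=\pi\,p_{\max}x_{\max}=\frac{2\pi E}{\omega}=\frac{E}{\nu}.$
The rule $J=nh$ then gives $E_n=nh\nu$, the old-quantum-theory spectrum of the oscillator (missing only the zero-point term $\tfrac{1}{2}h\nu$ supplied by the full quantum treatment).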
Multiperiodic motion: Bohr–Sommerfeld quantization
Bohr's correspondence principle provided a way to find the semiclassical quantization rule for a one degree of freedom system. It was an argument for the old quantum condition mostly independent from the one developed by Wien and Einstein, which focused on adiabatic invariance. But both pointed to the same quantity, the action.
Bohr was reluctant to generalize the rule to systems with many degrees of freedom. This step was taken by Sommerfeld, who proposed the general quantization rule for an integrable system, $\oint p_k\,dq_k = n_k h$ for each coordinate $q_k$.
Each action variable is a separate integer, a separate quantum number.
This condition reproduces the circular orbit condition for two dimensional motion: let r,θ be polar coordinates for a central potential. Then θ is already an angle variable, and the canonical momentum conjugate is L, the angular momentum. So the quantum condition for L reproduces Bohr's rule: $L = n\hbar$.
This allowed Sommerfeld to generalize Bohr's theory of circular orbits to elliptical orbits, showing that the energy levels are the same. He also found some general properties of quantum angular momentum which seemed paradoxical at the time. One of these results was that the z-component of the angular momentum, the classical inclination of an orbit relative to the z-axis, could only take on discrete values, a result which seemed to contradict rotational invariance. This was called space quantization for a while, but this term fell out of favor with the new quantum mechanics since no quantization of space is involved.
In modern quantum mechanics, the principle of superposition makes it clear that rotational invariance is not lost. It is possible to rotate objects with discrete orientations to produce superpositions of other discrete orientations, and this resolves the intuitive paradoxes of the Sommerfeld model.
The quantum harmonic oscillator
Here is a demonstration[7] of how large quantum numbers can give rise to classical (continuous) behavior.
Consider the one-dimensional quantum harmonic oscillator. Quantum mechanics tells us that the total (kinetic and potential) energy of the oscillator, E, has a set of discrete values, $E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega$ with $n = 0, 1, 2, \ldots$,
where ω is the angular frequency of the oscillator.
However, in a classical harmonic oscillator such as a lead ball attached to the end of a spring, we do not perceive any discreteness. Instead, the energy of such a macroscopic system appears to vary over a continuum of values. We can verify that our idea of macroscopic systems falls within the correspondence limit. The energy of the classical harmonic oscillator with amplitude A is $E = \tfrac{1}{2}m\omega^2 A^2$.
Thus, the quantum number has the value $n = \frac{E}{\hbar\omega} - \frac{1}{2} = \frac{m\omega A^2}{2\hbar} - \frac{1}{2}$.
If we apply typical "human-scale" values m = 1 kg, ω = 1 rad/s, and A = 1 m, then $n \approx 4.74\times 10^{33}$. This is a very large number, so the system is indeed in the correspondence limit.
It is simple to see why we perceive a continuum of energy in this limit. With ω = 1 rad/s, the difference between each energy level is $\hbar\omega \approx 1.05\times 10^{-34}\ \mathrm{J}$, well below what we normally resolve for macroscopic systems. One then describes this system through an emergent classical limit.
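A minimal numerical check of these numbers (my own sketch; it assumes only the constants quoted above and the $(n+\tfrac{1}{2})\hbar\omega$ spectrum):

import math

hbar = 1.054571817e-34        # reduced Planck constant, J*s
m, omega, A = 1.0, 1.0, 1.0   # "human-scale" values: 1 kg, 1 rad/s, 1 m

E_classical = 0.5 * m * omega**2 * A**2   # classical oscillator energy, 0.5 J
n = E_classical / (hbar * omega) - 0.5    # quantum number from E = (n + 1/2) * hbar * omega
spacing = hbar * omega                    # gap between adjacent energy levels

print(f"n ~ {n:.2e}")                     # about 4.74e+33
print(f"level spacing ~ {spacing:.2e} J") # about 1.05e-34 J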
Relativistic kinetic energy
Here we show that the expression of kinetic energy from special relativity becomes arbitrarily close to the classical expression, for speeds that are much slower than the speed of light, v ≪ c.
Einstein's mass-energy equation is $E = \frac{m_0 c^2}{\sqrt{1 - v^2/c^2}},$
where v is the velocity of the body relative to the observer, $m_0$ is the rest mass (the observed mass of the body at zero velocity relative to the observer), and c is the speed of light.
When the velocity v vanishes, the energy expressed above is not zero, and represents the rest energy, $E_0 = m_0 c^2$.
When the body is in motion relative to the observer, the total energy exceeds the rest energy by an amount that is, by definition, the kinetic energy, $T = E - E_0 = \frac{m_0 c^2}{\sqrt{1 - v^2/c^2}} - m_0 c^2.$
Using the approximation $\frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{1}{2}\frac{v^2}{c^2}$ (valid when $v \ll c$),
we get, when speeds are much slower than that of light, or v ≪ c, $T \approx \tfrac{1}{2} m_0 v^2,$
which is the Newtonian expression for kinetic energy.
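A short numerical sanity check of this limit (a sketch using standard constants; the speed of 1% of c is an arbitrary example chosen so that floating-point cancellation does not distort the comparison):

import math

c = 299_792_458.0          # speed of light, m/s
m0 = 1.0                   # rest mass, kg
v = 0.01 * c               # 1% of light speed, still "slow" in relativistic terms

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
T_rel = (gamma - 1.0) * m0 * c**2     # relativistic kinetic energy
T_newton = 0.5 * m0 * v**2            # Newtonian kinetic energy

print(T_rel, T_newton)
print((T_rel - T_newton) / T_newton)  # fractional difference ~ 7.5e-5, i.e. below 0.01%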
2. ^ Bohr, N. (1920), "Über die Serienspektra der Element", Zeitschrift für Physik, 2 (5): 423–478, Bibcode:1920ZPhy....2..423B, doi:10.1007/BF01329978 (English translation in (Bohr 1976, pp. 241–282))
3. ^ Jammer, Max (1989), The conceptual development of quantum mechanics, Los Angeles, CA: Tomash Publishers, American Institute of Physics, ISBN 0-88318-617-9 , Section 3.2
5. ^ Bohr, Niels (1976), Rosenfeld, L.; Nielsen, J. Rud, eds., Niels Bohr, Collected Works, Volume 3, The Correspondence Principle (1918–1923), 3, Amsterdam: North-Holland, ISBN 0-444-10784-3
6. ^ Stotland, A.; Cohen, D. (2006), "Diffractive energy spreading and its semiclassical limit", Journal of Physics A, 39 (10703), arXiv:cond-mat/0605591, Bibcode:2006JPhA...3910703S, doi:10.1088/0305-4470/39/34/008, ISSN 0305-4470
7. ^ Sells, Robert L.; Weidner, Richard T. (1980), Elementary modern physics, Boston: Allyn and Bacon, ISBN 978-0-205-06559-2 |
de0354c60a0b3da2 | fredag 8 november 2013
Mathematics: Backward Magics or Forward Reason
A primitive function magically pulled out of a hat as an area under a function graph.
There are two approaches to mathematics:
1. Symbolic mathematics: magics: objects pulled out of hats.
2. Constructive mathematics: reason: objects constructed in stepwise computation.
Let me give two examples:
The Fundamental Theorem of Calculus
The presentation of the Fundamental Theorem of Calculus in standard text books of Calculus, is the following: Consider the integral
• $u(t) =\int_0^t f(s)\, ds$ for $t > 0$,
defined as the area under the curve determined by the function $s\rightarrow f(s)$ for $s\in [0,t]$.
Compute the derivative $\dot u=\frac{du}{dt}$ of the function $t\rightarrow u(t)$ with respect to $t$, to find that, assuming some suitable continuity property of $s\rightarrow f(s)$:
• $\dot u (t) = \lim_{\Delta t\rightarrow 0}\frac{u(t+\Delta t) - u(t)}{\Delta t}= \lim_{\Delta t\rightarrow 0}\frac{1}{\Delta t}\int_t^{t+\Delta t} f(s)\, ds = f(t)$ for $t >0$.
In short, the key argument is to show that the integral $u(t)$, defined as an area, satisfies a differential equation
• $\dot u(t) = f(t)$ for $t > 0$
or solves an initial value problem
• $\dot u(t) = f(t)$ for $t > 0$ with $u(0)=0$. (*)
We thus start with a given function, the integral $u(t)$, which is shown to be the solution of a certain initial value problem. The process leads from solution to equation satisfied by the solution. The equation appears as magics without reason, since the reason is put into the specification of the solution or integral $u(t)$, with appeal to a concept of area which has to be defined, and not into the equation.
But this is backwards: The more reasonable forward procedure is to start with the initial value problem (*), expressing that the rate of change $\dot u$ of $u$ is equal to $f$ as a balance equation expressing some basic physics, and then proceed to the integral $u(t)$ as the solution to the balance equation constructed by time stepping. This is the approach followed in BodyandSoul. We sum up as follows
• To proceed from solution to equation is backwards magical.
• To proceed from equation to solution by forward time-stepping is reasonable and not magical.
There are many specific examples of this form including trigonometric and exponential functions and more generally elementary functions all better constructed by time stepping basic differential equations than magically being picked out of hats.
For example, the trigonometric functions $\sin(t)$ and $\cos(t)$ are better defined as solutions to $\ddot u + u =0$, which can be constructed by time stepping, rather than geometrically as in standard calculus as ratios of the lengths of sides of a right-angled triangle, which is not computationally constructive.
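To make the suggestion concrete, here is a minimal sketch (my own, not taken from BodyandSoul) that constructs $\cos(t)$ and $\sin(t)$ by forward time stepping of $\ddot u + u = 0$, written as the first-order system $\dot u = v$, $\dot v = -u$ with $u(0)=1$, $v(0)=0$:

import math

def cos_sin_by_time_stepping(t, dt=1e-5):
    # u'' + u = 0 rewritten as u' = v, v' = -u.
    # With u(0) = 1, v(0) = 0 the exact solution is u = cos(t), v = -sin(t).
    u, v = 1.0, 0.0
    for _ in range(int(round(t / dt))):
        u, v = u + dt * v, v - dt * u   # one forward Euler step
    return u, -v                        # approximations of cos(t), sin(t)

print(cos_sin_by_time_stepping(1.0))    # close to (0.5403..., 0.8415...)
print(math.cos(1.0), math.sin(1.0))     # library values for comparison

A smaller dt (or a higher-order stepping scheme) gives a more accurate construction; the point is only that the functions are obtained by stepping the equation forward rather than being pulled out of a hat.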
Quantum Mechanics
The same situation is met in quantum mechanics:
The backward magical process is to start from a wave function solution and discover an equation satisfied by the solution, a magical Schrödinger equation without physical basis which is a mystery to all physicists.
The more natural procedure is to start from the Schrödinger equation, which can be formulated as a rational balance equation of smoothed particle dynamics, and then construct the solution (the wave function) by forward time stepping.
Concluding Remark: In the discussion of the mathematics program at Chalmers, the standard text book by Adams represents backwards magics, while BodyandSoul represents forward reason. Pick what you think is best. But after all, who cares?
|
7a908ae89e90ee89 | Momentum in Quantum Mechanics
For a particle in state $\Psi$, the expectation value of x is $\langle x\rangle=\int_{-\infty}^{+\infty}x\,|\Psi(x,t)|^2\,dx$.
$\langle p\rangle=m\frac{d\langle x\rangle}{dt}=-i\hbar\int_{-\infty}^{+\infty}\left(\Psi^*\frac{\partial\Psi}{\partial x}\right)dx$.
In general, $\langle Q(x,p)\rangle=\int_{-\infty}^{+\infty}\Psi^*\,Q\left(x,\frac{\hbar}{i}\frac{\partial}{\partial x}\right)\Psi\,dx$.
For example, $T=\frac{1}{2}mv^2=\frac{p^2}{2m}$, so $\langle T\rangle=-\frac{\hbar^2}{2m}\int_{-\infty}^{+\infty}\Psi^*\frac{\partial^2\Psi}{\partial x^2}\,dx$.
Quantum Mechanics I: Wave Functions
Wave functions: The wave function for a particle contains all of the information about that particle. If the particle moves in one dimension in the presence of a potential energy function U(x), the wave function $\Psi(x,t)$ obeys the one-dimensional Schrödinger equation: $-\frac{\hbar^2}{2m}\frac{\partial^2\Psi(x,t)}{\partial x^2}+U(x)\Psi(x,t)=i\hbar\frac{\partial\Psi(x,t)}{\partial t}$. (For a free particle on which no forces act, U(x)=0.) The quantity $|\Psi(x,t)|^2$, called the probability distribution function, determines the relative probability of finding a particle near a given position at a given time. If the particle is in a state of definite energy, called a stationary state, $\Psi(x,t)$ is a product of a function $\psi(x)$ that depends on only spatial coordinates and a function $e^{-iEt/\hbar}$ that depends on only time: $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$. For a stationary state, the probability distribution function is independent of time.
A spatial stationary-state wave function $\psi(x)$ for a particle that moves in one dimension in the presence of a potential-energy function U(x) satisfies the time-independent Schrödinger equation: $-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+U(x)\psi(x)=E\psi(x)$. More complex wave functions can be constructed by superimposing stationary-state wave functions. These can represent particles that are localized in a certain region, thus representing both particle and wave aspects.
Particle in a box: The energy levels for a particle of mass m in a box (an infinitely deep square potential well) with width L are given by the equation: $E_n=\frac{p_n^2}{2m}=\frac{n^2h^2}{8mL^2}=\frac{n^2\pi^2\hbar^2}{2mL^2}$ $(n=1,2,3,\ldots)$. The corresponding normalized stationary-state wave functions of the particle are given by the equation $\psi_n(x)=\sqrt{\frac{2}{L}}\sin\frac{n\pi x}{L}$ $(n=1,2,3,\ldots)$.
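As a small numerical illustration (my own sketch; the 1 nm box width and the electron mass are just example values, not something fixed by the summary above):

h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
L = 1e-9           # assumed box width: 1 nm
eV = 1.602e-19     # joules per electron volt

def box_energy(n):
    # E_n = n^2 h^2 / (8 m L^2) for an infinitely deep square well
    return n**2 * h**2 / (8 * m_e * L**2)

for n in (1, 2, 3):
    print(n, box_energy(n) / eV, "eV")   # roughly 0.38, 1.5, 3.4 eV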
Wave functions and normalization: To be a solution of the Schrödinger equation, the wave function $\psi(x)$ and its derivative $d\psi(x)/dx$ must be continuous everywhere except where the potential-energy function U(x) has an infinite discontinuity. Wave functions are usually normalized so that the total probability of finding the particle somewhere is unity: $\int_{-\infty}^{+\infty}|\psi(x)|^2\,dx=1$.
Finite potential well: In a potential well with finite depth $U_0$, the energy levels are lower than those for an infinitely deep well with the same width, and the number of energy levels corresponding to bound states is finite. The levels are obtained by matching wave functions at the well walls to satisfy the continuity of $\psi(x)$ and $d\psi(x)/dx$.
Potential barriers and tunneling: There is a certain probability that a particle will penetrate a potential-energy barrier even though its initial energy is less than the barrier height. This process is called tunneling.
Quantum harmonic oscillator: The energy levels for the harmonic oscillator (for which $U(x)=\frac{1}{2}k'x^2$) are given by the equation: $E_n=\left(n+\frac{1}{2}\right)\hbar\sqrt{\frac{k'}{m}}=\left(n+\frac{1}{2}\right)\hbar\omega$ $(n=0,1,2,\ldots)$. The spacing between any two adjacent levels is $\hbar\omega$, where $\omega=\sqrt{k'/m}$ is the oscillation angular frequency of the corresponding Newtonian harmonic oscillator.
Measurement in quantum mechanics: If the wave function of a particle does not correspond to a definite value of a certain physical property (such as momentum or energy), the wave function changes when we measure that property. This phenomenon is called wave-function collapse.
Particles Behaving as Waves
De Broglie waves and electron diffraction: Electrons and other particles have wave properties. A particle’s wavelength depends on its momentum in the same way as for photons: $\lambda=\frac{h}{p}=\frac{h}{mv}$, $E=hf$. A non-relativistic electron accelerated from rest through a potential difference $V_{ba}$ has a wavelength $\lambda=\frac{h}{p}=\frac{h}{\sqrt{2meV_{ba}}}$. Electron microscopes use the very small wavelengths of fast-moving electrons to make images with resolution thousands of times finer than is possible with visible light.
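For instance (a sketch with an assumed accelerating potential of 100 V):

import math

h = 6.626e-34     # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C
V_ba = 100.0      # assumed accelerating potential, volts

wavelength = h / math.sqrt(2 * m_e * e * V_ba)
print(wavelength)  # about 1.2e-10 m (~0.12 nm), comparable to atomic spacings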
The nuclear atom: The Rutherford scattering experiments show that most of an atom’s mass and all of its positive charge are concentrated in a tiny, dense nucleus at the center of the atom.
Atomic line spectra and energy levels: The energies of atoms are quantized: They can have only certain definite values, called energy levels. When an atom makes a transition from an energy level $E_i$ to a lower level $E_f$, it emits a photon of energy $E_i-E_f$: $hf=\frac{hc}{\lambda}=E_i-E_f$. The same photon can be absorbed by an atom in the lower energy level, which excites the atom to the upper level.
The Bohr model: In the Bohr model of the hydrogen atom, the permitted values of angular momentum are integral multiples of $h/2\pi$: $L_n=mv_nr_n=n\frac{h}{2\pi}$ $(n=1,2,3,\ldots)$. The integer multiplier n is called the principal quantum number for the level. The orbital radii are proportional to $n^2$: $r_n=\epsilon_0\frac{n^2h^2}{\pi me^2}=n^2a_0$, $v_n=\frac{1}{\epsilon_0}\frac{e^2}{2nh}$. The energy levels of the hydrogen atom are given by $E_n=-\frac{hcR}{n^2}=-\frac{13.60\ \mathrm{eV}}{n^2}$ $(n=1,2,3,\ldots)$, where R is the Rydberg constant.
The laser: The laser operates on the principle of stimulated emission, by which many photons with identical wavelength and phase are emitted. Laser operation requires a nonequilibrium condition called population inversion, in which more atoms are in a higher-energy state than are in a lower-energy state.
Blackbody radiation: The total radiated intensity (average power radiated per area) from a blackbody surface is proportional to the fourth power of the absolute temperature T: $I=\sigma T^4$ (Stefan-Boltzmann law). The quantity $\sigma=5.67\times 10^{-8}\ \mathrm{W/m^2\cdot K^4}$ is called the Stefan-Boltzmann constant. The wavelength $\lambda_m$ at which a blackbody radiates most strongly is inversely proportional to T: $\lambda_m T=2.90\times 10^{-3}\ \mathrm{m\cdot K}$ (Wien displacement law). The Planck radiation law gives the spectral emittance $I(\lambda)$ (intensity per wavelength interval in blackbody radiation): $I(\lambda)=\frac{2\pi hc^2}{\lambda^5(e^{hc/\lambda kT}-1)}$.
The Heisenberg uncertainty principle for particles: The same uncertainty considerations that apply to photons also apply to particles such as electrons. The uncertainty $\Delta E$ in the energy of a state that is occupied for a time $\Delta t$ is given by the equation $\Delta t\,\Delta E\geq\hbar/2$. |
27f9ee053ee249ee | The world's most-cited Neurosciences journals
This article is part of the Research Topic
Neurodynamics of will
Front. Integr. Neurosci., 12 October 2012 |
How quantum brain biology can rescue conscious free will
• 1Department of Anesthesiology, Center for Consciousness Studies, University of Arizona, Tucson, AZ, USA
• 2Department of Psychology, Center for Consciousness Studies, University of Arizona, Tucson, AZ, USA
Conscious “free will” is problematic because (1) brain mechanisms causing consciousness are unknown, (2) measurable brain activity correlating with conscious perception apparently occurs too late for real-time conscious response, consciousness thus being considered “epiphenomenal illusion,” and (3) determinism, i.e., our actions and the world around us seem algorithmic and inevitable. The Penrose–Hameroff theory of “orchestrated objective reduction (Orch OR)” identifies discrete conscious moments with quantum computations in microtubules inside brain neurons, e.g., 40/s in concert with gamma synchrony EEG. Microtubules organize neuronal interiors and regulate synapses. In Orch OR, microtubule quantum computations occur in integration phases in dendrites and cell bodies of integrate-and-fire brain neurons connected and synchronized by gap junctions, allowing entanglement of microtubules among many neurons. Quantum computations in entangled microtubules terminate by Penrose “objective reduction (OR),” a proposal for quantum state reduction and conscious moments linked to fundamental spacetime geometry. Each OR reduction selects microtubule states which can trigger axonal firings, and control behavior. The quantum computations are “orchestrated” by synaptic inputs and memory (thus “Orch OR”). If correct, Orch OR can account for conscious causal agency, resolving problem 1. Regarding problem 2, Orch OR can cause temporal non-locality, sending quantum information backward in classical time, enabling conscious control of behavior. Three lines of evidence for brain backward time effects are presented. Regarding problem 3, Penrose OR (and Orch OR) invokes non-computable influences from information embedded in spacetime geometry, potentially avoiding algorithmic determinism. In summary, Orch OR can account for real-time conscious causal agency, avoiding the need for consciousness to be seen as epiphenomenal illusion. Orch OR can rescue conscious free will.
Introduction: Three Problems with Free Will
We have the sense of conscious control of our voluntary behaviors, of free will, of our mental processes exerting causal actions in the physical world. But such control is difficult to scientifically explain for three reasons:
Consciousness and Causal Agency
What is meant, exactly, by “we” (or “I”) exerting conscious control? The scientific basis for consciousness, and “self,” are unknown, and so a mechanism by which conscious agency may act in the brain to exert causal effects in the world is also unknown.
Does Consciousness Come Too Late?
Brain electrical activity correlating with conscious perception of a stimulus apparently can occur after we respond to that stimulus, seemingly consciously. Accordingly, science and philosophy generally conclude that we act non-consciously, and have subsequent false memories of conscious action, and thus cast consciousness as epiphenomenal and illusory (e.g., Dennett, 1991; Wegner, 2002).
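Determinism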
Even if consciousness and a mechanism by which it exerts real-time causal action came to be understood, those specific actions could be construed as entirely algorithmic and inevitably pre-ordained by our deterministic surroundings, genetics and previous experience.
We do know that causal behavioral action and other cognitive functions derive from brain neurons, and networks of brain neurons, which integrate inputs to thresholds for outputs as axonal firings, which then collectively control behavior. Such actions may be either (seemingly, at least) conscious/voluntary, or non-conscious (i.e., reflexive, involuntary, or “auto-pilot”). The distinction between conscious and non-conscious activity [the “neural correlate of consciousness (NCC)”] is unknown, but often viewed as higher order emergence in computational networks of integrate-and-fire neurons in cortex and other brain regions (Scott, 1995). Cortical-cortical, cortical-thalamic, brainstem and limbic networks of neurons connected by chemical synapses are generally seen as neurocomputational frameworks for conscious activity, (e.g., Baars, 1988; Crick and Koch, 1990; Edelman and Tononi, 2000; Dehaene and Naccache, 2001), with pre-frontal and pre-motor cortex considered to host executive functions, planning and decision making.
But even if specific networks, neurons, membrane, and synaptic activities involved in consciousness were completely known, questions would remain. Aside from seemingly occurring too late for conscious control, neurocomputational activity fails to: (1) distinguish between conscious and non-conscious (“auto-pilot”) cognition, (2) account for long-range gamma synchrony electro-encephalography (“EEG”), the best measurable NCC (Singer and Gray, 1995), for which gap junction electrical synapses are required, (3) account for “binding” of disparate activities into unified percepts, (4) consider scale-invariant (“fractal-like,” “1/f”) brain dynamics and structure, and (5) explain the “hard problem” of subjective experience (e.g., Chalmers, 1996). A modified type of neuronal network can resolve some of these issues, but to fully address consciousness and free will, something else is needed. Here I propose the missing ingredient is finer scale, deeper order, molecular-level quantum effects in cytoskeletal microtubules inside brain neurons.
In particular, the Penrose–Hameroff “Orch OR” model suggests that quantum computations in microtubules inside brain neurons process information and regulate membrane and synaptic activities. Microtubules are lattice polymers of subunit proteins called “tubulin.” Orch OR proposes tubulin states in microtubules act as interactive information “bits,” and also as quantum superpositions of multiple possible tubulin states (e.g., quantum bits or qubits). During integration phases, tubulin qubits interact by entanglement, evolve and compute by the Schrödinger equation, and then reduce, or collapse to definite states, e.g., after 25 ms in gamma synchrony. The quantum state reduction is due to an objective threshold [“objective reduction (OR)”] proposed by Penrose, accompanied by a moment of conscious awareness. Synaptic inputs and other factors “orchestrate” the microtubule quantum computations, hence “orchestrated objective reduction (Orch OR).”
Orch OR directly addresses conscious causal agency. Each reduction/conscious moment selects particular microtubule states which regulate neuronal firings, and thus control conscious behavior. Regarding consciousness occurring “too late,” quantum state reductions seem to involve temporal non-locality, able to refer quantum information both forward and backward in what we perceive as time, enabling real-time conscious causal action. Quantum brain biology and Orch OR can thus rescue free will.
Consciousness, Brain, and Causality
Consciousness involves awareness, phenomenal experience (composed of what philosophers term “qualia”), sense of self, feelings, apparent choice and control of actions, memory, a model of the world, thought, language, and, e.g., when we close our eyes, or meditate, internally-generated images and geometric patterns. But what consciousness actually is remains unknown.
Most scientists and philosophers view consciousness as an emergent property of complex computation among networks of the brain's 100 billion “integrate-and-fire” neurons. In digital computers, discrete voltage levels represent information units (e.g., “bits”) in silicon logic gates. McCulloch and Pitts (1943) arranged logic gates as integrate-and-fire silicon neurons, leading to “perceptrons” (Rosenblatt, 1962; Figure 1) and self-organizing “artificial neural networks” capable of learning and self-organized behavior. Similarly, according to the standard “Hodgkin and Huxley” (1952) model, biological neurons are “integrate-and-fire” threshold logic devices in which multiple branched dendrites and a cell body (soma) receive and integrate synaptic inputs as membrane potentials. The integrated potential is then compared to a threshold potential at the axon hillock, or axon initiation segment (AIS). When AIS threshold is reached by the integrated potential, an all-or-none action potential “firing,” or “spike” is triggered as output, conveyed along the axon to the next synapse. Axonal firings can manifest will and behavior, e.g., causing other neurons to move muscles or speak words.
Figure 1. Three characterizations of integrate-and-fire neurons. Top: Biological neuron with multiple dendrites and one cell body (soma) receive and integrate synaptic inputs as membrane potentials which are compared to a threshold at the axon initiation segment (AIS). If threshold is met, axonal spikes/firings are triggered along a single axon which branches distally to convey outputs. Middle: computer-based artificial neuron (e.g., a “perceptron,” Rosenblatt, 1962) with multiple weighted inputs and single branched output. Bottom: model neuron (see subsequent figures) showing the same essential features with three inputs on one dendrite and single axonal output which branches distally.
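To make the integrate-and-fire picture concrete, here is a minimal leaky integrate-and-fire sketch (a generic textbook toy model, not the Hodgkin–Huxley equations and not anything specific to this paper; all parameter values are illustrative assumptions):

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.054, v_reset=-0.070, resistance=1e8):
    # Membrane potential integrates synaptic input and leaks back toward rest;
    # crossing the threshold (the "AIS" in the text) triggers a spike and a reset.
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# 200 ms of a constant 0.2 nA input yields a regular train of output spikes.
print(simulate_lif([0.2e-9] * 2000))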
Some contend that consciousness emerges from axonal firing outputs, “volleys,” or “explosions” from complex neurocomputation (Koch, 2004; Malach, 2007). But coherent axonal firings are preceded and caused by synchronized dendritic/somatic integrations, suggesting consciousness involves neuronal dendrites and cell bodies/soma, i.e., in integration phases of “integrate-and-fire” sequences (Pribram, 1991; Eccles, 1992; Woolf and Hameroff, 2001; Tononi, 2004). Integration implies merging and consolidation of multiple input sources to one output, e.g., chemical synaptic inputs integrated toward threshold for firing, commonly approximated as linear summation of dendritic/somatic membrane potentials. However actual integration is active, not passive, and involves complex logic and signal processing in dendritic spines, branch points and local regions, amplification of distal inputs, and changing firing threshold at the AIS trigger zone (Shepherd, 1996; Sourdet and Debanne, 1999; Poirazi and Mel, 2001). Dendrites and soma are primary sources of EEG, and sites of anesthetic action which erase consciousness with little or no effects on axonal firing capabilities. Arguably, dendritic/somatic integration is closely related to consciousness, with axonal firings the outputs of conscious (or non-conscious) processes. Nonetheless, according to the Hodgkin–Huxley model, integration is assumed to be completely algorithmic and deterministic (Figure 2A), leaving no apparent room for conscious free will.
Figure 2. Integrate-and-fire neuronal behaviors. (A) The Hodgkin–Huxley model predicts that membrane potentials integrated in dendrites and soma reach a specific, narrow threshold potential at the proximal axon (AIS), and that the neuron fires with very low temporal variability for given inputs. (B) Recordings from cortical neurons in awake animals (Naundorf et al., 2006) show a large variability in effective firing threshold and timing. Some unknown “x-factor” (related to consciousness?) exerts causal influence on firing and behavior. Here, quantum temporal non-locality results in backward time referral, suggested as the “x-factor” modulating firing threshold.
However, Naundorf et al. (2006) showed that firing threshold in cortical neurons in brains of awake animals (compared to neurons in slice preparations) varies widely on a spike-to-spike, firing-to-firing basis. Some factor other than the integrated AIS membrane potential contributes to firing, or not firing (Figure 2B). Firings control behavior. This “x-factor,” modulating integration and adjusting firing threshold and timing, is perfectly positioned for causal action, for conscious free will. What might it involve? Figure 2B indicates possible modification of integration and firing threshold by backward time referral.
Anatomically, a source for integration and firing threshold modification comes from lateral connections among neurons via gap junctions, or electrical synapses (Figure 3). Gap junctions are membrane protein complexes in adjacent neurons (or glia) which fuse the two cells and synchronize their membrane polarization states e.g., in gamma synchrony EEG (Dermietzel, 1998; Draguhn et al., 1998; Galarreta and Hestrin, 1999; Bennett and Zukin, 2004; Fukuda, 2007), the best measurable NCC (Gray and Singer, 1989; Fries et al., 2002; Kokarovtseva et al., 2009). Gap junction-connected cells also have continuous intracellular spaces, as open gap junctions between cells act like windows, or doors between adjacent rooms. Neurons connected by dendritic-dendritic gap junctions have synchronized local field potentials (EEG) in integration phase, but not necessarily synchronous axonal firing outputs. Thus gap junction synchronized dendritic networks can collectively integrate inputs, and provide an x-factor in selectively controlling firing outputs (Hameroff, 2010). Gap junction dynamics may also enable mobile agency in the brain. As gap junctions open and close, synchronized zones of collective integration and conscious causal agency can literally move through the brain, modulating integration, firing thresholds and behavior (Figure 4; Hameroff, 2010; Ebner and Hameroff, 2011). As consciousness can occur in different brain locations at different times, the NCC may be a mobile zone exerting conscious causal agency in various brain regions at different times.
Figure 3. (A) Dendrites of adjacent neurons linked by gap junction which remain closed. The gap junction connection is “sideways,” lateral to the flow of synaptic information. (B) Dendritic-dendritic gap junction open, synchronizing (vertical stripes) electrophysiology and enabling collective integration among gap junction-connected neurons.
Figure 4. Two timesteps in a neurocomputational network of integrate-and-fire neurons. Inputs come from left, outputs go to top, bottom and right. Dendritic-dendritic gap junctions may open, e.g., between striped dendrites and soma to form “synchronized webs.” As gap junctions open and close, the synchronized web can move through the network, e.g., Step 1, 2. Mobile webs are candidates for the neural correlates of consciousness (NCC). Outputs marked by * reflect collective integration and suggest conscious causal agency.
But why would such causal agency be conscious? And with membranes synchronized, how do gap junction-connected neurons share and integrate information? Evidence points to the origins of behavior and consciousness at a deeper order, finer scale within neurons, e.g., in cytoskeletal structures such as microtubules which organize cell interiors.
A Finer Scale?
Single cell organisms like Paramecium swim about, avoid obstacles and predators, find food and mates, and have sex, all without any synaptic connections. They utilize cytoskeletal structures such as microtubules (in protruding cilia and within their internal cytoplasm) for sensing and movement. The single cell slime mold Physarum polycephalum sends out numerous tendrils composed of bundles of microtubules, forming patterns which, seeking food, can solve problems and escape a maze (e.g., Adamatzky, 2012). Observing the purposeful behavior of single cell creatures, neuroscientist Charles Sherrington (1957) remarked: “of nerve there is no trace, but perhaps the cytoskeleton might serve.”
Interiors of animal cells are organized by the cytoskeleton, a scaffolding-like protein network of microtubules, microtubule-associated proteins (MAPs), actin and intermediate filaments (Figure 5A). Microtubules are cylindrical polymers 25 nm (nm = 10−9 m) in diameter, composed usually of 13 longitudinal protofilaments, each a chain of the peanut-shaped protein tubulin (Figure 5B). Microtubules self-assemble from tubulin, a ferroelectric dipole arranged within microtubules in two types of hexagonal lattices (A-lattice and B-lattice; Tuszynski et al., 1995), each slightly twisted, resulting in differing neighbor relationships among each subunit and its six nearest neighbors. Pathways along contiguous tubulins in the A-lattice form helical pathways which repeat every 3, 5, and 8 rows on any protofilament (the Fibonacci series; Figure 5B).
Figure 5. (A) Axon terminal (left) with two internal microtubules releasing neurotransmitters into synapse and onto receptors in membrane of dendritic spine. Actin filaments (as well as soluble second messengers, not shown) connect to cytoskeletal microtubules in main dendrite. Dendritic microtubules (right) are arranged in local networks, interconnected by microtubule-associated proteins (MAPs). (B) Larger scale showing two types of microtubule information processing. Top row: four timesteps in a microtubule automata simulation, each tubulin holding a bit state, switching e.g., at 10 megahertz (Rasmussen et al., 1990; Sahu et al., 2012). Bottom row: four topological bits in a microtubule. Information represented as specific helical pathways of conductance and information transfer. Microtubule mechanical resonances come into play (Hameroff et al., 2002; Sahu et al., 2012).
Each tubulin may differ from its neighbors by genetic variability, post-translational modifications, binding of ligands and MAPs, and moment to moment dipole state transitions. Thus microtubules have enormous capacity for complex information representation and processing, are particularly prevalent in neurons (10^9 tubulins/neuron), and uniquely stable and configured in dendrites and cell bodies (Craddock et al., 2012a). Microtubules in axons (and non-neuronal cells) are arrayed radially, extending continuously (all with the same polarity) from the centrosome near the nucleus, outward toward the cell membrane. However, microtubules in dendrites and cell bodies are interrupted, of mixed polarity, stabilized, and arranged in local recursive networks suitable for learning and information processing (Figure 5A; Rasmussen et al., 1990).
Neuronal microtubules regulate synapses in several ways. They serve as tracks and guides for motor proteins (dynein and kinesin) which transport synaptic precursors from cell body to distal synapses, encountering, and choosing among several dendritic branch points and many microtubules. The guidance mechanism for such delivery, choosing the proper path, is unknown, but seems to involve the MAP tau as a traffic signal (placement of tau at specific sites on microtubules being the critical feature). In Alzheimer's disease, tau is hyperphosphorylated and dislodged from destabilized microtubules. Disruption of microtubules and formation of neurofibrillary tangles composed of free, hyperphosphorylated tau correlates with memory loss in Alzheimer's disease (Matsuyama and Jarvik, 1989; Craddock et al., 2012b), and post-anesthetic cognitive dysfunction (Craddock et al., 2012c).
Due to their lattice structure and direct involvement in organizing cellular functions, microtubules have been suggested to function as information processing devices. After Sherrington's (1957) broad observation about cytoskeletal information processing, Atema (1973) proposed that tubulin conformational changes propagate as signals along microtubules. Hameroff and Watt (1982) suggested that microtubule lattices act as two-dimensional Boolean computational switching matrices with input/output occurring via MAPs. Microtubule information processing has also been viewed in the context of cellular (“molecular”) automata in which tubulin states interact with hexagonal lattice neighbor tubulin states by dipole couplings, synchronized by biomolecular coherence as proposed by Fröhlich (1968, 1970, 1975); (Smith et al., 1984; Rasmussen et al., 1990). Simulations of microtubule automata based on tubulin states show rapid information integration and learning. Recent evidence indicates microtubules have resonances at frequency ranges from 10 kHz to 10 MHz, and possibly higher (Sahu et al., 2012). Topological computing can also occur in which helical pathways through the skewed hexagonal lattice are the relevant states, or bits (Figure 2B, bottom). Particular resonance frequencies may correlate with specific helical pathways.
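Purely as an illustrative toy (my own sketch, not the published microtubule-automata rules or resonance frequencies), the flavor of such lattice computation can be shown with a tiny cellular automaton in which each "tubulin" bit updates from the states of its lattice neighbors:

import random

def automaton_step(lattice):
    # Each site adopts the majority state of its four lattice neighbors
    # (periodic boundaries); a cartoon of dipole-coupled neighbor interactions.
    rows, cols = len(lattice), len(lattice[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total = (lattice[(r - 1) % rows][c] + lattice[(r + 1) % rows][c]
                     + lattice[r][(c - 1) % cols] + lattice[r][(c + 1) % cols])
            new[r][c] = 1 if total >= 2 else 0
    return new

lattice = [[random.randint(0, 1) for _ in range(20)] for _ in range(13)]  # 13 "protofilaments"
for _ in range(10):
    lattice = automaton_step(lattice)
print(sum(map(sum, lattice)), "of", 13 * 20, "sites in state 1 after 10 steps")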
With roughly 10^9 tubulins per neuron switching at e.g., 10 MHz (10^7 per second), the potential capacity for microtubule-based information processing is 10^16 operations/s per neuron. Integration in microtubules (influenced by encoded memory), and synchronized in collective integration by gap junctions, may be an x-factor in altering firing threshold and exerting causal agency in sets of synchronized neurons. But even a deeper order, finer scale microtubule-based process in a self-organizing zone of conscious agency would still be algorithmic and deterministic, and fail to address completely the problems of consciousness and free will.
And another problem looms.
Is Consciousness Too Late?
Several lines of evidence suggest that real time conscious action is an illusion, that we act non-consciously and have belated, false impressions of conscious causal action. This implies that free will does not exist, that consciousness is epiphenomenal, and that we are, as Huxley (1893/1986) bleakly summarized, “merely helpless spectators.” Apparent evidence against real-time conscious action includes the following:
Sensory Consciousness Comes Too Late for Conscious Response
Neural correlates of conscious perception occur 150–500 ms after impingement on our sense organs, apparently too late for causal efficacy in seemingly conscious perceptions and willful actions, often initiated or completed within 100 ms after sensory impingement. Velmans (1991, 2000) listed a number of examples: analysis of sensory inputs and their emotional content, phonological, and semantic analysis of heard speech and preparation of one's own spoken words and sentences, learning and formation of memories, and choice, planning and execution of voluntary acts. Consequently, the subjective feeling of conscious control of these behaviors is deemed illusory (Dennett, 1991; Wegner, 2002).
In speech, evoked potentials (EPs) indicating conscious word recognition occur about 400 ms after auditory input; however, semantic meaning is appreciated (and response initiated) after only 200 ms. As Velmans points out, only two phonemes are heard by 200 ms, and an average of 87 words share their first two phonemes. Even when contextual effects are considered, semantic processing and initiation of response occur before conscious recognition (Van Petten et al., 1999).
Gray (2004) observes that in tennis “The speed of the ball after a serve is so great, and the distance over which it has to travel so short, that the player who receives the serve must strike it back before he has had time consciously to see the ball leave the server's racket. Conscious awareness comes too late to affect his stroke.” McCrone (1999): “[for] tennis players … facing a fast serve … even if awareness were actually instant, it would still not be fast enough ….” Nonetheless tennis players claim to see the ball consciously before they attempt to return it.
Readiness Potentials
Kornhuber and Deecke (1965) recorded brain electrical activity over pre-motor cortex in subjects who were asked to move their finger randomly, at no prescribed time. They found that brain electrical activity preceded finger movement by ~800 ms, calling this activity the readiness potential (“RP,” Figure 6A). Libet and colleagues (1983) repeated the experiment, except they also asked subjects to note precisely when they consciously decided to move their finger. (To do so, and to avoid delays caused by verbal report, Libet et al. used a rapidly moving clock and asked subjects to note when on the clock they consciously decided to move their finger). This conscious decision came ~200 ms before actual finger movement, hundreds of milliseconds after onset of the RP. Libet and many authorities concluded that the RP represented non-conscious determination of movement, that many seemingly conscious actions are actually initiated by nonconscious processes, and that conscious intent was an illusion. Consciousness apparently comes too late. However, as shown in Figure 6B, temporal non-locality enabling backward time referral of (quantum) information from the moment of conscious intent can account for necessary RP preparation.
Figure 6. The “readiness potential (RP)” (Libet et al., 1983). (A) Cortical potentials recorded from a subject instructed to move his/her hand whenever he/she feels ready, and to note when the decision was made (Conscious intent), followed quickly by the finger actually moving. (Time between Conscious intent, and finger moving is fixed.) Readiness potential, RP, preceding Conscious intent is generally interpreted as representing the Non-conscious choice to move the finger, with Conscious intent being illusion. (B) Assuming RP is necessary preparation for conscious finger movement, Actual conscious intent could initiate the earlier RP by (quantum) temporal non-locality and backward time referral, enabling preparation while preserving real time conscious intent and control.
And yet we feel as though we act consciously in real time. To account for this paradox, Dennett (1991); (cf. Dennett and Kinsbourne, 1992) described real time conscious perception and action as retrospective construction, as illusion. His multiple drafts model proposed sensory inputs and cognitive processing produced tentative contents under continual revision, with the definitive, final edition only inserted into memory, overriding previous drafts (“Orwellian Revisionism” after George Orwell's fictional, retroactive “Ministry of Truth” in the novel 1984). Perceptions are edited and revised over hundreds of milliseconds, a final version inserted into memory. In this view (more or less the standard in modern philosophy and neuroscience) the brain retrospectively creates content or judgment, e.g., of real time conscious control which is recorded in memory as veridical truth. In other words, we act non-consciously in real time, but then falsely remember acting consciously. Consciousness, in this view, is an epiphenomenal illusion occurring after-the-fact. We are living in the past.
For example in the “color phi” effect (Kolers and von Grunau, 1976) a red spot appears briefly on the left side of a screen, followed after a pause by a green spot on the right side. Conscious observers report one spot moving back and forth, changing to green halfway across the screen, the brain seemingly “filling in” (Figure 7). Yet after a sequence of such observations, if the spot on the right is suddenly red (instead of green), the subject is not fooled and fills in continuously with red halfway across. Does the brain know in advance to which color the dot will change? No, says Dennett. The brain fills in the proper color in a subsequent draft, and belatedly imprints it into conscious memory. Consciousness occurs after the fact (Figure 7A). Any conscious response to the color change would occur well after presentation, dooming free will. However a quantum explanation with temporal non-locality and backward time referral enables constructive “filling in” from near future brain activity, allowing real time conscious perception (Figure 7B). Is there any evidence for backward time effects in the brain?
Figure 7. In the “color phi” phenomenon (Kolers and von Grunau, 1976). A red circle appears on the left side of a screen, disappears, and then, a fraction of a second later, a green circle appears on the right side. An observer consciously “sees” a red circle moving continuously from left to right, changing to green halfway across. (A) According to Dennett's “Orwellian Revisionism,” the brain constructs, or fills in the movement and transition after the fact, and inserts a constructed perception into memory. Real-time perception is not conscious. (B) In a “Quantum Explanation,” temporal non-locality and backward time referral allow real-time, veridical conscious perception.
Backward Time Effects in the Brain? Three Lines of Evidence
Libet's “Open Brain” Sensory Experiments
In addition to volitional studies (moving a finger), Libet and colleagues studied the timing of conscious sensory experience in awake, cooperative patients undergoing brain surgery with local anesthesia (e.g., Libet et al., 1964, 1979; Libet, 2004). Working with his neurosurgical colleagues, Libet was able to record from and stimulate specific areas of these patients' somatosensory cortex, e.g., the area corresponding to the skin of each patient's hand, as well as the hand itself (Figures 8 and 9), while communicating with the conscious patients.
Figure 8. Cortical potentials in Libet's sensory experiments. (A) Peripheral stimulation, e.g., at the hand, results in near-immediate conscious experience of the stimulation, an evoked potential EP at ~30 ms in the “hand area” of somatosensory cortex, and several 100 ms of ongoing cortical electrical activity. (B) Direct cortical stimulation of the somatosensory cortical hand area for several 100 ms results in no EP, but in ongoing cortical activity and conscious sensory experience referred to the hand, though only after ~500 ms. Libet termed the 500 ms of cortical activity resulting in conscious experience “neuronal adequacy.”
Figure 9. Libet's sensory experiments, continued. (A) Libet et al. stimulated the medial lemniscus, in the sensory pathway to thalamus, to produce an EP (~30 ms) in somatosensory cortex, but with only brief post-EP stimulation, resulting in only brief cortical activity. There was no apparent “neuronal adequacy,” and no conscious experience. An EP and several 100 ms of post-EP cortical activity (neuronal adequacy) were required for conscious experience at the time of the EP. (B) To account for his findings, Libet concluded that subjective information was referred backward in time from neuronal adequacy (~500 ms) to the EP.
As depicted in Figure 8A, peripheral stimulus, e.g., of the skin of the hand, resulted in an “EP” spike in the somatosensory cortical area for the hand ~30 ms after skin contact, consistent with the time required for a neuronal signal to travel from hand to spinal cord, thalamus, and brain. The stimulus also caused several 100 ms of ongoing cortical activity following the EP. Subjects reported conscious experience of the stimulus (using Libet's rapidly moving clock) near-immediately, e.g., at the time of the EP at 30 ms.
Libet also stimulated the “hand area” of subjects' brain somatosensory cortex directly (Figure 8B). This type of stimulation did not cause an EP spike, but did result in ongoing brain electrical activity. Conscious sensation referred to (“felt in”) the hand occurred, but only after stimulation and ongoing brain activity lasting up to 500 ms (Figure 8B). This requirement of ongoing, prolonged electrical activity (what Libet termed “neuronal adequacy”) to produce conscious experience (“Libet's 500 ms”) was subsequently confirmed by Amassian et al. (1991), Ray et al. (1999), Pollen (2004) and others.
But if hundreds of milliseconds of brain activity are required for neuronal adequacy, how can conscious sensory experience occur at 30 ms? To address this issue, Libet also performed experiments in which stimulation of thalamus resulted in an EP at 30 ms, but only brief ongoing activity, i.e., without neuronal adequacy (Figure 9A). No conscious experience occurred. Libet concluded that for real-time conscious perception (e.g., at the 30 ms EP), two factors were necessary: an EP at 30 ms, and several 100 ms of ongoing cortical activity (neuronal adequacy) after the EP. Somehow, apparently, the brain seems to know what will happen after the EP. Libet concluded the hundreds of milliseconds of ongoing cortical activity (“neuronal adequacy”) is the sine qua non for conscious experience—the NCC, even if it occurs after the conscious experience. To account for his results, he further concluded that subjective information is referred backwards in time from the time of neuronal adequacy to the time of the EP (Figure 9B). Libet's backward time assertion was disbelieved and ridiculed (e.g., Churchland, 1981; Pockett, 2002), but never refuted (Libet, 2002, 2003).
Pre-Sentiment and Pre-Cognition
Electrodermal recording measures skin impedance, usually with a probe wrapped around a finger, as an index of autonomic, sympathetic neuronal activity causing changes in blood flow and sweating, in turn triggered by emotional responses in the brain. Over many years, researchers (Bierman and Radin, 1997; Bierman and Scholte, 2002; Radin, 2004) have published a number of well-controlled studies using electrodermal activity, fMRI and other methods to look for emotional responses, e.g., to viewing images presented at random times on a computer screen. They found, not surprisingly, that highly emotional (e.g., violent, sexual) images elicited greater responses than neutral, non-emotional images. But surprisingly, the changes occurred half a second to two seconds before the images appeared. They termed the effect pre-sentiment because the subjects were not consciously aware of the emotional feelings. Non-conscious emotional sentiment (i.e., feelings) appeared to be referred backward in time. These studies were published in the parapsychology literature, as mainstream academic journals refused to consider them.
Bem (2012) published “Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect” in the mainstream J. Pers. Soc. Psychol. The article reported on eight studies showing statistically significant backward time effects, most involving non-conscious influence of future emotional effects (e.g., erotic or threatening stimuli) on cognitive choices. Studies by others have reported both replications of, and failures to replicate, these controversial results.
Quantum Delayed Choice Experiments
In the famous “double slit experiment,” quantum entities (e.g., photons, electrons) can behave as either waves or particles, depending on the method chosen to measure them. Wheeler (1978) described a thought experiment in which the measurement choice (by a conscious human observer) was delayed until after the electron or other quantum entity passed through the slits, presumably as either wave or particle. Wheeler suggested the observer's delayed choice could retroactively influence the behavior of the electrons, e.g., as waves or particles. The experiment was eventually performed (Kim et al., 2000) and confirmed Wheeler's prediction: conscious choices can affect previous events, as long as the events had not been consciously observed in the interim.
In “delayed choice entanglement swapping,” originally a thought experiment proposed by Asher Peres (2000), Ma et al. (2012) went a step further. Entanglement is a characteristic feature of quantum mechanics in which unified quantum particles are separated but remain somehow connected, even over distance. Measurement or perturbation of one separated-but-still-entangled particle instantaneously affects the other, what Einstein referred to (mockingly) as “spooky action at a distance.” Despite its bizarre nature, entanglement has been demonstrated repeatedly, and is the foundation for quantum cryptography, quantum teleportation and quantum computing (Deutsch, 1985). In entanglement swapping, two pairs of unified/entangled particles are separated, and one from each pair is sent to two measurement devices, each associated with a conscious observer (“Alice” and “Bob,” as is the convention in such quantum experiments). The other entangled particle from each pair is sent to a third observer, “Victor.” How Victor decides to measure the two particles (as an entangled pair, or as separable particles) determines whether Alice and Bob observe them as entangled (showing quantum correlations) or separable (showing classical correlations). This happens even if Victor decides after Alice's and Bob's devices have measured them (but before Alice and Bob consciously view the results). Thus, conscious choice affects behavior of previously measured, but unobserved, events.
How can backward time effects be explained scientifically? The problem may be related to our perception of time in classical (non-quantum) physics. Anton Zeilinger, senior author on the Ma et al. study, said: “Within a naïve classical worldview, quantum mechanics can even mimic an influence of future actions on past events.”
Time and Conscious Moments
What is time? St. Augustine remarked that when no one asked him, he knew what time was; however, when someone asked him, he did not. The (“naïve”) worldview according to classical Newtonian physics is that time is either a process which flows, or a dimension in 4-dimensional space-time along which processes occur. But if time flows, it would do so in some medium or dimension (e.g., minutes per what?). If time is a dimension, why would processes occur unidirectionally in time? Yet we consciously perceive a unidirectional time-like reality. An alternative explanation is that time does not exist as process or dimension, but as a collage of discrete configurations of the universe, connected in some way by consciousness and memory (Barbour, 1999). This follows Leibniz's “monads” (e.g., Rescher, 1991; cf. Spinoza, 1677), momentary, snapshot-like arrangements of spatiotemporal reality based on Mach's principle that the universe has an underlying structure related to mass distribution (also a foundation of Einstein's general relativity). Whitehead (1929, 1933) expounded on Leibniz's monads, conferring mental aspects to occasions occurring in a wider field of “proto-conscious experience” (“occasions of experience”). These views from philosophy and physics link consciousness to discrete events in the fine structure of physical reality.
Consciousness has also been seen as discrete events in psychology, e.g., James's (1890) “specious present, the short duration of which we are immediately and incessantly sensible” (though James was vague about duration, and also described a continual “stream of consciousness”). The “perceptual moment” theory of Stroud (1956) described consciousness as a series of discrete events, like sequential frames of a movie [modern film and video present 24–72 frames/s, 24–72 cycles/s, i.e., Hertz (“Hz”)]. Periodicities for perception and reaction times are in the range of 20–50 ms, i.e., gamma synchrony EEG (30–90 Hz). Slower periods, e.g., 4–7 Hz theta frequency, with nested gamma waves may correspond with saccades and visual gestalts (Woolf and Hameroff, 2001; VanRullen and Koch, 2003).
Support for consciousness as sequences of discrete events is also found in Buddhism, trained meditators describing distinct “flickerings” in their experience of pure undifferentiated awareness (Tart, 1995, pers. communication). Buddhist texts portray consciousness as “momentary collections of mental phenomena,” and as “distinct, unconnected and impermanent moments which perish as soon as they arise.” Buddhist writings even quantify the frequency of conscious moments. For example the Sarvaastivaadins (von Rospatt, 1995) described 6,480,000 “moments” in 24 h (an average of one “moment” per 13.3 ms, 75 Hz), and some Chinese Buddhism as one “thought” per 20 ms (50 Hz), both in gamma synchrony range.
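As a simple check of these figures, the small sketch below (my own arithmetic, not from the cited texts) converts the quoted count of discrete moments into a period and frequency, confirming that it falls in the gamma range discussed next.

```python
# Quick arithmetic check (illustrative only) of the "moments per day" figure quoted above.
SECONDS_PER_DAY = 24 * 60 * 60

def moment_period_ms(moments_per_day):
    """Average duration of one discrete 'moment', in milliseconds."""
    return SECONDS_PER_DAY * 1000 / moments_per_day

period = moment_period_ms(6_480_000)   # Sarvaastivaadin count of moments in 24 h
print(f"~{period:.1f} ms per moment ≈ {1000 / period:.0f} Hz")
# 6,480,000 moments in 24 h -> ~13.3 ms per moment, ~75 Hz (gamma range);
# one "thought" per 20 ms likewise corresponds to 50 Hz.
```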
Long-range gamma synchrony in the brain is the best measurable NCC. In surgical patients undergoing general anesthesia, gamma synchrony between frontal and posterior cortex is the specific marker which disappears with loss of consciousness and returns upon awakening (Hameroff, 2006). In what may be considered enhanced or optimized levels of consciousness, high frequency (more than 80 Hz) phase coherent gamma synchrony was found spanning cortical regions in meditating Tibetan monks, at the highest amplitude ever recorded (Lutz et al., 2004). Faster rates of conscious moments may correlate with subjective perception of slower time flow, e.g., as in a car accident, or altered state. But what are conscious moments? Shimony (1993) recognized that Whitehead's occasions were compatible with quantum state reductions, or “collapses of the wave function.” Several lines of evidence suggest consciousness could be identified with sequences of quantum state reductions. What exactly are quantum state reductions?
Consciousness and Quantum State Reduction
Reality is described by quantum physical laws which appear to reduce to classical rules (e.g., Newton's laws of motion) at certain scale limits, though those limits are unknown. According to quantum physical laws:
• Objects/particles may exist in two or more places or states simultaneously—more like waves than particles and governed by a quantum wavefunction. This property of multiple coexisting possibilities is known as quantum superposition.
• Multiple objects/particles can be unified, acting as a single coherent object governed by one wavefunction. If a component is perturbed, others feel it and react, e.g., in Bose-Einstein condensation.
• If unified objects are spatially separated they remain unified. This non-locality is also known as quantum entanglement.
But we don't see quantum superpositions in our macroscale world. How and why do quantum laws reduce to classical behavior? Various interpretations of quantum mechanics address this issue:
• Copenhagen and the conscious observer: In the early days of quantum mechanics, Bohr (1934/1987) and colleagues recognized that quantum superpositions persist until measured by a device (the “Copenhagen interpretation”, after Bohr's Danish origin). Wigner (1961) and von Neumann (1932/1955) further stipulated that the superposition continues in the device until the results are observed by a conscious human, that conscious observation “collapses the wave function.” These interpretations enabled quantum experiments to flourish, but put consciousness outside science, and failed to account for fundamental reality. Schrödinger (1935) took exception, posing his famous (“Schrödinger's cat”) thought experiment in which the fate of a cat in a box is tied to a quantum superposition, reasoning that, according to the Wigner and von Neumann interpretation, the cat would remain both dead and alive until the box is opened and observed by a conscious human. Despite the absurdity, limitations on quantum superposition remain unknown.
• The multiple worlds view suggests each superposition is a separation in reality, evolving to a new universe (Everett, 1957). There is no collapse, but an infinity of realities (and conscious minds) is required.
• David Bohm's interpretation (Bohm and Hiley, 1993) avoids reduction/collapse by postulating another layer of reality. Matter exists as objects guided by complex “pilot” waves of possibility.
• Henry Stapp (1993) views the universe as a single quantum wave function. Reduction within the brain is a conscious moment (akin to Whitehead's “occasion of experience”—Whitehead, 1929, 1933). Reduction/collapse is consciousness, but its cause and distinction between universal wave function and that within the brain are unclear.
• In decoherence theory (e.g., Zurek, 2003) any interaction (loss of isolation) of a quantum superposition with a classical system (e.g., through heat, direct interaction or information exchange) erodes the quantum system. But (1) the fate of isolated superpositions is not addressed, (2) no quantum system is ever truly isolated, (3) decoherence doesn't actually disrupt superposition, just buries it in noise, and (4) some quantum processes are enhanced by heat and/or noise.
• An objective threshold for quantum state reduction (OR) exists due to, e.g., the number of superpositioned particles (GRW theory—Ghirardi et al., 1986), or a factor related to quantum gravity or underlying properties of spacetime geometry, as in the OR proposals of Károlyházy et al. (1986), Diósi (1989), and Penrose (1989, 1996). Penrose OR also includes consciousness, each OR event being associated with a moment of conscious experience.
Penrose (1989, 1994) uniquely brings consciousness into physics, and directly approaches superpositioned objects as actual separations in underlying reality at its most basic level (fundamental space-time geometry at the Planck scale of 10^−33 cm). Separation is akin to the multiple worlds view in which each possibility branches to form and evolve its own universe. However, according to Penrose, the space-time separations are unstable and (instead of branching off) spontaneously reduce (self-collapse) to one particular space-time geometry or another. This OR self-collapse occurs at a threshold given by E = ħ/t, where E is the magnitude (gravitational self-energy) of the superposition, e.g., related to the number of superpositioned tubulins (E is also proportional to intensity of conscious experience), ħ is the reduced Planck constant (Planck's constant over 2π), and t is the time interval at which superposition E will self-reduce by OR, choosing classical states in a moment of consciousness (Figure 10).
Figure 10. Location or state of a particle/object is equivalent to curvature in underlying spacetime geometry. From left, a superposition develops over time, e.g., a particle separating from itself, shown as simultaneous curvatures in opposite directions. The magnitude of the separation is related to E, the gravitational self-energy. At a particular time t, E reaches threshold by E = ħ/t, and spontaneous OR occurs, one particular curvature is selected. This OR event is accompanied by a moment of conscious experience (“NOW”), its intensity proportional to E. Each OR event also results in temporal non-locality, referring quantum information backward in classical time (curved arrows).
Penrose E = ħ/t is related to the Heisenberg “uncertainty principle” which asserts a fundamental limit to the precision with which values for certain pairs of physical properties can be simultaneously known. The most common examples are uncertainty in position (x) and momentum (p) of a particle, given by their standard deviations (σx and σp) whose product σxσp is the uncertainty which must meet or exceed a fundamental limit related to ħ, Planck's constant over 2π. The uncertainty principle is thus usually written as σxσp ≥ ħ/2. Uncertainty can pertain to properties other than position and momentum, and Penrose equated superposition/separation to uncertainty in the underlying structure of space-time itself. Heisenberg's uncertainty principle imposes a limit, causing quantum state reduction.
Space-time uncertainty is expressed as the gravitational self-energy E, the energy required for an object of mass m and radius r (or its equivalent spacetime geometry) to separate from itself by a distance a. For Orch OR, E was calculated for superposition/separation of tubulin proteins at three levels, with three sets of m, r, and a. E was calculated for separation at the level of (1) the entire tubulin protein, (2) atomic nuclei within tubulin, and (3) nucleons (protons and neutrons) within tubulin atomic nuclei. Separation at the level of atomic nuclei (femtometers) was found to dominate, and used to calculate E (in terms of number of tubulins) for various values of time t corresponding with neurophysiology, e.g., 25 ms for gamma synchrony at 40 Hz. For a conscious event occurring at 25 ms, superposition/separation of 2 × 10^10 tubulins is required, involving microtubules in roughly tens of thousands of neurons (Hameroff and Penrose, 1996a).
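As a rough illustration of how these numbers fit together, the back-of-envelope sketch below (my own, not the paper's calculation) applies E = ħ/t to the figures quoted above (t = 25 ms, ~2 × 10^10 superpositioned tubulins) to infer the total threshold self-energy and the average contribution per tubulin; the even split across tubulins is an assumption made only for illustration.

```python
# Back-of-envelope sketch (illustration only) of the Penrose OR threshold E = hbar / t,
# using the figures quoted in the text: t = 25 ms and ~2e10 superpositioned tubulins.
hbar = 1.0545718e-34          # J*s, reduced Planck constant

def or_time(E_total):
    """Time t at which a superposition with gravitational self-energy E_total reaches OR."""
    return hbar / E_total

t = 0.025                     # s, one 40 Hz gamma period (from the text)
E_total = hbar / t            # total self-energy needed to reach threshold at 25 ms
n_tubulins = 2e10             # superpositioned tubulins quoted in the text
E_per_tubulin = E_total / n_tubulins   # assumes an even split, for illustration only

print(f"E_total       ~ {E_total:.2e} J")        # ~4.2e-33 J
print(f"E per tubulin ~ {E_per_tubulin:.2e} J")  # ~2.1e-43 J
print(f"t recovered   ~ {or_time(E_total) * 1e3:.0f} ms")  # 25 ms, as expected
```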
Particular states are chosen in OR due to (1) algorithmic quantum computing by the Schrödinger equation evolving toward E = ħ/t, and (2) influence in the OR process at the moment of E = ħ/t. According to Penrose, this influence, unlike randomness associated with measurement and decoherence, reflects “non-computable values” intrinsic to spacetime geometry. Thus conscious choices in OR (and Orch OR) are neither random nor algorithmically deterministic.
Quantum state reductions are essential to quantum computing which involves superposition of information states, e.g., both 1 and 0 (quantum bits, or “qubits”). Superpositioned qubits entangle and compute (by the Schrödinger equation) until reduction/collapse of each qubit to classical values (“bits”) occurs as the solution. In technological quantum computers, reduction occurs by measurement/observation, introducing a component of randomness. Superposition, entanglement and reduction are also essential to quantum cryptography and quantum teleportation technologies (Bennett and Wiesner, 1992; Bouwmeester et al., 1997; Macikic et al., 2002). Entanglement implies non-locality, e.g., that complementary quantum particles (electrons in coupled spin-up and spin-down pairs) remain somehow connected when spatially (or temporally) separated, each pair member reacting instantaneously to perturbation of its separated partner. Einstein initially objected to entanglement, as it would appear to require signaling faster than light, and thus violate special relativity. He famously termed it “spooky action at a distance,” and described a thought experiment (“Einstein, Podolsky, and Rosen (EPR)”; Einstein et al., 1935) in which each member of an entangled pair of superpositioned electrons (“EPR pairs”) would be sent in different directions, each remaining in superposition and entangled. When one electron was measured at its destination and, say, spin-up was observed, its entangled twin miles away would, according to the prediction, correspondingly reduce instantaneously to spin-down when measured. The issue was unresolved at the time of Einstein's death, but since the early 1980s (Aspect et al., 1982; Tittel et al., 1998) this type of experiment has been repeatedly confirmed through wires, fiber optic cables and via microwave beams through atmosphere. Strange as it seems, EPR entanglement is a fundamental feature of quantum mechanics and reality. How can it be explained?
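To make the EPR anti-correlation described above concrete, here is a minimal numpy sketch (my own illustration, not from the paper and making no quantum-biology claim): it prepares a two-qubit spin-singlet state and samples joint measurements, showing that whenever one member yields “up” the other yields “down.” Variable names are illustrative only.

```python
# Minimal sketch of EPR anti-correlation in a spin-singlet pair.
import numpy as np

rng = np.random.default_rng(0)

# Singlet state |psi> = (|01> - |10>)/sqrt(2), basis order |00>, |01>, |10>, |11>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def measure_pair(state):
    """Projective measurement of both qubits along the same (z) axis."""
    probs = state**2                      # Born rule (amplitudes are real here)
    outcome = rng.choice(4, p=probs)      # sample a joint outcome
    a, b = divmod(outcome, 2)             # results for qubit A and qubit B (0=up, 1=down)
    return a, b

results = [measure_pair(psi) for _ in range(10)]
# Every trial gives opposite results, e.g. (0, 1) or (1, 0), regardless of how far
# apart the two measurements are performed.
print(results)
```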
Penrose (1989; 2004, cf. Bennett and Wiesner, 1992) suggested quantum entanglements are not mediated in a normal causal way, that non-local entanglement (quantum information, or “quanglement,” as Penrose terms it) should be thought of as able to propagate in either direction in time (into the past or into the future). Along similar lines, Aharonov and Vaidman (1990) also proposed that quantum state reductions send quantum information both forward and backward in what we perceive as time, “temporal non-locality.” However it is generally agreed that quantum information going backward in time cannot, by itself, communicate or signal ordinary classical information; it is “acausal.” This restriction is related to elimination of possible causal paradox (e.g., signaling backward in time to kill one's ancestor, paradoxically preventing one's birth). Indeed quantum information going forward in time is also considered acausal, unable to signal classical information either. In quantum cryptography and teleportation, acausal quantum information can only influence or correlate with classical information, but nonetheless greatly enhance capabilities of causal, classical processes.
Penrose suggested acausal backward time effects used in conjunction with classical channels could influence classical results in a way unattainable by classical, future-directed means alone, and that temporal non-locality and acausal backward time effects were essential features of entanglement. He suggested that in EPR (Figure 11), quantum information/quanglement from the measurement/state reduction moves backward in (what we “naively” perceive as classical) time to the unified pair, then to the complementary twin, influencing and correlating its state when measured. Can quantum backward referral happen in the brain?
Figure 11. Backward time in EPR entanglement. The Einstein-Podolsky-Rosen (EPR) experiment verified by Aspect et al. (1982); Tittel et al. (1998), and many others. On the left is an isolated, entangled pair of superpositioned complementary quantum particles, e.g., two electrons in spin up and spin down states. The pair is separated and sent to two different, spatially-separated locations/measuring devices. The single electron at the top (in superposition of both spin up and spin down states) is measured, and reduces to a single classical state (e.g., spin down). Instantaneously its spatially-separated twin reduces to the complementary state of spin up (or vice versa). The effect is instantaneous over significant distance, hence appears to be transmitted faster than the speed of light. According to Penrose (2004; cf. Bennett and Wiesner, 1992), measurement/reduction of the electron at the top sends quantum information backward in time to the origin of the unified entanglement, then onward to the twin electron. No other reasonable explanation has been put forth.
Orchestrated Objective Reduction (Orch OR)
Penrose put forth OR as a mechanism for consciousness in physical science (the first, and still only specific proposal). For neurobiological implementation of OR, the Penrose–Hameroff model of “Orch OR” proposed quantum computations terminated by OR in microtubules within brain neurons, “orchestrated” by synaptic inputs, memory and other factors, hence “Orch OR” (Penrose and Hameroff, 1995, 2011; Hameroff and Penrose, 1996a,b; Hameroff, 1998, 2007). Starting with classical microtubule automata (e.g., Rasmussen et al., 1990) in which tubulins in microtubule lattices convey interactive bit states, e.g., of 1 or 0, and are thus capable of classical information processing (Figure 5B), Orch OR also proposed that quantum superpositioned tubulin bits, or “qubits,” e.g., of both 1 AND 0 compute via entanglement with tubulins in the same neuron, and also those in neighboring and distant neurons via gap junctions (Figure 12). The quantum computations evolve by the Schrödinger equation in entangled microtubules in dendrites and cell bodies during integration phases of gap junction-connected integrate-and-fire neurons. Entangled superpositions contribute to increasing gravitational self-energy E. When threshold is met by E = ħ/t, a conscious moment occurs as entangled tubulin qubits simultaneously undergo OR to classical tubulin states which then proceed to trigger (or not trigger) axonal firings, and adjust synapses. Microtubule quantum computations can thus be the “x-factor” in integration regulating axonal firing threshold. Compatible with known neurophysiology, Orch OR can account for conscious causal control of behavior.
Figure 12. Three toy neurons in an input/integration layer. Adjacent dendrites are connected by gap junction electrical synapses in “dendritic web,” showing internal cytoskeletal microtubules connected by microtubule-associated proteins. Insert: communication/correlation between microtubules through gap junctions by electromagnetic or quantum entanglement, enabling collective integration among gap junction-connected, synchronized neurons and glia.
Entangled superpositions leading to OR and moments of consciousness by E = ħ/t are seen as sequential, only one “consciousness” occurring in the brain at any one time (except perhaps for “split-brain” patients, or those with other cognitive disorders). Superpositions outside the largest, most rapidly evolving gap junction-connected web may decohere randomly, or continue and participate in a subsequent moment of consciousness. The results of each Orch OR conscious moment set initial conditions for the next.
By E = ħ/t, superposition of about 2 × 10^10 tubulins would reach threshold at t = 25 ms, as in 40 Hz gamma synchrony, 40 conscious moments/s. Depending on the percentage of tubulins involved per neuron, this would entail thousands to hundreds of thousands of gap junction-connected neurons per conscious moment at 40 Hz as the NCC (Figure 12). With specific neuronal distributions and brain regions defined by gap junction openings and closings, synchronized “dendritic webs” as the NCC can move and redistribute moment to moment. Within the NCC, consciousness by E = ħ/t may occur on a spectrum of frequencies, at different fractal-like scales of brain activity (He et al., 2010), with deeper order, finer scale entangled processes in microtubules correlating with high frequency, high intensity experience, and larger proportions of brain involvement.
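The scaling from total superpositioned tubulins to neuron counts can be sketched as follows (a hedged, illustrative calculation of my own, not the paper's): the 2 × 10^10 figure comes from the text, while the assumed ~10^9 tubulins per neuron is only an order-of-magnitude placeholder.

```python
# Hedged sketch: how many gap junction-connected neurons would be needed per conscious
# moment, as a function of the fraction of each neuron's tubulins that participate.
TOTAL_SUPERPOSED = 2e10       # superpositioned tubulins per 25 ms conscious moment (from the text)
TUBULINS_PER_NEURON = 1e9     # assumed tubulins per neuron, order of magnitude only

for fraction in (1e-2, 1e-3, 1e-4):
    involved_per_neuron = fraction * TUBULINS_PER_NEURON
    neurons_needed = TOTAL_SUPERPOSED / involved_per_neuron
    print(f"{fraction:.2%} of tubulins per neuron -> ~{neurons_needed:,.0f} neurons")
# 1% .. 0.01% involvement spans ~2,000 to ~200,000 neurons, i.e., the
# "thousands to hundreds of thousands" range quoted in the text.
```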
Proteins can act as quantum levers, able to amplify quantum effects into particular classical states (Conrad, 1994). Orch OR suggests that tubulin states and superpositions are initiated by electron cloud dipoles (van der Waals London forces) in clusters of aromatic resonance rings (e.g., in amino acids tryptophan, phenylalanine, tyrosine, Figures 13A–C). London force dipoles are inherently quantum mechanical, tending to superposition. They also mediate effects of general anesthetic gases which act in aromatic clusters (“hydrophobic pockets”) in neuronal proteins including tubulin to selectively erase consciousness (Hameroff, 2006). This suggests a deeper order, finer scale component of the NCC.
Figure 13. (A) A microtubule, a cylindrical lattice of peanut-shaped tubulin proteins, with molecular model of enlarged single tubulin with C-termini tails (Craddock et al., 2012c). (B) Tubulin dimer, lower C terminus tail visible. Interior blowup shows aromatic rings clustered in a linear groove, and further blowup of ring structures. (C) Approximate locations of resonance rings suggesting trans-tubulin alignments (see Figure 14A).
Electron movements of one nanometer, e.g., in a London force dipole oscillation, displace atomic nuclei by one Fermi length, 10^−15 m, the diameter of a carbon atom nucleus (Sataric et al., 1998), and also the superposition separation distance required for gravitational self-energy E in Orch OR (Hameroff and Penrose, 1996a,b). Thus London forces can induce superposition of an entire protein/tubulin mass, albeit by an extremely tiny separation distance. Nonetheless the protein-level (rather than electron-only) superposition separation engenders significant gravitational self-energy E, and thus by E = ħ/t, usefully brief durations of time t for conscious moments and actions.
Orch OR has been criticized on the basis of decoherence in the “warm, wet and noisy” brain, preventing superposition long enough to reach threshold (Tegmark, 2000; cf. Hagan et al., 2001). But subsequently, photosynthetic proteins were shown to routinely use electron superposition in transferring chemical energy (Engel et al., 2007). Further research has demonstrated warm quantum effects in bird brain navigation (Gauger et al., 2011), ion channels (Bernroider and Roy, 2005), sense of smell (Turin, 1996), DNA (Rieper et al., 2011), protein folding (Luo and Lu, 2011), and biological water (Reiter et al., 2011). Microtubules (Sahu et al., 2012) appear to have kilohertz and megahertz resonances related to enhanced (possibly quantum) conductance through spiral pathways.
Conductance pathways through aromatic ring arrays in each tubulin, aligned with neighboring tubulin arrays following spiral geometry in microtubule lattices, allow helical macroscopic “quantum highways” through microtubules (Figure 14A), suitable for topological quantum computing (Kitaev, 1997; Hameroff et al., 2002; Penrose and Hameroff, 2011). With particular spiral pathways as topological qubits (“braids”) rather than individual tubulins, overall microtubule information capacity is reduced, each topological bit/qubit pathway requiring many tubulins (Figure 14B, Bottom). But topological qubits are robust, resist decoherence, and reduce to classical helical pathways (or combinations) which can, with each conscious moment, regulate synapses and trigger axonal firings.
Figure 14. (A) Alignment of aromatic ring structures in tubulins and through microtubule lattice suggests different helical pathways, possible macroscopic “quantum highways” e.g., following the Fibonacci sequence in the A lattice. (B) Top: superpositioned tubulins (gray) increase through first three steps (neuronal integration) until threshold is met by E = ħ/t, resulting in Orch OR, a conscious moment, and selection of classical tubulin states which may trigger axonal firing. (B) Bottom: same as (A), but with topological qubits, i.e., different helical pathways represent information. One particular pathway is selected in the Orch OR conscious moment.
In Figure 15, two Orch OR conscious moments underlie gamma synchrony electrophysiology in an integrate-and-fire neuron. Quantum superposition E evolves during integration, increasing with time until threshold is met at E = ħ/t (t = 25 ms), at which instant an Orch OR conscious moment occurs (intensity proportional to E), and classical states of tubulin are selected which can trigger (or not trigger) axonal firings which control actions and behavior (as well as regulate synaptic strength and record memory).
Figure 15. Two Orch OR events (solid lines) underlie integrate-and-fire electrophysiology (dotted lines) in neurons. Orch OR and conscious moments occur here at t = 25 ms (gamma synchrony), with E then equivalent to superposition of approximately 2 × 10^10 tubulins. Each Orch OR moment occurs with conscious experience, and selects tubulin states which can then trigger axonal firings. Each Orch OR event can also send quantum information backward in perceived time.
Each Orch OR quantum state reduction also causes temporal non-locality, sending quantum information/quanglement (with gravitational self-energy E) backward in what we perceive as classical time, integrating with forward-going E to help reach E = ħ/t, perhaps earlier than would otherwise occur (Figure 2B). As described previously, Orch OR temporal non-locality and backward time referral of quantum information can provide real-time conscious causal control of voluntary actions (Figure 6; cf. Wolf, 1998; Sarfatti, 2011).
Do backward time effects risk causal paradox? In classical physics, the cause of an effect must precede it. But backward-going quanglement is acausal, only able to influence or correlate with information in a classical channel, e.g., as occurs in quantum entanglement, cryptography and teleportation. And according to some quantum interpretations, backward time effects can't violate causality if they only alter past events whose subsequent effects had not been consciously observed (“If a tree falls ….”). In the experimental studies cited here (Libet, pre-sentiment/Bem, delayed choice) backward referral itself is non-conscious (though Libet refers to it as “subjective experience”) until reduction occurs in the present. There is no causal paradox.
If conscious experience is indeed rooted in Orch OR, with OR relating the classical to the quantum world, then temporal non-locality and referral of acausal quantum information backward in time is to be expected (Penrose and Hameroff, 2011). Temporal non-locality and backward time referral can rescue causal agency and conscious free will.
Conclusion: How Quantum Brain Biology can Rescue Conscious Free Will
Problems regarding conscious “free will” include: (1) the need for a neurobiological mechanism to account for consciousness and causal agency, (2) conscious perceptions apparently occurring too late for real-time conscious responses, and (3) determinism. Penrose–Hameroff “Orch OR” is a theory in which moments of conscious choice and experience are identified with quantum state reductions in microtubules inside neurons. Orch OR can help resolve the three problematic issues in the following ways.
A Mechanism for Consciousness and Causal Agency
Orch OR is based on sequences of quantum computations in microtubules during integration phases in dendrites and cell bodies of integrate-and-fire brain neurons linked by gap junctions. Each Orch OR quantum computation terminates in a moment of conscious experience, and selects a particular set of tubulin states which then trigger (or do not trigger) axonal firings, the latter exerting causal behavior. Orch OR can in principle account for conscious causal agency.
Does Consciousness Come Too Late?
Brain electrical activity appearing to correlate with conscious perception of a stimulus can occur after we respond to that stimulus, seemingly consciously. Accordingly, consciousness is deemed epiphenomenal and illusory (Dennett, 1991; Wegner, 2002). However, evidence for backward time effects in the brain (Libet et al., 1983; Bem, 2012; Ma et al., 2012), and in quantum physics (e.g., to explain entanglement, Penrose, 1989, 2004; Aharonov and Vaidman, 1990; Bennett and Wiesner, 1992), suggests that quantum state reductions in Orch OR can send quantum information backward in (what we perceive as) time, on the order of hundreds of milliseconds. This enables consciousness to regulate axonal firings and behavioral actions in real time, when conscious choice is felt to occur (and actually does occur), thus rescuing consciousness from necessarily being an epiphenomenal illusion.
Determinism

Is the universe unfolding (in which case free will is possible), or does it exist as a “block universe” with pre-determined world-lines, our actions pre-determined by algorithmic processes? In Orch OR, consciousness unfolds the universe. The selection of states, according to Penrose, is influenced by a non-computable factor, a bias due to fine scale structure of spacetime geometry. According to Orch OR, conscious choices are not entirely algorithmic.
Orch OR is a testable quantum brain biological theory compatible with known neuroscience and physics, and able to account for conscious free will.
Acknowledgments
Thanks to Sir Roger Penrose for collaboration and ideas, to Dave Cantrell for artwork, and to Marjan Macphee, Abi Behar-Montefiore and Chris Duffield for manuscript support.
References

Adamatzky, A. (2012). Slime mould computes planar shapes. Int. J. Bio-Inspired Comput. 4, 149–154.
Aharonov, Y., and Vaidman, L. (1990). Properties of a quantum system during the time interval between two measurements. Phys. Rev. A 41, 11.
Amassian, V. E., Somasunderinn, M., Rothswell, J. C., Crocco, J. B., Macabee, P. J., and Day, B. L. (1991). Paresthesias are elicited by single pulse magnetic coil stimulation of motor cortex in susceptible humans. Brain 114, 2505–2520.
Aspect, A., Grangier, P., and Roger, G. (1982). Experimental realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: a new violation of Bell's inequalities. Phys. Rev. Lett. 48, 91–94.
Atema, J. (1973). Microtubule theory of sensory transduction. J. Theor. Biol. 38, 181–190.
Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge, MA: Cambridge University.
Barbour, J. (1999). The End of Time: the Next Revolution in our Understanding of the Universe. New York, NY: Oxford University Press.
Bem, D. J. (2012). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. J. Pers. Soc. Psychol. 100, 407–425.
Bennett, C. H., and Wiesner, S. J. (1992). Communication via 1- and 2-particle operators on Einstein-Podolsky-Rosen states. Phys. Rev. Lett. 69, 2881–2884.
Bennett, M. V., and Zukin, R. S. (2004). Electrical coupling and neuronal synchronization in the mammalian brain. Neuron 41, 495–511.
Bernroider, G., and Roy, S. (2005). Quantum entanglement of K ions, multiple channel states and the role of noise in the brain. Proc. SPIE 5841, 205–214.
Bierman, D. J., and Radin, D. I. (1997). Anomalous anticipatory response on randomized future conditions. Percept. Mot. Skills 84, 689–690.
Bierman, D. J., and Scholte, H. S. (2002). A fMRI brain imaging study of presentiment. BMC Neurosci. 5, 42.
Bohm, D., and Hiley, B. J. (1993). The Undivided Universe. New York, NY: Routledge.
Bohr, N. (1934/1987). Atomic Theory and the Description of Nature, Reprinted as The Philosophical Writings of Niels Bohr, Vol. I. Woodbridge, NJ: Ox Bow Press.
Chalmers, D. J. (1996). The Conscious Mind - in Search of a Fundamental Theory. New York, NY: Oxford University Press.
Churchland, P. S. (1981). On the alleged backwards referral of experiences and its relevance to the mind-body problem. Philos. Sci. 48, 165–181.
Conrad, M. (1994). Amplification of superpositional effects through electronic conformational interactions. Chaos Solitons Fractals 4, 423–438.
Craddock, T. J. A., Tuszynski, J. A., and Hameroff, S. (2012a). Cytoskeletal signaling: is memory encoded in microtubule lattices by CaMKII phosphorylation? PLoS Comput. Biol. 8:e1002421. doi: 10.1371/journal.pcbi.1002421
Craddock, T. J. A., Tuszynski, J. A., Chopra, D., Casey, N., Goldstein, L. E., and Hameroff, S. R. et al. (2012b). The zinc dyshomeostasis hypothesis of Alzheimer's disease. PLoS ONE 7:e33552. doi: 10.1371/journal.pone.0033552
Craddock, T. J. A., St. George, M., Freedman, H., Barakat, K. H., Damaraju, S., and Hameroff, S. et al. (2012c). Computational predictions of volatile anesthetic interactions with the microtubule cytoskeleton: implications for side effects of general anesthesia. PLoS ONE 7:e37251. doi: 10.1371/journal.pone.0037251
Crick, F., and Koch, C. (1990). Towards a neurobiological theory of consciousness. Semin. Neurosci. 2, 263–275.
Dehaene, S., and Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition 79, 1–37.
Dennett, D. C. (1991). Consciousness Explained. Boston, MA: Little Brown.
Dennett, D. C., and Kinsbourne, M. (1992). Time and the observer: the where and when of consciousness. Behav. Brain Sci. 15, 183–247.
Dermietzel, R. (1998). Gap junction wiring: a ‘new’ principle in cell-to-cell communication in the nervous system? Brain Res. Rev. 26, 176–183.
Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. R. Soc. Lond. A 400, 97–117.
Diósi, L. (1989). Models for universal reduction of macroscopic quantum fluctuations. Phys. Rev. A 40, 1165–1174.
Draguhn, A., Traub, R. D., Schmitz, D., and Jeffreys, J. G. (1998). Electrical coupling underlies high-frequency oscillations in the hippocampus in vitro. Nature 394, 189–192.
Ebner, M., and Hameroff, S. (2011). Lateral information processing by spiking neurons: a theoretical model of the neural correlate of consciousness. Comput. Intell. Neurosci. 2011 247879.
Eccles, J. C. (1992). Evolution of consciousness. Proc. Natl. Acad. Sci. U.S.A. 89, 7320–7324.
Edelman, G. M., and Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. London: Allen Lane.
Einstein, A., Podolsky, B., and Rosen, N. (1935). Can quantum mechanical descriptions of physical reality be complete? Phys. Rev. 47, 777–780.
Engel, G. S., Calhoun, T. R., Read, E. L., Ahn, T.-K., Mancal, T., and Cheng, Y.-C. et al. (2007). Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature 446, 782–786.
Everett, H. (1957). Relative state formulation of quantum mechanics. Rev. Mod. Phys. 29, 454–462.
Fries, P., Schroder, J. H., Roelsfsema, P. R., Singer, W., and Engel, A. K. (2002). Oscillatory neuronal synchronization in primary visual cortex as a correlate of stimulus selection. J. Neurosci. 22, 3739–3754.
Fröhlich, H. (1968). Long-range coherence and energy storage in biological systems. Int. J. Quantum Chem. 2, 641–649.
Fröhlich, H. (1970). Long range coherence and the actions of enzymes. Nature 228, 1093.
Fröhlich, H. (1975). The extraordinary dielectric properties of biological materials and the action of enzymes. Proc. Natl. Acad. Sci. U.S.A. 72, 4211–4215.
Fukuda, T. (2007). Structural organization of the gap junction network in the cerebral cortex. Neuroscientist 13, 199–207.
Galarreta, M., and Hestrin, S. (1999). A network of fast-spiking cells in the neocortex connected by electrical synapses. Nature 402, 72–75.
Gauger, E., Rieper, E., Morton, J. J. L., Benjamin, S. C., and Vedral, V. (2011). Sustained quantum coherence and entanglement in the avian compass. Available online at:
Ghirardi, G. C., Rimini, A., and Weber, T. (1986). Unified dynamics for microscopic and macroscopic systems. Phys. Rev. D 34, 470.
Gray, C. M., and Singer, W. (1989). Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proc. Natl. Acad. Sci. U.S.A. 86, 1698–1702.
Gray, J. A. (2004). Consciousness: Creeping Up On The Hard Problem. Oxford: Oxford University Press.
Hagan, S., Hameroff, S., and Tuszynski, J. (2001). Quantum computation in brain microtubules? Decoherence and biological feasibility. Phys. Rev. E 65, 061901.
Hameroff, S. (1998). The Penrose-Hameroff “Orch OR” model of consciousness. Philos. Trans. R. Soc. A 356, 1869–1896.
Hameroff, S. (2006). The entwined mysteries of anesthesia and consciousness. Anesthesiology 105, 400–412.
Hameroff, S. R. (2007). The brain is both neurocomputer and quantum computer. Cogn. Sci. 31, 1035–1045.
Hameroff, S. (2010) The “conscious pilot”—dendritic synchrony moves through the brain to mediate consciousness. J. Biol. Phys. 36, 71–93.
Hameroff, S. R., and Watt, R. C. (1982). Information processing in microtubules. J. Theor. Biol. 98, 549–561.
Hameroff, S., Nip, A., Porter, M., and Tuszynski, J. (2002). Conduction pathways in microtubules, biological quantum computation and microtubules. Biosystems 64, 149–168.
Hameroff, S. R., and Penrose, R. (1996a). “Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness,” in Toward a Science of Consciousness, The First Tucson Discussions and Debates, eds S. R. Hameroff, A. W. Kaszniak, and A. C. Scott (Boston, MA: MIT Press), 507–540. Also published in Math. Comput. Simulat. (1996) 40, 453–480.
Hameroff, S. R., and Penrose, R. (1996b). Conscious events as orchestrated spacetime selections. J. Conscious. Stud. 3, 36–53.
He, B. J., Zemper, J. M., Snyder, A. Z., and Raichle, M. E. (2010). The temporal structures and functional significance of scale-free brain activity. Neuron 66, 353–369.
Huxley, T. H. (1893/1986). Method and Results: Essays. New York, NY: D. Appleton and Comp.
James, W. (1890). The Principles of Psychology. New York, NY: Henry Holt.
John, E. R., and Prichep, L. S. (2005). The anesthetic cascade: a theory of how anesthesia suppresses consciousness. Anesthesiology 102, 447–471.
Károlyházy, F., Frenkel, A., and Lukacs, B. (1986). “On the possible role of gravity on the reduction of the wave function,” in Quantum Concepts in Space and Time, eds R. Penrose and C. J. Isham (New York, NY: Oxford University Press), 109–128.
Kim, Y.-H., Yu, R., Kulik, S. P., Shih, Y. H., and Scully, M. (2000). A delayed choice quantum eraser. Phys. Rev. Lett. 84, 1–5.
Kitaev, A. Y. (1997). Fault-Tolerant Quantum Computation. Available online at: arXiv preprint quant-ph/9707021
Koch, C. (2004). The Quest for Consciousness: a Neurobiological Approach. Englewood, NJ: Roberts and Company.
Kokarovtseva, L., Jaciw-Zurakiwsky, T., Mendizabal Arbocco, R., Frantseva, M. V., and Perez Velazquez, J. L. (2009). Excitability and gap junction-mediated mechanisms in nucleus accumbens regulate self-stimulation reward in rats. Neuroscience 159, 1257–1263.
Kolers, P. A., and von Grunau, M. (1976). Shape and color in apparent motion. Vision Res. 16, 329–335.
Kornhuber, H. H., and Deecke, L. (1965). Hirnpotentialänderungen bei Willkürbewegungen und passiven Bewegungen des Menschen: Bereitschaftspotential und reafferente Potentiale. Pflug. Arch. 284, 1–17.
Libet, B. (2002). The timing of mental events: Libet's experimental findings and their implications. Conscious. Cogn. 11, 291–299.
Libet, B. (2003). Timing of conscious experience: reply to the 2002 commentaries on Libet's findings. Conscious. Cogn. 12, 321–331.
Libet, B. (2004). Mind Time: The Temporal Factor in Consciousness. Cambridge, MA: Harvard University Press.
Libet, B., Alberts, W. W., Wright, W., Delattre, L., Levin, G., and Feinstein, B. (1964). Production of threshold levels of conscious sensation by electrical stimulation of human somatosensory cortex. J. Neurophysiol. 27, 546–578.
Libet, B., Gleason, C. A., Wright, E. W., and Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness potential): the unconscious initiation of a freely voluntary act. Brain 106, 623–642.
Libet, B., Wright, E. W. Jr., Feinstein, B., and Pearl, D. K. (1979). Subjective referral of the timing for a conscious sensory experience. Brain 102, 193–224.
Luo, L., and Lu, J. (2011). Temperature Dependence of Protein Folding Deduced from Quantum Transition. Available online at:
Ma, X.-S., Zotter, S., Kofler, J., Ursin, R., Jennewein, T., and Brukner, C. et al. (2012). Experimental delayed-choice entanglement swapping. Nat. Phys. 8, 480–485.
Macikic, I., de Riedmatten, H., Tittel, W., Zbinden, H., and Gisin, N. (2002). Long-distance teleportation of qubits at telecommunication wavelengths. Nature 421, 509–513.
Malach, R. (2007). The measurement problem in human consciousness research. Behav. Brain Sci. 30, 481–499.
Matsuyama, S. S., and Jarvik, L. F. (1989). Hypothesis: microtubules, a key to Alzheimer disease. Proc. Natl. Acad. Sci. U.S.A. 86, 8152–8156.
McCrone, J. (1999). Going Inside: A Tour Round a Single Moment of Consciousness. London: Faber and Faber.
McCulloch, W., and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133.
Naundorf, B., Wolf, F., and Volgushev, M. (2006). Unique features of action potential initiation in cortical neurons. Nature 440, 1060–1063.
Penrose, R., and Hameroff, S. R. (1995). What gaps? Reply to Grush and Churchland. J. Conscious. Stud. 2, 98–112.
Penrose, R., and Hameroff, S. (2011). Consciousness in the universe: neuroscience, quantum space-time geometry and Orch OR theory. J. Cosmol. Available online at:
Penrose, R. (1989). The Emperor's New Mind. New York, NY: Oxford University Press.
Penrose, R. (1994). Shadows of the Mind: A Search for the Missing Science of Consciousness. New York, NY: Oxford University Press.
Penrose, R. (1996). On gravity's role in quantum state reduction. Gen. Rel. Grav. 28, 581–600.
Penrose, R. (2004). The Road to Reality: A Complete Guide to the Laws of the Universe. London: Jonathan Cape.
Peres, A. (2000). Delayed choice for entanglement swapping. J. Mod. Opt. 47, 139–143.
Pockett, S. (2002). On subjective back-referral and how long it takes to become conscious of a stimulus: a reinterpretation of Libet's data. Conscious. Cogn. 11, 144–161.
Pollen, D. A. (2004). Brain stimulation and conscious experience. Conscious. Cogn. 13, 626–645.
Pribram, K. H. (1991). Brain and Perception. Hillsdale, NJ: Lawrence Erlbaum.
Radin, D. I. (2004). Electrodermal presentiments of future emotions. J. Sci. Explor. 11, 163–180.
Rasmussen, S., Karampurwala, H., Vaidyanath, R., Jensen, K. S., and Hameroff, S. (1990). Computational connectionism within neurons: a model of cytoskeletal automata subserving neural networks. Physica D 42, 428–449.
Ray, P. G., Meador, K. J., Smith, J. R., Wheless, J. W., Sittenfeld, M., and Clifton, G. L. (1999). Physiology of perception: cortical stimulation and recording in humans. Neurology 52, 1044–1049.
Reiter, G. F., Kolesnikov, A. I., Paddison, S. J., Platzman, P. M., Moravsky, A. P., and Adams, M. A. et al. (2011). Evidence of a new quantum state of nano-confined water. Available online at:
Rescher, N. (1991). G. W. Leibniz's Monadology. Pittsburgh, PA: University of Pittsburgh Press.
Rieper, E., Anders, J., and Vedral, V. (2011). Quantum entanglement between the electron clouds of nucleic acids in DNA. Available online at:
Rosenblatt, F. (1962). Principles of Neurodynamics. New York, NY: Spartan Books.
Sahu, S., Hirata, K., Fujita, D., Ghosh, S., and Bandyopadhyay, A. (2012). Radio-frequency induced ultrafast assembly of microtubules and their length-independent electronic properties. Nat. Mater. (in press).
Sarfatti, J. (2011). Retrocausality and signal nonlocality in consciousness and cosmology. J. Cosmol. 14. (Online).
Sataric, M. V., Zekovic, S., Tuszynski, J. A., and Pokorny, J. (1998). The Mossbauer effect as a possible tool in detecting nonlinear excitations in microtubules. Phys. Rev. E 58, 6333–6339.
Schrödinger, E. (1935). Die gegenwärtige situation in der Quantenmechanik (The present situation in quantum mechanics). Naturwissenschaften 23, 807–812, 823–828, 844–849. (Translation by J. T. Trimmer (1980) in Proc. Amer. Phil. Soc. 124, 323–338).
Scott, A. C. (1995). Stairway to the Mind. New York, NY: Springer-Verlag.
Shepherd, G. M. (1996). The dendritic spine: a multifunctional integrative unit. J. Neurophysiol. 75, 2197–2210.
Sherrington, C. S. (1957). Man: On his Nature, 2nd Edn. Cambridge, MA: Cambridge University Press.
Shimony, A. (1993). Search for a Naturalistic World View - Volume, II, Natural Science and Metaphysics. New York, NY: Cambridge University Press.
Smith, S., Watt, R. C., and Hameroff, S. R. (1984). Cellular automata in cytoskeletal lattice proteins. Physica D 10, l68–l74.
Sourdet, V., and Debanne, D. (1999). The role of dendritic filtering in associative long-term synaptic plasticity. Learn. Mem. 6, 422–447.
Spinoza, B. (1677). “Ethica in Opera quotque reperta sunt,” 3rd Edn, eds J. van Vloten and J. P. N. Land (Netherlands: Den Haag).
Stapp, H. P. (1993). Mind, Matter and Quantum Mechanics. Berlin: Springer Verlag.
Steane, A. (1998). Introduction to quantum error correction. Philos. Trans. R. Soc. A 356, 1739–1758.
Stroud, J. M. (1956). “The fine structure of psychological time,” in Information Theory in Psychology, ed H. Quastler (Glencoe, IL: Free Press), 174–205.
Tegmark, M. (2000). The importance of quantum decoherence in brain processes. Phys. Rev. E 61, 4194–4206.
Pubmed Abstract | Pubmed Full Text | CrossRef Full Text
Tittel, W., Brendel, J., Gisin, B., Herzog, T., Zbinden, H., and Gisin, N. (1998). Experimental demonstration of quantum correlations over more than 10 km. Phys. Rev. A 57, 3229–3232.
Tononi, G. (2004). An information integration theory of consciousness. Trends Cogn. Sci. 5, 472–478.
Pubmed Abstract | Pubmed Full Text | CrossRef Full Text
Turin, L. (1996). A spectroscopic mechanism for primary olfactory reception. Chem. Senses 21, 773–791.
Pubmed Abstract | Pubmed Full Text | CrossRef Full Text
Tuszynski, J. A., Hameroff, S., Sataric, M. V., Trpisova, B., and Nip, M. L. A. (1995). Ferroelectric behavior in microtubule dipole lattices; implications for information processing, signaling and assembly/disassembly. J. Theor. Biol. 174, 371–380.
Van Petten, C., Coulson, S., Rubin, S., Plante, E., and Parks, M. (1999). Time course of word identification and semantic integration in spoken language. J. Exp. Psychol. Learn. Mem. Cogn. 25, 394–417.
Pubmed Abstract | Pubmed Full Text
VanRullen, R., and Koch, C. (2003). Is perception discrete or continuous? Trends Cogn. 7, 207–213.
Pubmed Abstract | Pubmed Full Text | CrossRef Full Text
Velmans, M. (1991). Is human information processing conscious? Behav. Brain Sci. 14, 651–669.
Velmans, M. (2000). Understanding Consciousness. London: Routledge.
von Neumann, J. (1932/1955). Mathematical Foundations of Quantum Mechanics. Princeton, NJ: Princeton University Press. Translated by Robert, T. Beyer.
von Rospatt, A. (1995). The Buddhist Doctrine of Momentariness: A Survey of the Origins and Early Phase of this Doctrine up to Vasubandhu. Stuttgart: Franz Steiner Verlag.
Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, MA: MIT Press.
Pubmed Abstract | Pubmed Full Text
Wheeler, J. A. (1978). Mathematical Foundations of Quantum Theory. New York, NY: Academic Press.
Whitehead, A. N. (1929). Process and Reality. New York, NY: Macmillan.
Whitehead, A. N. (1933). Adventure of Ideas. London: Macmillan.
Wigner, E. P. (1961). “Remarks on the mind-body question,” in The Scientist Speculates, ed I. J. Good (London: Heinemann), in Quantum Theory and Measurement, eds J. A. Wheeler and W. H. Zurek (Princeton, NJ: Princeton University Press). Reprinted in Wigner, E. (1967), Symmetries and Reflections, (Bloomington, IN: Indiana University Press).
Wolf, F. A. (1998). The timing of conscious experience: a causality-violating, two-valued, transactional interpretation of subjective antedating and spatial-temporal projection. J. Sci. Explor. 12, 511–542.
Woolf, N. J., and Hameroff, S. R. (2001). A quantum approach to visual consciousness. Trends Cogn. Sci. 5, 472–478.
Pubmed Abstract | Pubmed Full Text | CrossRef Full Text
Zurek, W. H. (2003). Decoherence, einselection, and the quantum origins of the classical. Rev. Mod. Phys. 75, 715–775.
Keywords: microtubules, free will, consciousness, Penrose-Hameroff Orch OR, volition, quantum computing, gap junctions, gamma synchrony
Citation: Hameroff S (2012) How quantum brain biology can rescue conscious free will. Front. Integr. Neurosci. 6:93. doi: 10.3389/fnint.2012.00093
Received: 14 July 2012; Accepted: 25 September 2012;
Published online: 12 October 2012.
Edited by:
Jose Luis Perez Velazquez, Hospital for Sick Children and University of Toronto, Canada
Reviewed by:
Jack Adam Tuszynski, University of Alberta, Canada
Andre Chevrier, University of Toronto, Canada
*Correspondence: Stuart Hameroff, Department of Anesthesiology, Center for Consciousness Studies, University of Arizona, PO Box 245114, Tucson, AZ 85724, USA. e-mail: |
4322a9d9549adb6a |
Why does Bohr's derivation work?
1. Dec 30, 2015 #1
Bohr assumed angular momentum was quantized as ##L=n\hbar##. But really it is quantized as ##L=\hbar \sqrt{l(l+1)}##.
What he does to derive, e.g., the Bohr radius is to consider that the total energy of an electron orbiting a proton is
## E=\frac{L^2}{2mr^2}-\frac{k e^2}{r} ##
and then he makes some clever substitutions. However, Bohr substituted the formula for ##L_z## (##n\hbar##), not the actual ##L## (which is ##\hbar\sqrt{l(l+1)}##). So why then does his procedure work?
Up until now I have considered this a mere accident, but I've heard about people considering the so-called "Gravitational Bohr Radius", which is derived using the same procedure.
I don't understand why we assume its validity for the simple system of one particle orbiting another if we've got the wrong formula for angular momentum.
So then:
Why does his procedure work?
Why do we take on the similar (and wrong) derivation to the gravitational case?
3. Dec 30, 2015 #2
The hydrogen energy levels are degenerate with respect to angular momentum and only depend on principal quantum number, n.
It was a happy accident.
4. Dec 31, 2015 #3
Hmm. I don't see how this explains why substituting the wrong formula for ##L## will give the correct energy levels. Maybe as you say it was a happy accident (which baffles me since this was so important for our history)...
But anyways, why do people like to talk about a Gravitational Bohr Radius? Are they bringing Bohr's derivation into the gravitational case or are they actually solving the Schrödinger eqn for the gravitational case and calling the smallest radius "the Bohr Radius"?
5. Dec 31, 2015 #4
In case I'm not being clear this is the derivation I'm thinking about (for the G. Bohr Radius):
Suppose ##L=n\hbar=rp##
Since ##\frac{mv^2}{r}=\frac{GMm}{r^2}##
then ##\frac{p^2}{m}=\frac{GMm}{r}##
Since ##L=n\hbar## then ##p=\frac{n\hbar}{r}##, so ##\frac{n^2\hbar^2}{mr^2}=\frac{GMm}{r}##
and therefore ##r=\frac{n^2\hbar^2}{GMm^2}##
for n=1 this gives ##r_1=\frac{\hbar^2}{GMm^2}##
and this is what I've seen presented for "G Bohr Radius". My question is, did they solve the Schrödinger equation for the G. Potential case? Or did they derive this using a "Bohr Picture"?
It seems to me there has to be some deep reason why this works for the smallest radius, other than it being a "brilliant blunder" by Bohr.
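As a quick numerical check of the formula above, here is a minimal Python sketch (not from the thread) that evaluates ##r_n = n^2\hbar^2/(GMm^2)##; the electron and proton masses used are purely illustrative placeholders, in the spirit of the hypothetical "gravitational atom" mentioned later in the thread.

```python
# Minimal sketch (illustrative only): evaluate the "gravitational Bohr radius"
# r_n = n^2 * hbar^2 / (G * M * m^2), as follows from L = n*hbar together with
# the circular-orbit condition m*v^2/r = G*M*m/r^2.
hbar = 1.054571817e-34   # J*s
G = 6.67430e-11          # m^3 kg^-1 s^-2

def grav_bohr_radius(M, m, n=1):
    """Radius of the n-th 'Bohr orbit' of a mass m gravitationally bound to a mass M."""
    return n**2 * hbar**2 / (G * M * m**2)

# Placeholder masses: an electron bound gravitationally to a proton.
m_e, m_p = 9.109e-31, 1.673e-27   # kg
print(grav_bohr_radius(M=m_p, m=m_e))   # ~1.2e29 m -- absurdly large, hence only a thought experiment
```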
6. Dec 31, 2015 #5
Which people are these? I don't remember reading about a "gravitational Bohr radius".
7. Dec 31, 2015 #6
Who are "people"? A quick Google search mostly turned up references to a particular R. Oldershaw. Crackpot if you ask me, and a subject not suited for this forum.
8. Dec 31, 2015 #7
I'm taking a course on Coursera by Hitoshi Murayama where they use this formula and call it G. Bohr Radius. They use this formula to give a lower limit on the mass of dark matter particles (WIMPs) taking into consideration they have to be contained within a sphere whose radius is approximately the galactic radius.
Also (by a quick look on google) you can find articles like:
Also, I remember Griffiths' QM considers a hypothetical "gravitational atom", but they only use the energy levels there, not the radius.
Edit: I can't find exactly where Murayama refers to this formula by this name, but he does use it. (I had to get this name from somewhere and I'm pretty sure he used this term)
9. Dec 31, 2015 #8
Perhaps this is crackpottery and I didn't know this :/ . Although Murayama is a respected guy who used this formula in his lectures.
10. Dec 31, 2015 #9
Vanadium 50
Really? There are no other happy accidents in history?
The Bohr model was introduced in 1913, known to be wrong in 1913, and completely superseded in 1926.
The "happy accident" comes about, as Dr. Courtney says, from the k-l degeneracy in the hydrogen atom. That comes about because the Schroedinger Equation for a 1/r potential is a differential equation that can be solved by separation of variables two different ways. To me, that looks accidental. And happy.
11. Jan 1, 2016 #10
It's not clear to me why the Bohr-Sommerfeld model of the atom works (it was Sommerfeld who fully understood the mathematics of the Bohr model!). Perhaps there's an explanation from deriving it as an approximation from quantum theory, which should be somehow related to the WKB method at low orders. It's already amazing that you get the correct energy levels for the non-relativistic problem. What's even more amazing to me is that Sommerfeld got the correct fine structure. The modern way to understand it is to use QED, which boils down (in Coulomb gauge) to solving the time-independent Dirac equation with a Coulomb potential. In the Bohr-Sommerfeld model there's nothing concerning spin 1/2, and naively I'd expect to get some approximation of the energy levels of a "spinless electron"; but that's given by the analogous calculation in scalar QED, and the corresponding "hydrogen-like" energy levels for a boson indeed give a different fine structure for the hydrogen atom.
Another interesting detail is that Schrödinger started his investigation of the hydrogen atom using the relativistic dispersion relation, which led him, using the de Broglie-Einstein rules ##\omega \rightarrow E/\hbar## and ##\vec{k} \rightarrow \vec{p}/\hbar## to get the "wave equation" for "matter waves", to the Klein-Gordon equation; and of course he got the right spectrum for this problem, but the wrong fine structure. So he gave up the relativistic case for the time being and used the non-relativistic approximation, leading to the Schrödinger equation.
So it is still puzzling why Sommerfeld got the correct fine structure using Bohr-Sommerfeld quantization. It's really an astonishing accident that the errors of a completely wrong model conspire in a way to give the correct hydrogen spectrum, including the fine structure! |
40f5923217ed2734 | Max Tegmark on the mathematical universe
Known as “Mad Max” for his unorthodox ideas and passion for adventure, Max Tegmark has scientific interests ranging from precision cosmology to the ultimate nature of reality, all explored in his new popular book “Our Mathematical Universe“. He is an MIT physics professor with more than two hundred technical papers, 12 cited over 500 times, and has featured in dozens of science documentaries. His work with the SDSS collaboration on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”
Luke Muehlhauser: Your book opens with a concise argument against the absurdity heuristic — the rule of thumb which says “If a theory sounds absurd to my human psychology, it’s probably false.” You write:
Evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the parabolic orbits of flying rocks (explaining our penchant for baseball). A cavewoman thinking too hard about what matter is ultimately made of might fail to notice the tiger sneaking up behind and get cleaned right out of the gene pool. Darwin’s theory thus makes the testable prediction that whenever we use technology to glimpse reality beyond the human scale, our evolved intuition should break down. We’ve repeatedly tested this prediction, and the results overwhelmingly support Darwin. At high speeds, Einstein realized that time slows down, and curmudgeons on the Swedish Nobel committee found this so weird that they refused to give him the Nobel Prize for his relativity theory. At low temperatures, liquid helium can flow upward. At high temperatures, colliding particles change identity; to me, an electron colliding with a positron and turning into a Z-boson feels about as intuitive as two colliding cars turning into a cruise ship. On microscopic scales, particles schizophrenically appear in two places at once, leading to the quantum conundrums mentioned above. On astronomically large scales… weirdness strikes again: if you intuitively understand all aspects of black holes [then you] should immediately put down this book and publish your findings before someone scoops you on the Nobel Prize for quantum gravity… [also,] the leading theory for what happened [in the early universe] suggests that space isn’t merely really really big, but actually infinite, containing infinitely many exact copies of you, and even more near-copies living out every possible variant of your life in two different types of parallel universes.
Like much of modern physics, the hypotheses motivating MIRI’s work can easily run afoul of a reader’s own absurdity heuristic. What are your best tips for getting someone to give up the absurdity heuristic, and try to judge hypotheses via argument and evidence instead?
Max Tegmark: That’s a very important question: I think of the absurdity heuristic as a cognitive bias that’s not only devastating for any scientist hoping to make fundamental discoveries, but also dangerous for any sentient species hoping to avoid extinction. Although it appears daunting to get most people to drop this bias altogether, I think it’s easier if we focus on a specific example. For instance, whereas our instinctive fear of snakes is innate and evolved, our instinctive fear of guns (which the Incas lacked) is learned. Just as people learned to fear nuclear weapons through blockbuster horror movies such as “The Day After”, rational fear of unfriendly AI could undoubtedly be learned through a future horror movie that’s less unrealistic than Terminator III, backed up by a steady barrage of rational arguments from organizations such as MIRI.
In the mean time, I think a good strategy is to confront people with some incontrovertible fact that violates their absurdity heuristic and the whole notion that we’re devoting adequate resources and attention to existential risks. For example, I like to ask why more people have heard of Justin Bieber than of Vasili Arkhipov, even though it wasn’t Justin who singlehandedly prevented a Soviet nuclear attack during the Cuban Missile Crisis.
Luke: After reviewing mainstream contemporary physics, you begin to explain the “multiverse hierarchy” in chapter 6, which refers to four “levels” of multiverse we might inhabit. The “Level I multiverse,” as you call it, is an inescapable prediction of what is currently the most widely accepted theory of the early universe (eternal inflation): the universe is infinite in all directions, implying that there is an identical copy of me, who is also asking Max Tegmark questions via email, roughly 10^(10^29) meters from where I sit now.
To many readers that will sound absurd. But you write that it is “a prediction of eternal inflation which, as we’ve seen above, agrees with all current observational evidence and is implicitly used as the basis for most calculations and simulations presented at cosmology conferences.” Moreover, the Level I multiverse could still exist even if eternal inflation turns out to be false. All we need for the Level I multiverse is, you write:
• Infinite space and matter: Early on, there was an infinite space filled with hot expanding plasma.
• Random seeds: Early on, a mechanism operated such that any region could receive any possible seed fluctuations, seemingly at random.
And as you explain, we have pretty decent evidence for these two claims, independent of whether eternal inflation in particular happens to be correct.
Still, I’m curious what your colleagues in physics departments around the world think of this. If you had to guess, what proportion of them accept that the Level I multiverse is a straightforward prediction of eternal inflation? And roughly what proportion of them think eternal inflation, or some theory that assumes both “infinite space and matter” and also “random seeds”, will turn out to be correct?
Max: There’s definitely been an increased acceptance of these ideas, with the most vocal critics shifting from saying “this makes no sense and I hate it!” to “I hate it!”, tacitly acknowledging that it’s actually a scientifically legitimate possibility.
I haven’t seen any relevant poll, but my sense is that the proportion of physicists who think a Level I multiverse is likely depends strongly on their subfield of physics, with the proportion being highest among theoretical cosmologists and high-energy theorists, and lowest in very different areas where people don’t normally think much about these ideas and often feel that they sound too weird to be true.
You and your MIRI colleagues work very hard to be rational, so if you’re convinced that A implies B and A is true, then you’ll update your Bayesian prior to be convinced that B is also true. I suspect that many physicists are less rational than you: my guess is that many who are sympathetic toward inflation and learn that it generically implies eternal inflation and Level I don’t actually update their prior about Level I, but instead tell themselves that a multiverse feels unscientific, and therefore lose interest in spending more time thinking about consequences of inflation.
Luke: You go on to explain four levels of multiverse in total:
• Level I multiverse: “Distant regions of space that are currently but not forever unobservable; they have the same effective laws of physics but may have different histories.” A straightforward prediction of eternal inflation, and many other possible theories of cosmological evolution.
• Level II multiverse: “Distant regions of space that are forever unobservable because space between here and there keeps on inflating; they obey the same fundamental laws of physics, but their effective laws of physics may differ.” Also suggested by eternal inflation.
• Level III multiverse: “Different parts of quantum Hilbert space.” If the wavefunction never collapses but is instead always governed by the Schrödinger equation, this implies the universe is constantly “splitting” into parallel universes.
• Level IV multiverse: “All mathematical structures, corresponding to different fundamental laws of physics.”
Before we get to Level IV, I want to ask about a claim you make about Level III.
You write that “fledgling technologies such as quantum cryptography and quantum computing explicitly exploit the Level III multiverse and work only if the wavefunction doesn’t collapse.”
But I assume that collapse theorists don’t think their view will be falsified as soon as the first “true” quantum computer is (uncontroversially) built. What do they argue in response? Why do you think quantum computing can only work if the wavefunction doesn’t collapse?
Max: That’s a good question, because there’s lots of confusion in this area. What’s uncontroversial is that quantum computers will work if there’s no collapse, i.e., if the Schrödinger equation works with no exceptions (as long as engineering obstacles such as decoherence mitigation can be overcome). What’s controversial is what happens otherwise. There’s such a large zoo of non-Everett interpretations (13 by now) that you’ll have to ask their adherents what exactly they predict for quantum computers, for quantum systems containing humans, etc. – in my experience, different people claiming to subscribe to the same interpretation sometimes nonetheless disagree on specific predictions.
The quantum litmus test is to ask them the following question:
Alice is in an isolated lab in a spaceship and measures the spin of an electron that was in a superposition of “up” and “down”. According to Bob, who hasn’t yet observed the spaceship, is Alice’s brain in a superposition of perceiving that she’s measured “up” and that she’s measured “down”?
The Many-Worlds Interpretation says unambiguously “yes”, because that’s what the Schrödinger equation predicts. Some Copenhagen supporters would say “no”, on the grounds that Alice collapsed the wavefunction when she observed the electron: something truly random happened at that instant, and it’s now really just up or really just down. Others say “yes” and still others will give you less clear-cut answers. My point is that a theory refusing to make a clear prediction for whether the Schrödinger equation holds for arbitrarily large and complicated systems is automatically refusing to make a prediction for whether an arbitrarily large and complicated quantum computer works or not, and is therefore not a complete theory. I’d also argue that any theory where the wavefunction never collapses is simply Everett disguised in unfamiliar language – as far as nature is concerned, it’s only the equations that matter.
Luke: In chapter 11 you write that:
Whereas most of my physics colleagues would say that our external physical reality is (at least approximately) described by mathematics, I’m arguing that it is mathematics (more specifically, a mathematical structure). In other words, I’m making a much stronger claim. Why? …If a future physics textbook contains the coveted Theory of Everything (ToE), then its equations are a complete description of the mathematical structure that is the external physical reality. I’m writing is rather than corresponds to here, because if two structures are equivalent, then there’s no meaningful sense in which they’re not one and the same…
Do you have physics colleagues who assume external reality exists and that it can be described by mathematics, but who don’t accept your Mathematical Universe Hypothesis? If so, what are their counter-arguments?
Max: Interestingly, I haven’t heard any clearly articulated counter-arguments from physics colleagues. Rather, it’s a bit like with the unfriendly AI X-risk argument: the scientists I know who are unconvinced by the conclusion don’t take issue with specific logical steps in the argument, but lack sufficient interest in the question to have familiarized themselves with the argument.
If you want to classify people’s views, it boils down to two logically separate questions:
1. Is our external physical reality completely described by mathematics?
2. Can something be perfectly described by mathematics (having no properties except mathematical properties) but still not be a mathematical structure?
The people I’ve heard answer “no” to 1) tend to do so not based on evidence or a logical argument, but based on a preference for a non-mathematical free will, soul or deity. The people I’ve heard answer “no” to 2) often conflate the description with the described, within mathematics itself. This ties in with the important question about whether mathematics is invented or discovered – a famous controversy among mathematicians and philosophers.
Our language for describing the planet Neptune (which we obviously invent – we invented a different word for it in Swedish) is of course distinct from the planet itself, which we discovered. Analogously, we humans invent the language of mathematics (the symbols, our human names for the symbols, etc.), but it’s important not to confuse this language with the structures of mathematics that I focus on in the book. For example, any civilization interested in Platonic solids would discover that there are precisely 5 such structures (the tetrahedron, cube, octahedron, dodecahedron and icosahedron). Whereas they’re free to invent whatever names they want for them, they’re not free to invent a 6th one – it simply doesn’t exist. It’s in the same sense that the mathematical structures that are popular in modern physics are discovered rather than invented, from 3+1-dimensional pseudo-Riemannian manifolds to Hilbert spaces. The possibility that I explore in the book is that one of the structures of mathematics (which we can discover but not invent) corresponds to the physical world (which we also discover rather than invent).
Luke: In chapter 13 you turn your attention to the future of physics, and the future of humanity within the physical world. In particular, you talk a lot about risks of human extinction, aka “existential risks.”
To summarize: the bad news is that there are lots of ways for humans to go extinct. The good news is that very few extinction risks are remotely likely in the next, say, 150 years. To illustrate this point, you provide this graphic:
[Figure: upcoming existential risks]
I like that graphic, and I think it’s basically right, except that:
• I’d downplay nuclear war as a fully existential risk (see here),
• I’d change “global pandemic” to “synthetic biology” to emphasize that it’s novel pathogens that might be capable of full-blown human extinction (rather than “mere” global catastrophe),
• and I’d add molecular nanotechnology as a major existential threat for the next 150 years.
I suspect the folks at Cambridge University’s Centre for the Study of Existential Risk (CSER) would make the same adjustments, as they also seem to be focusing on risks from synthetic biology, molecular nanotechnology, and AGI.
Do you think you’d agree with those adjustments, or is your basic picture somewhat different from mine on those points?
Max: I agree that “synthetic biology” is a better phrase, especially since the global pandemics I had in mind when making that figure were mainly human-made. The forms of molecular nanotechnology that I suspect pose the greatest existential risks are those that transcend the boundaries with synthetic biology or AI (already covered).
I disagree with the argument that we’ve overestimated nuclear war as an existential risk. Of course an all-out nuclear war couldn’t kill all humans instantly by literally blowing us up. However, I find the supposedly reassuring arguments you cite unconvincing, and had a spirited debate about this with one of the authors last year. To qualify something as an existential risk, we don’t need to prove that it will extinguish humanity – we simply need to establish reasonable doubt of the assertion that it can’t.
If the initial blasts disable much of our infrastructure and then nuclear winter lowers the summer temperature by about 20°C (36°F) in most of North America, Europe and Asia (as per Robock) to cause catastrophic crop failures, it’s not hard to imagine scenarios of truly existential proportions. Modern human society is a notoriously complex and hard-to-model system, so the scenarios I’m most concerned about involve complex interplays between multiple effects. For example, infrastructure breakdown might make it difficult to control either starvation-induced pandemics or armed gangs who systematically sweep the planet for food, weapons, etc. with little regard for sustainability. Without any serious attempts to model such complications, I don’t find the cited estimates of available biomass etc. particularly reassuring.
Luke: As Bostrom (2013) notes, humanity seems to be investing much more effort into (e.g.) dung beetle research than we are investing in research on near-term existential risks and how we might mitigate them. From your perspective, what can we do to cause research grantmakers (e.g. the NSF) and researchers (e.g. economists) to direct more of their effort toward research into near-term existential risks, especially the risks from novel technologies like synthetic biology and AGI?
Max: We need to draw more attention to these risks, so that people start thinking of them as real threats rather than science fiction fantasies. Organizations such as MIRI are an invaluable step in this direction.
We should aim to make more opinion leaders understand xrisks and make more people who understand xrisks into opinion leaders.
Luke: Thanks, Max! |
54c2941b32d5d823 | The aim of the FDMNES project is to supply the community with a user-friendly code to simulate x-ray spectroscopies linked to real absorption (XANES, XMCD) or resonant scattering (RXD) of synchrotron radiation. This ab initio approach aims to eliminate all methodological parameters. Originally mainly mono-electronic, using density functional theory (DFT), it now includes multi-electronic advances through the use of time-dependent DFT (TD-DFT) for a better account of the excited states involved in the photon-matter interaction. It also includes the Hubbard correction (LDA+U) for a better description of the so-called correlated materials.
The GNXAS package is a computer code for EXAFS data analysis based on multiple-scattering (MS) calculations and a rigorous fitting procedure of the raw experimental data. The main characteristics of the software are:
+ atomic phase shift calculations in the muffin-tin approximation, based on self-consistent relativistic atomic calculations with account taken of the neighboring atoms,
+ inclusion of inelastic losses through a complex Hedin-Lundqvist potential,
+ calculation of MS signals associated with two-, three-, and four-atom configurations using advanced algorithms,
+ use of an advanced fitting procedure that allows one:
- to fit simultaneously any number of spectra containing any number of edges,
- to use the raw data directly without any pre-analysis,
- to account for complex background multi-electron excitation features,
- to use various model peaks for the pair, triplet and quadruplet distribution functions, including non-Gaussian models and extreme cases; in all cases absolute parameters can be fitted,
- to treat liquid-phase or disordered systems and extract reliable g(r) functions in the short range,
- to perform a rigorous statistical error analysis and plot two-dimensional correlation maps,
- to provide a flexible scientific tool for EXAFS data analysis where the user has access to every stage of the calculation,
- full modularity that makes it easy to interface parts of the GNXAS software with other available software.
VASP is an ab initio simulation package based on DFT. It is used for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics from first principles. VASP computes an approximate solution to the many-body Schrödinger equation, either within density functional theory (DFT), solving the Kohn-Sham equations, or within the Hartree-Fock (HF) approximation, solving the Roothaan equations. Hybrid functionals that mix the Hartree-Fock approach with DFT are implemented as well. Furthermore, Green's function methods (GW quasiparticles and ACFDT-RPA) and many-body perturbation theory (2nd-order Møller-Plesset) are available. Central quantities, like the one-electron orbitals, the electronic charge density, and the local potential, are expressed in plane wave basis sets. The interactions between the electrons and ions are described using norm-conserving or ultrasoft pseudopotentials, or the projector-augmented-wave method. To determine the electronic ground state, VASP makes use of efficient iterative matrix diagonalisation techniques, like the residual minimisation method with direct inversion of the iterative subspace (RMM-DIIS) or blocked Davidson algorithms. These are coupled to highly efficient Broyden and Pulay density mixing schemes to speed up the self-consistency cycle.
XOP (includes SHADOWVUI)
|
395a25727a311206 | Thursday, April 19, 2018
What is good science?
When your understanding of physics establishes that empirical infinity is a large number and that its inverse is a small number, setting the scaling system of the universe, it soon becomes impossible to reliably observe what must be the foundational details.
It has been possible to image a neutron, sort of. Yet that neutron could be constructed from 600 dark matter elements or so, and each dark matter element is likely constructed from 1200 additional components, all while dropping the implied scale by a thousand and then by a million.
So far there is no plausible way to use our clumsy hardware to see any of this, and we may never be able to.
Good science starts with learning to collect observations, such as they are, in order to establish a phenomenon. It continues through learning to study observers and to find many of them. From that it is possible to refine your potential conjecture into something you can trust to test.
If it becomes impossible to collect data, then you must contrive blank-sheet conjectures that you then learn to bound and test. This is what we really have with quantum theory, and it has been a fruitful approach to the problem of seeing the physical at the scales involved. My Cloud cosmology is orthogonal to the quantum approach and thus allows me to start with creation itself and self-assemble the universe to the point at which our observations become resolved.
Both are good science, as they attack the observations from two separate directions of inquiry. Bad science takes the form of manipulating data to produce desired conclusions, or of outright ignoring a phenomenon and badmouthing it.
What is good science?
Demanding that a theory is falsifiable or observable, without any subtlety, will hold science back. We need madcap ideas
Adam Becker is a writer and astrophysicist. He is currently a visiting scholar at the Office for History of Science and Technology at the University of California, Berkeley. His writing has appeared in New Scientist and on the BBC, among others. He is the author of What is Real? The Unfinished Quest for the Meaning of Quantum Physics (2018). He lives in Oakland in California.
The Viennese physicist Wolfgang Pauli suffered from a guilty conscience. He’d solved one of the knottiest puzzles in nuclear physics, but at a cost. ‘I have done a terrible thing,’ he admitted to a friend in the winter of 1930. ‘I have postulated a particle that cannot be detected.’
Despite his pantomime of despair, Pauli’s letters reveal that he didn’t really think his new sub-atomic particle would stay unseen. He trusted that experimental equipment would eventually be up to the task of proving him right or wrong, one way or another. Still, he worried he’d strayed too close to transgression. Things that were genuinely unobservable, Pauli believed, were anathema to physics and to science as a whole.
Pauli’s views persist among many scientists today. It’s a basic principle of scientific practice that a new theory shouldn’t invoke the undetectable. Rather, a good explanation should be falsifiable – which means it ought to rely on some hypothetical data that could, in principle, prove the theory wrong. These interlocking standards of falsifiability and observability have proud pedigrees: falsifiability goes back to the mid-20th-century philosopher of science Karl Popper, and observability goes further back than that. Today they’re patrolled by self-appointed guardians, who relish dismissing some of the more fanciful notions in physics, cosmology and quantum mechanics as just so many castles in the sky. The cost of allowing such ideas into science, say the gatekeepers, would be to clear the path for all manner of manifestly unscientific nonsense.
But for a theoretical physicist, designing sky-castles is just part of the job. Spinning new ideas about how the world could be – or in some cases, how the world definitely isn’t – is central to their work. Some structures might be built up with great care over many years, and end up with peculiar names such as inflationary multiverse or superstring theory. Others are fabricated and dismissed casually over the course of a single afternoon, found and lost again by a lone adventurer in the troposphere of thought.
That doesn’t mean it’s just freestyle sky-castle architecture out there at the frontier. The goal of scientific theory-building is to understand the nature of the world with increasing accuracy over time. All that creative energy has to hook back onto reality at some point. But turning ingenuity into fact is much more nuanced than simply announcing that all ideas must meet the inflexible standards of falsifiability and observability. These are not measures of the quality of a scientific theory. They might be neat guidelines or heuristics, but as is usually the case with simple answers, they’re also wrong, or at least only half-right.
Falsifiability doesn’t work as a blanket restriction in science for the simple reason that there are no genuinely falsifiable scientific theories. I can come up with a theory that makes a prediction that looks falsifiable, but when the data tell me it’s wrong, I can conjure some fresh ideas to plug the hole and save the theory.
The history of science is full of examples of this ex post facto intellectual engineering. In 1781, William and Caroline Herschel discovered the planet Uranus. Physicists of the time promptly set about predicting its orbit using Sir Isaac Newton’s law of universal gravitation. But in the following decades, as astronomers followed Uranus’s motion in its slow 84-year orbit around the Sun, they noticed that something was wrong. Uranus didn’t quite move as it should. Puzzled, they refined their measurements, took more and more careful observations, but the anomaly didn’t go away. Newton’s physics simply didn’t predict the location of Uranus over time.
But astronomers of the day didn’t claim that the unexpected data falsified Newtonian gravity. Instead, they proposed another explanation for the strange motion of Uranus: something large and unseen was tugging on the planet. Calculations showed that it would have to be another planet, as large as Uranus and even farther from the Sun. In 1846, the French astrophysicist Urbain Le Verrier predicted the location of this hypothetical planet. Unable to get any French observatories interested in the hunt, he sent the details of his prediction to colleagues in Germany. That night, they pointed their telescopes where Le Verrier had told them to look, and within half an hour they spotted the planet Neptune. Newtonian physics, rather than being falsified, had been fabulously vindicated – it had successfully predicted the exact location of an entire unseen planet.
For years, the mystery of Mercury was unsolved, without any suggestion that Newton was wrong
Flush with success, Le Verrier went after another planetary puzzle. Several years after his discovery of Neptune, it became clear to him and other astronomers that Mercury wasn’t moving as it was supposed to, either. The point in its orbit where it made its closest approach to the Sun, known as the perihelion, shifted a little more than Newton’s gravity said it should each Mercurial year, adding up to 43 extra arcseconds (a unit of angular measurement) over the course of a century. This is a tiny amount – less than one-30,000th of a full orbit around the Sun – but just as with Uranus before, the anomaly didn’t go away with persistent observation. It stubbornly remained, defying the ghost of Newton.
Once again, Newtonian gravity was not thrown out as falsified – at least, not immediately. Instead, Le Verrier tried the same trick again: pinning the anomaly on an unseen planet, a tiny rock so close to the Sun that it had been missed by all other astronomers throughout human history. He called the planet Vulcan, after the Roman god of the forge. Le Verrier and others sought Vulcan for years, lugging powerful telescopes to solar eclipses in an attempt to catch a glimpse of the unseen planet in the brief minutes of totality while the Sun was blocked by the Earth’s moon.
Le Verrier never found Vulcan. After his death in 1877, the astronomy community gave up the search, concluding that Vulcan simply wasn’t there. But even so, Newton’s gravity wasn’t discarded. Instead, astronomers of the time collectively shrugged and moved on. For years, the mystery of Mercury’s perihelion was unsolved, without any serious suggestion that Newton was wrong. Falsification was simply not on the menu.
Finally, in 1915, Albert Einstein used his brand-new theory of general relativity to show that he could succeed where Le Verrier had failed. General relativity was a new account of how gravity worked, superseding Newtonian physics – and it perfectly predicted the shift in the perihelion of Mercury. Einstein said he was ‘beside himself with joy’ when he realised that his theory could correctly solve this longstanding puzzle. Four years later, the British astronomer Arthur Eddington and his team took their powerful telescopes to an eclipse, not to hunt for Vulcan, but to confirm that starlight bent around the Sun as Einstein’s theory had predicted. They found that general relativity was right (though later investigations suggested that their results were marred by errors, despite reaching the correct conclusion); Einstein was instantly rocketed to fame as the man who had shown Newton wrong.
So Newtonian gravity was ultimately thrown out, but not merely in the face of data that threatened it. That wasn’t enough. It wasn’t until a viable alternative theory arrived, in the form of Einstein’s general relativity, that the scientific community entertained the notion that Newton might have missed a trick. But what if Einstein had never shown up, or had been incorrect? Could astronomers have found another way to account for the anomaly in Mercury’s motion? Certainly – they could have said that Vulcan was there after all, and was merely invisible to telescopes in some way.
This might sound somewhat far-fetched, but again, the history of science demonstrates that this kind of thing actually happens, and it sometimes works – as Pauli found out in 1930. At the time, new experiments threatened one of the core principles of physics, known as the conservation of energy. The data showed that in a certain kind of radioactive decay, electrons could fly out of an atomic nucleus with a range of speeds (and attendant energies) – even though the total amount of energy in the reaction should have been the same each time. That meant energy sometimes went missing from these reactions, and it wasn’t clear what was happening to it.
The Danish physicist Niels Bohr was willing to give up energy conservation. But Pauli wasn’t ready to concede the idea was dead. Instead, he came up with his outlandish particle. ‘I have hit upon a desperate remedy to save … the energy theorem,’ he wrote. The new particle could account for the loss of energy, despite having almost no mass and no electric charge. But particle detectors at the time had no way of seeing a chargeless particle, so Pauli’s proposed solution was invisible.
Nonetheless, rather than agreeing with Bohr that energy conservation had been falsified, the physics community embraced Pauli’s hypothetical particle: what came to be known as a ‘neutrino’ (the little neutral one), once the Italian physicist Enrico Fermi refined the theory a few years later. The happy epilogue was that neutrinos were finally observed in 1956, with technology that had been totally unforeseen a quarter-century earlier: a new kind of particle detector deployed in conjunction with a nuclear reactor. Pauli’s ghostly particles were real; in fact, later work revealed that trillions of neutrinos from the Sun pass through our body every second, totally unnoticed and unobserved.
So invoking the invisible to save a theory from falsification is sometimes the right scientific move. Yet Pauli certainly didn’t believe that his particle could never be observed. He hoped that it could be seen eventually, and he was right. Similarly, Einstein’s general relativity was vindicated through observation. Falsification just can’t be the answer, or at least not the whole answer, to the question of what makes a good theory. What about observability?
It’s certainly true that observation plays a crucial role in science. But this doesn’t mean that scientific theories have to deal exclusively in observable things. For one, the line between the observable and unobservable is blurry – what was once ‘unobservable’ can become ‘observable’, as the neutrino shows. Sometimes, a theory that postulates the imperceptible has proven to be the right theory, and is accepted as correct long before anyone devises a way to see those things.
Take the debate within physics in the second half of the 1800s about atoms. Some scientists believed that they existed, but others were deeply skeptical. Physicists such as Ludwig Boltzmann in Austria, James Clerk Maxwell in the United Kingdom and Rudolf Clausius in Germany were convinced by the chemical and physical evidence that atomic theory was correct. Others, such as the Austrian physicist Ernst Mach, were unimpressed.
Atoms were unobservable. Thus Mach condemned them as unreal and unnecessary
To Mach, atoms were a wholly unnecessary hypothesis. After all, anything that wasn’t observable couldn’t be considered a part of a good scientific theory – in fact, such things couldn’t even be considered real. To him, the archetype for a perfect scientific theory was thermodynamics, the study of heat. This was a set of empirical laws relating directly observable quantities such as the temperature, pressure and volume of a gas. The theory was complete and perfect as it was, and made no reference to anything unobservable at all.
But Boltzmann, Maxwell and Clausius had worked hard to show that Mach’s beloved thermodynamics was far from complete. Over the course of the rest of the 19th century, they and others, such as the American scientist Josiah Willard Gibbs, proved that the entirety of thermodynamics – and then some – could be re-derived from the simple assumption that atoms were real, and that all objects in everyday life were composed of a phenomenal number of them. While it was impossible in practice to predict the behaviour of every individual atom, in aggregate their behaviour obeyed regular patterns – and because there are so many atoms in everyday objects (way more than 100 billion billion of them in a thimbleful of air), those patterns were never visibly broken, even though they were the result only of statistical tendencies, not ironclad laws.
The idea of demoting the laws of thermodynamics to mere patterns was repugnant to Mach; invoking things too small to be seen was even worse. ‘I don’t believe that atoms exist!’ he blurted out during a talk by Boltzmann in Vienna. Atoms were too small to see even with the most powerful microscope that could possibly be built at the time. Indeed, according to calculations carried out by Maxwell and the Austrian scientist Josef Loschmidt, atoms were hundreds of times smaller than the wavelength of visible light – and would thus be forever hidden from view of any microscope relying on light waves. Atoms were unobservable. Thus Mach condemned them as unreal and unnecessary, extraneous to the practice of science.
Mach’s views were enormously influential in his native Austria and elsewhere in central Europe. His ideas led his compatriot Boltzmann to despair of convincing the rest of the physics community that atoms were real; this might have contributed to Boltzmann’s suicide in 1906. Yet physicists who did subscribe to Mach’s ideas often found themselves stymied in their work. Walter Kaufmann, a talented German experimental physicist, found in 1897 that cathode rays (the kind of rays used inside old TVs and computer monitors) had a constant ratio of charge to mass. But rather than accepting that cathode rays might consist of small particles with a fixed charge and mass, he heeded Mach’s warning not to postulate anything unobservable, and remained silent on the subject. Months later, the English physicist JJ Thomson found the same curious fact about cathode rays. But Mach’s views were less popular in England, and Thomson was comfortable suggesting the existence of a tiny particle that comprised cathode rays. He called it the electron, and won the Nobel Prize for its discovery in 1906 (as well as an eternal place in all introductory physics and chemistry textbooks).
Mach’s ideas certainly weren’t all bad; his writing inspired the young Einstein in his early work on relativity. Mach’s influence also extended to his godson, Pauli, the child of two fellow intellectuals in Vienna. Mach’s ideas played a major role in Pauli’s early intellectual development, and the words of his godfather were probably ringing in Pauli’s ears when he first suggested the idea of the neutrino.
Unlike Pauli, Einstein was not afraid of suggesting unobservable things. In 1905, the same year he published his theory of special relativity, he proposed the existence of the photon, the particle of light, to an unbelieving world. (He was not proven right about photons for nearly 20 years.) Mach’s ideas also inspired a vital movement in philosophy a generation later, known as logical positivism – broadly speaking, the idea that the only meaningful statements about the world were ones that could be directly verified through observation. Positivism originated in Vienna and elsewhere in the 1920s, and the brilliant ideas of the positivists played a major role in shaping philosophy from that time to the present day.
But what makes something ‘observable’? Are things that can be seen only with specialised implements observable? Some of the positivists said the answer was no, only the unvarnished data of our senses would suffice – so things seen in microscopes were therefore not truly real. But in that case, ‘we cannot observe physical things through opera glasses, or even through ordinary spectacles, and one begins to wonder about the status of what we see through an ordinary windowpane,’ the philosopher Grover Maxwell wrote in 1962.
Furthermore, Maxwell pointed out that the definition of what was ‘unobservable in principle’ depends on our best scientific theories and full understanding of the world, and so moves over time. Before the invention of the telescope, for example, the idea of an instrument that could make distant objects appear closer seemed impossible; consequently, a planet too faint to be seen with the naked eye, such as Neptune, would have been deemed ‘unobservable in principle’. Yet Neptune is undoubtedly there – and we’ve not only seen it, we sent Voyager 2 there in 1989. Similarly, what we consider unobservable in principle today might become observable in the future with the advent of new physical theories and observational technologies. ‘It is theory, and thus science itself, which tells us what is or is not … observable,’ Maxwell wrote. ‘There are no a priori or philosophical criteria for separating the observable from the unobservable.’
We use all of it, the observable and the unobservable, when we do science
Even where theories propose identical observable outcomes, some are provisionally accepted while others are flatly rejected. Say I publish a theory stating that there are invisible microscopic unicorns with flowing hair, spiralled horns and a taste for partial differential equations; these unicorns are responsible for the randomness of the quantum world, pushing and pulling subatomic particles to ensure that they obey the Schrödinger equation, simply because they like that equation more than any other. This theory is, by its nature, totally observationally identical with quantum mechanics. But it is a profoundly silly theory, and would (I hope) be rejected by all physicists were someone to publish it.
Putting aside this glib example, the choices we make between observationally identical theories have a big impact upon the practice of science. The American physicist Richard Feynman pointed out that two wildly different theories that have identical observational consequences can still give you different perspectives on problems, and lead you to different answers and different experiments to conduct in order to discover the next theory. So it’s not just the observable content of our scientific theories that matters. We use all of it, the observable and the unobservable, when we do science. Certainly, we are more wary about our belief in the existence of invisible entities, but we don’t deny that the unobservable things exist, or at least that their existence is plausible.
Some of the most interesting scientific work gets done when scientists develop bizarre theories in the face of something new or unexplained. Madcap ideas must find a way of relating to the world – but demanding falsifiability or observability, without any sort of subtlety, will hold science back. It’s impossible to develop successful new theories under such rigid restrictions. As Pauli said when he first came up with the neutrino, despite his own misgivings: ‘Only those who wager can win.’
Henry said...
I'm thinking that at least half this article could be mooted by clarifying the distinction made in science between a theory and a hypothesis. In English, the term theory has a vaguer meaning and can promote exactly this confusion. Yes, we need wild ideas in science, but as hypotheses, not as theories. Theories are what hypotheses become once they have sufficient evidentiary support to be considered proven.
Bob Podolsky said...
This article is a good illustration of the fact that new science is not created within the structure of science, but instead must involve ideas "outside the box" that science represents.
My father was Boris Podolsky, who predicted the discovery of "Quantum Entanglement" ("spooky action at a distance") in 1935 in a landmark paper with Einstein and Rosen. It was some 30 years before experimental technology caught up with theory and made the phenomenon "observable".
My father's explanation was that new science has to be "sufficiently crazy", by the standards of existing science, in order to have any chance of being a valuable addition to current scientific lore.
Bob Podolsky |
f57c5ead4cde493f | PERSYS - a program for the solution near the origin of the coupled-channel Schrödinger equations with a singular potential. The code PERSYS produces the regular solution of a system of coupled Schrödinger equations near the origin, where the potential exhibits a singularity due to the centrifugal, spin-orbit and Coulomb components. The solution is calculated by means of a highly accurate method, which consists of a perturbative technique in which the centrifugal term is taken as the reference potential while the remaining terms are treated as a perturbation. (Source: |
aef306369e2ce1b2 | San José State University
Thayer Watkins
Silicon Valley
& Tornado Alley
The Hartree-Fock Method for Finding
Self-Consistent Field Wave Functions
for Multi-electron Atoms
In 1926 Erwin Schrödinger published his work that provided physicists with a simple way to formulate the quantum dynamics of the electrons in an atom. The Schrödinger Equation is easy to formulate but in all but the simplest cases impossible to solve analytically and not easy to solve numerically. The procedure first involves declaring the Hamiltonian for the system. For example, the Hamiltonian for a helium atom is
H = K1 + K2 + V1 + V2 + V12
where K1 and K2 are the kinetic energies of the two electrons. V1 and V2 are the potential energies of the two electrons in the field of the two protons in the nucleus. V12 is the potential energy of the two electrons due to their interaction with each other.
If r1 and r2 are the distances of the two electrons from the nucleus and v1 and v2 are their velocities then the Hamiltonian reduces to
H = ½mv1² + ½mv2² − 2k/r1 − 2k/r2 + k/|r1 − r2|
where, in the last term, r1 and r2 are the position vectors of the two electrons with respect to the nucleus. The symbol k represents a constant that is the product of the constant for the electrostatic force and the square of the charge of an electron.
In the late 1920's Douglas Hartree began trying to find ways to simplify the numerical solution. He formulated the concept of the Self-Consistent Field. In this method the effect on a single electron of the rest of the electrons is assumed to reduce to a central field which is added to the central field established by the nucleus of the atom. Starting with an approximation of this central field the wave function of the single electron can be found. This wave function is then used to determine the central field of the other electrons. This procedure is applied iteratively until the solutions converge, or at least do not change by a significant amount.
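The iteration just described can be written down schematically. The sketch below is not Hartree's original numerical procedure (he integrated radial wave functions on a grid); it is a generic self-consistent field loop for an already-discretized model, in which h_core, coulomb_kernel and n_occ are placeholder inputs: build an effective one-electron Hamiltonian from the current electron density, solve it, rebuild the density from the occupied orbitals, and repeat until the density stops changing.

```python
import numpy as np

def scf_loop(h_core, coulomb_kernel, n_occ, max_iter=200, tol=1e-8):
    """Schematic self-consistent field iteration (not Hartree's actual code).

    h_core         : (N, N) one-electron Hamiltonian (kinetic energy + nuclear attraction)
    coulomb_kernel : (N, N) matrix; K[i, j] models the repulsion between basis sites i and j
    n_occ          : number of occupied one-electron orbitals
    """
    density = np.zeros(h_core.shape[0])          # initial guess: no screening at all
    for _ in range(max_iter):
        # Mean field felt by one electron due to the charge density of the others.
        h_eff = h_core + np.diag(coulomb_kernel @ density)

        # Solve the one-electron problem in the current central field.
        energies, orbitals = np.linalg.eigh(h_eff)

        # Rebuild the density from the lowest n_occ orbitals.
        new_density = (np.abs(orbitals[:, :n_occ]) ** 2).sum(axis=1)

        if np.linalg.norm(new_density - density) < tol:
            return energies[:n_occ], orbitals[:, :n_occ]
        density = 0.5 * density + 0.5 * new_density   # simple mixing of old and new densities

    raise RuntimeError("SCF iteration did not converge")
```

The mixing of old and new densities in the last line is a common practical touch: without some damping the bare iteration can oscillate rather than converge.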
Hartree's procedure involved computing the eigenvectors and their eigenvalues of the discrete version of the self-consistent field form of the Schrödinger equation. The eigenvectors corresponded to the wavefunctions for the electron and the eigenvalues to the negative of their energies. Hartree found that the computed eigenvalues corresponded well with the energies of the X-rays required to knock the various electrons out of their orbitals; i.e., with their ionization energies.
Shortly after Hartree published his method in 1928, J. A. Gaunt found that the eigenvalue for an electron could be determined to a close approximation by computing the difference between the energy of the atom with the electron and the energy of the ion in which the electron is missing.
Hartree assumed that the wave function of a multi-electron atom was the product of the wave functions of the individual electrons. John Slater in 1929 found that theoretically and empirically it was better to take the multi-electron wave function as being the determinant formed from the individual electron wave functions, which came to be known as the Slater determinant of the system. The Slater determinant automatically produces multi-electron wave functions that are antisymmetric.
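Slater's construction is easy to state concretely: evaluate each one-electron wave function at each electron coordinate, take the determinant of the resulting matrix, and normalize. The toy sketch below, with made-up one-dimensional orbitals as placeholders, only illustrates the antisymmetry; it is not tied to any particular atom.

```python
import numpy as np
from math import factorial, sqrt

def slater_determinant(orbitals, coords):
    """Value of the antisymmetric N-electron wave function built from one-electron orbitals.

    orbitals : list of N callables; orbitals[i](x) is the i-th one-electron wave function
    coords   : list of N electron coordinates
    """
    n = len(orbitals)
    # Matrix element (i, j) is orbital i evaluated at the coordinate of electron j.
    matrix = np.array([[orbitals[i](coords[j]) for j in range(n)] for i in range(n)])
    return np.linalg.det(matrix) / sqrt(factorial(n))

# Made-up one-dimensional "orbitals", just to exhibit the sign change under exchange.
phi = [lambda x: np.exp(-x**2), lambda x: x * np.exp(-x**2)]
print(slater_determinant(phi, [0.3, 1.1]))   # equals minus the value for coords [1.1, 0.3]
```

Swapping two electron coordinates swaps two columns of the matrix, so the determinant, and hence the wave function, changes sign automatically.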
Vladimir Fock modified Hartree's method so as to obtain at each step wave functions that satisfy the theoretical requirements of a solution to the n-electron problem.
Tjalling Koopmans published a further refinement in 1934. He found that in general the spin-orbitals can be chosen in such a way that the matrix of interaction energies is diagonal, and thus the eigenvalues are simply equal to the diagonal elements. See Koopmans' Theorem for more on this.
In the 1920's there was a competition between the matrix mechanics of Werner Heisenberg, based on infinite matrices, and the wave mechanics of Erwin Schrödinger, based on partial differential equations. Schrödinger showed that the two formulations are equivalent, and over time the theory was couched in terms of Schrödinger's formulation, but when anyone does numerical computation they are in effect utilizing Heisenberg's matrix mechanics with the infinite matrices truncated. For understanding the computations and the approximations involved in solving a physical system, a matrix formulation has definite advantages. For that reason a matrix version of a model for a helium atom is given in Matrix Model of Helium.
HOME PAGE OF applet-magic
HOME PAGE OF Thayer Watkins |
8df49b236766b64a | Schedule Jan 13, 2012
Quantum transport on carbon nanotori in nanodevices and metamaterials - from effective models to non-equilibrium Green's function methods.
Mark A. Jack (FAMU), Mario Encinosa (FAMU), John Williamson (FAMU), Adam Byrd (FAMU), Leon W. Durivage (Winona State U)
Graphene-based allotropes such as carbon nanorings hold the promise of completely new nanodevice and metamaterials applications due to the effects of magnetic flux and curvature on quantum transport on a nanoscale toroidal surface and the coherence of the resulting electromagnetic moments. Unique electronic and optical characteristics will emerge due to the compactification of the honeycomb lattice structure of a flat graphene sheet to a two-dimensional manifold with toroidal geometry. Additional modular symmetries are predicted to significantly impact the energy band structure and transport properties of physically distinct nanotori with different chiralities and dimensions and thus drastically reduce the number of spectrally distinct ring geometries. In addition to persistent current and Aharonov-Bohm effects under magnetic flux, new electromagnetic field distributions such as a new toroidal moment will be generated by the ring currents. In a metamaterial of a regular two- or three-dimensional lattice of these aligned nanoconstituents a significant enhancement of these quantum signatures may be expected due to the coherence of the individual electromagnetic responses. In an effective model, the Hamiltonian for a single charge constrained to motion near a toroidal helix with loops of arbitrary eccentricity is developed and the resulting three-dimensional Schrödinger equation is reduced to an effective one-dimensional equation inclusive of curvature effects in the form of two effective curvature potentials. The magnitude of the toroidal moment generated by the current depends strongly on the component of the magnetic field normal to the toroidal plane. A strong dependence on coil eccentricity is also observed. In a theoretical sense, the curvature potential terms are necessary to preserve the hermiticity of the minimal prescription Hamiltonian. This effective model may also elucidate how a surface current may be driven by a properly polarized incoming electromagnetic wave front to generate a specific multipole response. Alternatively, electron transport on the carbon nanotorus is calculated in a tight-binding model for armchair and zigzag carbon nanotori between metallic leads using a recursive non-equilibrium Green's function method. Density of states, transmission function and source-drain current are calculated for realistic system sizes of 10,000 carbon atoms and more. An object-oriented C++ code was developed using parallel sparse matrix software libraries such as PETSc (Portable, Extensible Toolkit for Scientific Computation) with additional MPI parallelism to evaluate the transport Green's function at different energies. This fast and numerically precise tool on a multi-core architecture can incorporate additional effects such as electron-phonon coupling due to low-energy phonon modes, exciton transport, or electron-plasmon coupling terms in second- or third-nearest-neighbor type calculations.
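As a rough illustration of the kind of transmission calculation mentioned above (and emphatically not the authors' PETSc/C++ code), the sketch below applies the standard NEGF formula T(E) = Tr[Γ_L G Γ_R G†] to a tiny tight-binding chain, with wide-band-limit lead self-energies assumed on the end sites; the hopping and coupling values are arbitrary.

```python
# A minimal NumPy sketch of the NEGF transmission formula for a toy
# tight-binding chain between two leads, using wide-band-limit self-energies.
import numpy as np

n_sites, t, gamma = 8, -1.0, 0.5            # chain length, hopping, lead coupling (assumed)

# Device Hamiltonian: nearest-neighbour tight-binding chain
h = t * (np.diag(np.ones(n_sites - 1), 1) + np.diag(np.ones(n_sites - 1), -1))

# Wide-band lead self-energies on the first and last site (an assumption here)
sigma_l = np.zeros((n_sites, n_sites), dtype=complex)
sigma_r = np.zeros((n_sites, n_sites), dtype=complex)
sigma_l[0, 0] = -0.5j * gamma
sigma_r[-1, -1] = -0.5j * gamma
gamma_l = 1j * (sigma_l - sigma_l.conj().T)     # broadening matrices
gamma_r = 1j * (sigma_r - sigma_r.conj().T)

def transmission(energy, eta=1e-6):
    """Retarded Green's function and Landauer transmission at one energy."""
    g = np.linalg.inv((energy + 1j * eta) * np.eye(n_sites) - h - sigma_l - sigma_r)
    return np.real(np.trace(gamma_l @ g @ gamma_r @ g.conj().T))

for e in np.linspace(-2.5, 2.5, 6):
    print(f"E = {e:+.2f}  T(E) = {transmission(e):.3f}")
```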
Author entry (protected) |
9480d6710fa1ec8f | How Does Light Travel?
Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. Whereas scientists went back and forth between the notion that light was a particle or a wave until the modern era, the 20th century led to breakthroughs that showed us that it behaves as both.
These included the discovery of the electron, the development of quantum theory, and Einstein's Theory of Relativity. However, there remain many unanswered questions about light, many of which arise from its dual nature. For instance, how is it that light can apparently be without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?
Theory of Light to the 19th Century:
During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.
In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.
The first edition of Newton’s Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light (1704). Credit: Public Domain.
Newton’s corpuscular theory was an elaboration of his view of reality as an interaction of material points through forces. This theory would remain the accepted scientific view for more than 100 years, the principles of which were explained in his 1704 treatise “Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light“. According to Newton, the principles of light could be summed as follows:
• Every source of light emits large numbers of tiny particles known as corpuscles in a medium surrounding the source.
• These corpuscles are perfectly elastic, rigid, and weightless.
This represented a challenge to "wave theory", which had been advocated by 17th century Dutch astronomer Christiaan Huygens. These theories were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his Traité de la lumière ("Treatise on Light"). In it, he argued a revised version of Descartes' views, in which light travels at a finite speed and is propagated by means of spherical waves emitted along the wave front.
Double-Slit Experiment:
By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.
The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through it.
According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.
The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead to scientists considering yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.
Electromagnetism and Special Relativity:
Prior to the 19th and 20th centuries, the speed of light had already been determined. The first recorded measurements were performed by Danish astronomer Ole Rømer, who in 1676 used timing observations of Jupiter's moon Io to show that light travels at a finite speed (rather than instantaneously).
Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the meeting of the American Association for the Advancement of Science on Dec. 28th, 1934. Credit: AP Photo
By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell's equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. From the measured electric and magnetic constants that appear in those equations, he was able to calculate the speed of light in a vacuum (represented as c).
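A quick way to see the electromagnetic origin of c is to combine the vacuum permeability and permittivity, since Maxwell's theory predicts c = 1/√(μ₀ε₀). The short sketch below checks this with standard values.

```python
# A quick check of the electromagnetic prediction c = 1 / sqrt(mu_0 * epsilon_0),
# using the classical value of the vacuum permeability and the CODATA permittivity.
import math

mu_0 = 4e-7 * math.pi          # vacuum permeability, H/m
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:,.0f} m/s")     # ~299,792,458 m/s
```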
In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light was the same in all inertial reference frames, regardless of the motion of the light source or the position of the observer.
Exploring the consequences of this theory is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light and accounted for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.
For one, it introduced the idea that major changes occur when things move close to the speed of light, including the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was determined to be 299,792,458 m/s in 1975.
Einstein and the Photon:
In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light. Known as the photoelectric effect, Einstein based his idea on Planck's earlier work with "black bodies" – materials that absorb electromagnetic energy instead of reflecting it (unlike white bodies, which reflect it).
At the time, Einstein's photoelectric effect was an attempt to explain the "black body problem", in which a black body emits electromagnetic radiation due to the object's heat. This was a persistent problem in the world of physics, arising from the discovery of the electron, which had happened only eight years previously (thanks to British physicists led by J.J. Thomson and experiments using cathode ray tubes).
At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein's explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – a quantized form of light that came to be called the "photon". For this discovery, Einstein was awarded the Nobel Prize in 1921.
Wave-Particle Duality:
Subsequent theories on the behavior of light would further refine this idea, which included French physicist Louis-Victor de Broglie calculating the wavelength at which light functioned. This was followed by Heisenberg's "uncertainty principle" (which stated that accurately measuring the position of a photon disturbs measurements of its momentum, and vice versa), and Schrödinger's proposal that all particles have a "wave function".
In accordance with the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. At some location, a measurement will cause the wave function to randomly "collapse", or rather "decohere", to a sharply peaked function. This was illustrated in Schrödinger's famous thought experiment involving a closed box, a cat, and a vial of poison (known as the "Schrödinger's Cat" paradox).
Artist's impression of two photons travelling at different wavelengths, resulting in different-colored light. Credit: NASA/Sonoma State University/Aurore Simonnet
According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass, this equation has solutions; but for particles with no mass, no solution existed. Further double-slit experiments, in which measuring devices were incorporated to observe the photons as they passed through the slits, confirmed the dual nature of photons.
When this was done, the photons appeared in the form of particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. By placing an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.
The development of Quantum Field Theory (QFT) in the following decades resolved much of the ambiguity around wave-particle duality. And in time, this theory was shown to apply to other particles and fundamental forces of interaction (such as the weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as bosons – a class of subatomic particles that are force carriers and have no mass.
So how does light travel? Basically, traveling at incredible speeds (299 792 458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through mediums (like air and water) as well as space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).
What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrodinger, we have learned much, but still have much to learn.
For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!
We have written many articles about light here at Universe Today. For example, here’s How Fast is the Speed of Light?, How Far is a Light Year?, What is Einstein’s Theory of Relativity?
If you’d like more info on light, check out these articles from The Physics Hypertextbook and NASA’s Mission Science page.
We’ve also recorded an entire episode of Astronomy Cast all about Interstellar Travel. Listen here, Episode 145: Interstellar Travel.
What Are The Parts Of An Atom?
Since the beginning of time, human beings have sought to understand what the universe and everything within it is made up of. And while ancient magi and philosophers conceived of a world composed of four or five elements – earth, air, water, fire (and metal, or consciousness) – by classical antiquity, philosophers began to theorize that all matter was actually made up of tiny, invisible, and indivisible atoms.
Since that time, scientists have engaged in a process of ongoing discovery with the atom, hoping to discover its true nature and makeup. By the 20th century, our understanding became refined to the point that we were able to construct an accurate model of it. And within the past decade, our understanding has advanced even further, to the point that we have come to confirm the existence of almost all of its theorized parts.
Cosmologist Thinks a Strange Signal May Be Evidence of a Parallel Universe
In the beginning, there was chaos.
Although they are often the stuff of science fiction, parallel universes play a large part in our understanding of the cosmos. According to the theory of eternal inflation, bubble universes apart from our own are theorized to be constantly forming, driven by the energy inherent to space itself.
Like soap bubbles, bubble universes that grow too close to one another can and do stick together, if only for a moment. Such temporary mergers could make it possible for one universe to deposit some of its material into the other, leaving a kind of fingerprint at the point of collision.
Ranga-Ram Chary, a cosmologist at the California Institute of Technology, believes that the CMB is the perfect place to look for such a fingerprint.
The cosmic microwave background (CMB), a pervasive glow made of light from the Universe’s infancy, as seen by the Planck satellite in 2013. Tiny deviations in average temperature are represented by color. Credit: ESA and the Planck Collaboration.
After careful analysis of the spectrum of the CMB, Chary found a signal that was about 4500x brighter than it should have been, based on the number of protons and electrons scientists believe existed in the very early Universe. Indeed, this particular signal — an emission line that arose from the formation of atoms during the era of recombination — is more consistent with a Universe whose ratio of matter particles to photons is about 65x greater than our own.
There is a 30% chance that this mysterious signal is just noise, and not really a signal at all; however, it is also possible that it is real, and exists because a parallel universe dumped some of its matter particles into our own Universe.
After all, if additional protons and electrons had been added to our Universe during recombination, more atoms would have formed. More photons would have been emitted during their formation. And the signature line that arose from all of these emissions would be greatly enhanced.
Chary himself is wisely skeptical.
“Unusual claims like evidence for alternate Universes require a very high burden of proof,” he writes.
Indeed, the signature that Chary has isolated may instead be a consequence of incoming light from distant galaxies, or even from clouds of dust surrounding our own galaxy.
So is this just another case of BICEP2? Only time and further analysis will tell.
Chary has submitted his paper to the Astrophysical Journal. A preprint of the work is available here.
The Journey of Light, From the Stars to Your Eyes
This week, millions of people will turn their eyes to the skies in anticipation of the 2015 Perseid meteor shower. But what happens on less eventful nights, when we find ourselves gazing upward simply to admire the deep, dark, star-spangled sky? Far away from the glow of civilization, we humans can survey thousands of tiny pinpricks of light. But how? Where does that light come from? How does it make its way to us? And how do our brains sort all that incoming energy into such a profoundly breathtaking sight?
Our story begins lightyears away, deep in the heart of a sun-like star, where gravity's immense inward pressure keeps temperatures high and atoms disassembled. Free protons hurtle around the core, occasionally attaining the blistering energies necessary to overcome their electromagnetic repulsion, collide, and stick together in pairs.
Proton-proton fusion in a sun-like star. Credit: Borb
So-called diprotons are unstable and tend to disband as quickly as they arise. And if it weren’t for the subatomic antics of the weak nuclear force, this would be the end of the line: no fusion, no starlight, no us. However, on very rare occasions, a process called beta decay transforms one proton in the pair into a neutron. This new partnership forms what is known as deuterium, or heavy hydrogen, and opens the door to further nuclear fusion reactions.
Indeed, once deuterium enters the mix, particle pileups happen far more frequently. A free proton slams into deuterium, creating helium-3. Additional impacts build upon one another to forge helium-4 and heavier elements like oxygen and carbon.
Such collisions do more than just build up more massive atoms; in fact, every impact listed above releases an enormous amount of energy in the form of gamma rays. These high-energy photons streak outward, providing thermonuclear pressure that counterbalances the star’s gravity. Tens or even hundreds of thousands of years later, battered, bruised, and energetically squelched from fighting their way through a sun-sized blizzard of other particles, they emerge from the star’s surface as visible, ultraviolet, and infrared light.
But this is only half the story. The light then has to stream across vast reaches of space in order to reach the Earth – a process that, provided the star of origin is in our own galaxy, can take anywhere from 4.2 years to many thousands of years! At least… from your perspective. Since photons are massless, they don’t experience any time at all! And even after eluding what, for any other massive entity in the Universe, would be downright interminable flight times, conditions still must align so that you can see even one twinkle of the light from a faraway star.
That is, it must be dark, and you must be looking up.
Credit: Bruce Blaus
The incoming stream of photons then makes its way through your cornea and lens and onto your retina, a highly vascular layer of tissue that lines the back of the eye. There, each tiny packet of light impinges upon one of two types of photoreceptor cell: a rod, or a cone.
Most photons detected under the low-light conditions of stargazing will activate rod cells. These cells are so light-sensitive that, in dark enough conditions, they can be excited by a single photon! Rods cannot detect color, but are far more abundant than cones and are found all across the retina, including around the periphery.
The less numerous, more color-hungry cone cells are densely concentrated at the center of the retina, in a region called the fovea (this explains why dim stars that are visible in your side vision suddenly seem to disappear when you attempt to look at them straight-on). Despite their relative insensitivity, cone cells can be activated by very bright starlight, enabling you to perceive stars like Vega as blue and Betelgeuse as red.
But whether bright light or dim, every photon has the same endpoint once it reaches one of your eyes’ photoreceptors: a molecule of vitamin A, which is bound together with a specialized protein called an opsin. Vitamin A absorbs the light and triggers a signal cascade: ion channels open and charged particles rush across a membrane, generating an electrical impulse that travels up the optic nerve and into the brain. By the time this signal reaches your brain’s visual cortex, various neural pathways are already hard at work translating this complex biochemistry into what you once thought was a simple, intuitive, and poetic understanding of the heavens above…
The stars, they shine.
So the next time you go outside in the darker hours, take a moment to appreciate the great lengths it takes for just a single twinkle of light to travel from a series of nuclear reactions in the bustling center of a distant star, across the vastness of space and time, through your body’s electrochemical pathways, and into your conscious mind.
It gives every last one of those corny love songs new meaning, doesn’t it?
Does Light Experience Time?
Have you ever noticed that time flies when you’re having fun? Well, not for light. In fact, photons don’t experience any time at all. Here’s a mind-bending concept that should shatter your brain into pieces.
As you might know, I co-host Astronomy Cast, and get to pick the brain of the brilliant astrophysicist Dr. Pamela Gay every week about whatever crazy thing I think of in the shower. We were talking about photons one week and she dropped a bombshell on my brain. Photons do not experience time.
Just think about that idea. From the perspective of a photon, there is no such thing as time. It's emitted, and might exist for hundreds of trillions of years, but for the photon, there's zero time elapsed between when it's emitted and when it's absorbed again. It doesn't experience distance either.
Since photons can’t think, we don’t have to worry too much about their existential horror of experiencing neither time nor distance, but it tells us so much about how they’re linked together. Through his Theory of Relativity, Einstein helped us understand how time and distance are connected.
Let’s do a quick review. If we want to travel to some distant point in space, and we travel faster and faster, approaching the speed of light our clocks slow down relative to an observer back on Earth. And yet, we reach our destination more quickly than we would expect. Sure, our mass goes up and there are enormous amounts of energy required, but for this example, we’ll just ignore all that.
If you could travel at a constant acceleration of 1 g, you could cross billions of light years in a single human generation. Of course, your friends back home would have experienced billions of years in your absence, but much like the mass increase and energy required, we won’t worry about them.
The closer you get to light speed, the less time you experience and the shorter a distance you experience. You may recall that these numbers begin to approach zero. According to relativity, mass can never move through the Universe at light speed. Mass will increase to infinity, and the amount of energy required to move it any faster will also be infinite. But for light itself, which is already moving at light speed… You guessed it, the photons reach zero distance and zero time.
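A small numerical sketch makes the point: using the Lorentz factor γ = 1/√(1 − v²/c²), the traveller's on-board (proper) time for a fixed trip shrinks toward zero as v approaches c, while the Earth-frame time stays finite. The 4.2-light-year distance below is just an illustrative figure.

```python
# A sketch of the Lorentz factor: as v approaches c, the traveller's elapsed
# time for a fixed journey shrinks toward zero while the Earth clock does not.
import math

C = 299_792_458.0                      # speed of light, m/s
distance_ly = 4.2                      # illustrative trip length, in light years

for fraction in (0.5, 0.9, 0.99, 0.9999):
    v = fraction * C
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    earth_years = distance_ly / fraction          # coordinate time for the trip
    traveller_years = earth_years / gamma         # proper time on board
    print(f"v = {fraction:.4f} c  gamma = {gamma:7.2f}  "
          f"ship clock: {traveller_years:6.3f} yr  Earth clock: {earth_years:6.3f} yr")
```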
Photons can take hundreds of thousands of years to travel from the core of the Sun until they reach the surface and fly off into space. And yet, that final journey, that could take it billions of light years across space, was no different from jumping from atom to atom.
There, now these ideas can haunt your thoughts as they do mine. You’re welcome. What do you think? What’s your favorite mind bending relativity side effect? Tell us in the comments below.
What are Photons
When we think about light we don't really think about what it is made of. This was actually the subject of one of the most important arguments in physics. For the longest time physicists and scientists tried to determine whether light was a wave or a particle. The physicists of the eighteenth century strongly believed that light was made of basic units, but certain properties like refraction caused light to be reclassified as a wave. It would take no less than Einstein to resolve the issue. Thanks to him and the work of other renowned physicists, we know more about what photons are.
To put it simply, photons are the fundamental particles of light. They have a unique property in that they are both a particle and a wave. This is what gives photons wave-like behaviors such as refraction and diffraction. However, light particles are not quite the same as other elementary particles. They have interesting characteristics that are not commonly observed. First, as of right now physicists theorize that photons have no mass. They have some characteristics of particles, like angular momentum, but their frequency is independent of the influence of mass. They also don't carry a charge.
Photons make up the entire electromagnetic spectrum, of which visible light is only the portion our eyes can detect. This was one of the major breakthroughs that Einstein and the father of quantum physics, Planck, made about the nature of light. This link is what is behind the photoelectric effect that makes solar power possible. Because light is another form of energy, it can be transferred or converted into other types. In the case of the photoelectric effect, the energy of light photons is transferred through the photons striking the atoms of a given material. This causes the atoms that are hit to lose electrons and thus produce electricity.
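To make the energy bookkeeping concrete, here is a back-of-the-envelope sketch of the photoelectric threshold: a photon of wavelength λ carries energy E = hc/λ, and an electron is ejected only if that energy exceeds the material's work function (the 2.3 eV value below is an assumed, illustrative number, not a measured property of any particular metal).

```python
# A back-of-the-envelope sketch of the photoelectric threshold: a photon ejects
# an electron only if its energy E = h*c/lambda exceeds the work function.
PLANCK = 6.62607015e-34      # Planck constant, J*s
C = 299_792_458.0            # speed of light, m/s
EV = 1.602176634e-19         # joules per electronvolt
WORK_FUNCTION_EV = 2.3       # assumed work function (illustrative)

for wavelength_nm in (700, 550, 400, 250):
    energy_ev = PLANCK * C / (wavelength_nm * 1e-9) / EV
    ejected = "ejects an electron" if energy_ev > WORK_FUNCTION_EV else "no emission"
    print(f"{wavelength_nm} nm photon: {energy_ev:.2f} eV -> {ejected}")
```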
As mentioned before, photons played a key role in the founding of quantum physics. The study of the photon's properties opened up a whole new class of fundamental particles called quantum particles. Thanks to photons we know that all quantum particles have both the properties of waves and particles. We also know that energy can be discretely measured on a quantum scale.
Photons also played a big role in Einstein's theory of relativity. Without the photon we would not understand the importance of the speed of light, and with it the understanding of the interaction of time and space that it produced. We now know that the speed of light is an absolute that can't be exceeded by natural means, as that would require an infinite amount of energy, something that is not possible in our universe. So without the photon we would not have the knowledge about our universe that we now possess.
We have written many articles about photons for Universe Today. Here’s an article about how the sun shines, and here’s an article about why stars shine.
If you’d like more info on Photons, check out the Mass of the Photon. And here’s a link to an article about How Gravity Affects Photons.
We’ve also recorded an episode of Astronomy Cast all about the Atom. Listen here, Episode 164: Inside the Atom. |
cacd7e00113d5ca2 |
Mortimer Adler
Rogers Albritton
Alexander of Aphrodisias
Samuel Alexander
William Alston
Louise Antony
Thomas Aquinas
David Armstrong
Harald Atmanspacher
Robert Audi
Alexander Bain
Mark Balaguer
Jeffrey Barrett
William Barrett
William Belsham
Henri Bergson
George Berkeley
Isaiah Berlin
Richard J. Bernstein
Bernard Berofsky
Robert Bishop
Max Black
Susanne Bobzien
Emil du Bois-Reymond
Hilary Bok
Laurence BonJour
George Boole
Émile Boutroux
Michael Burke
Lawrence Cahoone
Joseph Keim Campbell
Rudolf Carnap
Nancy Cartwright
Gregg Caruso
Ernst Cassirer
David Chalmers
Roderick Chisholm
Randolph Clarke
Samuel Clarke
Anthony Collins
Antonella Corradini
Diodorus Cronus
Jonathan Dancy
Donald Davidson
Mario De Caro
Daniel Dennett
Jacques Derrida
René Descartes
Richard Double
Fred Dretske
John Dupré
John Earman
Laura Waddell Ekstrom
Austin Farrer
Herbert Feigl
Arthur Fine
John Martin Fischer
Frederic Fitch
Owen Flanagan
Luciano Floridi
Philippa Foot
Alfred Fouilleé
Harry Frankfurt
Richard L. Franklin
Bas van Fraassen
Michael Frede
Gottlob Frege
Peter Geach
Edmund Gettier
Carl Ginet
Alvin Goldman
Nicholas St. John Green
H.Paul Grice
Ian Hacking
Ishtiyaque Haji
Stuart Hampshire
Sam Harris
William Hasker
Georg W.F. Hegel
Martin Heidegger
Thomas Hobbes
David Hodgson
Shadsworth Hodgson
Baron d'Holbach
Ted Honderich
Pamela Huby
David Hume
Ferenc Huoranszki
Frank Jackson
William James
Lord Kames
Robert Kane
Immanuel Kant
Tomis Kapitan
Walter Kaufmann
Jaegwon Kim
William King
Hilary Kornblith
Christine Korsgaard
Saul Kripke
Thomas Kuhn
Andrea Lavazza
Christoph Lehner
Keith Lehrer
Gottfried Leibniz
Jules Lequyer
Michael Levin
Joseph Levine
George Henry Lewes
David Lewis
Peter Lipton
C. Lloyd Morgan
John Locke
Michael Lockwood
Arthur O. Lovejoy
E. Jonathan Lowe
John R. Lucas
Alasdair MacIntyre
Ruth Barcan Marcus
James Martineau
Storrs McCall
Hugh McCann
Colin McGinn
Michael McKenna
Brian McLaughlin
John McTaggart
Paul E. Meehl
Uwe Meixner
Alfred Mele
Trenton Merricks
John Stuart Mill
Dickinson Miller
Thomas Nagel
Otto Neurath
Friedrich Nietzsche
John Norton
Robert Nozick
William of Ockham
Timothy O'Connor
David F. Pears
Charles Sanders Peirce
Derk Pereboom
Steven Pinker
Karl Popper
Huw Price
Hilary Putnam
Willard van Orman Quine
Frank Ramsey
Ayn Rand
Michael Rea
Thomas Reid
Charles Renouvier
Nicholas Rescher
Richard Rorty
Josiah Royce
Bertrand Russell
Paul Russell
Gilbert Ryle
Jean-Paul Sartre
Kenneth Sayre
Moritz Schlick
Arthur Schopenhauer
John Searle
Wilfrid Sellars
Alan Sidelle
Ted Sider
Henry Sidgwick
Walter Sinnott-Armstrong
Saul Smilansky
Michael Smith
Baruch Spinoza
L. Susan Stebbing
Isabelle Stengers
George F. Stout
Galen Strawson
Peter Strawson
Eleonore Stump
Francisco Suárez
Richard Taylor
Kevin Timpe
Mark Twain
Peter Unger
Peter van Inwagen
Manuel Vargas
John Venn
Kadri Vihvelin
G.H. von Wright
David Foster Wallace
R. Jay Wallace
Ted Warfield
Roy Weatherford
C.F. von Weizsäcker
William Whewell
Alfred North Whitehead
David Widerker
David Wiggins
Bernard Williams
Timothy Williamson
Ludwig Wittgenstein
Susan Wolf
David Albert
Michael Arbib
Walter Baade
Bernard Baars
Jeffrey Bada
Leslie Ballentine
Gregory Bateson
John S. Bell
Mara Beller
Charles Bennett
Ludwig von Bertalanffy
Susan Blackmore
Margaret Boden
David Bohm
Niels Bohr
Ludwig Boltzmann
Emile Borel
Max Born
Satyendra Nath Bose
Walther Bothe
Jean Bricmont
Hans Briegel
Leon Brillouin
Stephen Brush
Henry Thomas Buckle
S. H. Burbury
Melvin Calvin
Donald Campbell
Sadi Carnot
Anthony Cashmore
Eric Chaisson
Gregory Chaitin
Jean-Pierre Changeux
Rudolf Clausius
Arthur Holly Compton
John Conway
Jerry Coyne
John Cramer
Francis Crick
E. P. Culverwell
Antonio Damasio
Olivier Darrigol
Charles Darwin
Richard Dawkins
Terrence Deacon
Lüder Deecke
Richard Dedekind
Louis de Broglie
Stanislas Dehaene
Max Delbrück
Abraham de Moivre
Paul Dirac
Hans Driesch
John Eccles
Arthur Stanley Eddington
Gerald Edelman
Paul Ehrenfest
Manfred Eigen
Albert Einstein
George F. R. Ellis
Hugh Everett, III
Franz Exner
Richard Feynman
R. A. Fisher
David Foster
Joseph Fourier
Philipp Frank
Steven Frautschi
Edward Fredkin
Lila Gatlin
Michael Gazzaniga
Nicholas Georgescu-Roegen
GianCarlo Ghirardi
J. Willard Gibbs
Nicolas Gisin
Paul Glimcher
Thomas Gold
A. O. Gomes
Brian Goodwin
Joshua Greene
Dirk ter Haar
Jacques Hadamard
Mark Hadley
Patrick Haggard
J. B. S. Haldane
Stuart Hameroff
Augustin Hamon
Sam Harris
Ralph Hartley
Hyman Hartman
John-Dylan Haynes
Donald Hebb
Martin Heisenberg
Werner Heisenberg
John Herschel
Basil Hiley
Art Hobson
Jesper Hoffmeyer
Don Howard
William Stanley Jevons
Roman Jakobson
E. T. Jaynes
Pascual Jordan
Ruth E. Kastner
Stuart Kauffman
Martin J. Klein
William R. Klemm
Christof Koch
Simon Kochen
Hans Kornhuber
Stephen Kosslyn
Daniel Koshland
Ladislav Kovàč
Leopold Kronecker
Rolf Landauer
Alfred Landé
Pierre-Simon Laplace
David Layzer
Joseph LeDoux
Gilbert Lewis
Benjamin Libet
David Lindley
Seth Lloyd
Hendrik Lorentz
Josef Loschmidt
Ernst Mach
Donald MacKay
Henry Margenau
Owen Maroney
Humberto Maturana
James Clerk Maxwell
Ernst Mayr
John McCarthy
Warren McCulloch
N. David Mermin
George Miller
Stanley Miller
Ulrich Mohrhoff
Jacques Monod
Emmy Noether
Alexander Oparin
Abraham Pais
Howard Pattee
Wolfgang Pauli
Massimo Pauri
Roger Penrose
Steven Pinker
Colin Pittendrigh
Max Planck
Susan Pockett
Henri Poincaré
Daniel Pollen
Ilya Prigogine
Hans Primas
Henry Quastler
Adolphe Quételet
Lord Rayleigh
Jürgen Renn
Juan Roederer
Jerome Rothstein
David Ruelle
Tilman Sauer
Jürgen Schmidhuber
Erwin Schrödinger
Aaron Schurger
Sebastian Seung
Thomas Sebeok
Claude Shannon
David Shiang
Abner Shimony
Herbert Simon
Dean Keith Simonton
B. F. Skinner
Lee Smolin
Ray Solomonoff
Roger Sperry
John Stachel
Henry Stapp
Tom Stonier
Antoine Suarez
Leo Szilard
Max Tegmark
Libb Thims
William Thomson (Kelvin)
Giulio Tononi
Peter Tse
Francisco Varela
Vlatko Vedral
Mikhail Volkenstein
Heinz von Foerster
Richard von Mises
John von Neumann
Jakob von Uexküll
John B. Watson
Daniel Wegner
Steven Weinberg
Paul A. Weiss
Herman Weyl
John Wheeler
Wilhelm Wien
Norbert Wiener
Eugene Wigner
E. O. Wilson
Stephen Wolfram
H. Dieter Zeh
Ernst Zermelo
Wojciech Zurek
Konrad Zuse
Fritz Zwicky
Free Will
Mental Causation
James Symposium
The Two-Slit Experiment and "One Mystery" of Quantum Mechanics
Richard Feynman said that the two-slit experiment contains the "one mystery" of quantum mechanics.
I will take just this one experiment, which has been designed to contain all of the mystery of quantum mechanics, to put you up against the paradoxes and mysteries and peculiarities of nature one hundred per cent. Any other situation in quantum mechanics, it turns out, can always be explained by saying, 'You remember the case of the experiment with the two holes? It's the same thing'.
We will show that the (one) mystery of quantum mechanics is how mere "probabilities" can causally control (statistically) the positions of material particles - how immaterial information can affect the material world. This remains a deep metaphysical mystery.
The two-slit experiment was until recent years for the most part a thought experiment, since it is difficult to build an inexpensive demonstration, but its predictions have been verified in many ways since the 1960's, primarily with electrons. Recently, extremely sensitive CCDs used in photography have been used to collect single-photon events, establishing experimentally everything that Albert Einstein imagined, merely by thinking about it, as early as 1905.
Light at the yellow dot slowly disappears as the second slit opens!
Adding light causes some light to disappear!
The two-slit experiment demonstrates better than any other experiment that a quantum wave function ψ is a probability amplitude that can interfere with itself, producing places where the probability |ψ|² (the square of the absolute value of the complex probability amplitude) of finding a quantum particle is actually zero.
Perhaps the most non-intuitive aspect of the two-slit experiment is when we first note the pattern of light on the screen with just one slit open, then open the second slit - admitting more light into the experiment - and observe that some places on the screen where there was visible light through one slit have now gone dark! And this happens even when we are admitting only one particle of light at a time. How, Feynman asked, can that single particle know that two slits are open?
Light waves are often compared to water waves, as are quantum probability waves, but this latter is a serious error. Water waves and light waves (as well as sound waves) contain something substantial like matter or energy. But quantum waves are just abstract information - mathematical possibilities. As Paul Dirac tells us, quantum wave functions are not substances.
Young's 1802 drawing of wave interference
Water waves in a pond
Dr. Quantum and the two-slit experiment
The cancellation of crests and troughs in the motion of water and other waves creates high and low points that have the same shape as the bright and dark areas found in the "fringes" of light at the sharp edges of an object. These interference patterns were predicted to occur in double-slit experiments by Thomas Young in the early nineteenth century.
The two-slit experiment is thought to demonstrate the famous "collapse" of the wave function or "reduction" of the wave packet, which shows an inherent probabilistic element in quantum mechanics that is irreducibly ontological and nothing like the epistemological indeterminacy (human ignorance) in classical statistical physics. We shall see below that the idea of the light wave "collapsing" instantaneously to become a particle was first seen by Einstein in 1905. That picture of the wave turning into a particle is a mistake, one still widely taught.
Note that the probability amplitude ψ is pure information. It is neither matter nor energy. When a wave function "collapses" or "goes through both slits" in this dazzling experiment, nothing material or energetic is traveling faster than the speed of light or going through both slits.
We argue that the particle of matter or energy always goes through just one slit, although the popular Copenhagen interpretation of physics claims we cannot know the particle path, that a path does not even exist until we make a measurement, that the particle may be in more than one place at the same time, and other similar nonsense that deeply bothered Einstein as he hoped for an "objective reality" independent of human observers. (David Bohm's "pilot-wave theory" agrees that an objectively real particle travels through one slit, guided by its pilot wave, which travels through both.)
A large number of panpsychists, some philosophers, and some scientists, believe that the mind of a conscious observer is needed to cause the collapse of the wave function.
There is something similar going on in the Einstein-Podolsky-Rosen thought experiments, where measurement of a particular spin component of one particle means that the other particle now has the exact opposite value of its same spin component, to conserve total spin zero. Nothing physical (matter or energy) is transmitted to the other "entangled" particle. The idea that an immaterial bit of information is "teleported" to the other particle is also mistaken. The anti-parallel spins are created together simultaneously, in a special frame of reference. There is no violation of relativity. We shall show that it is conservation of angular momentum or of spin that makes the state of the coherently entangled second particle determinate, however far away it might be after the measurement.
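A small simulation (not from this article) shows what "the exact opposite value" means statistically: for a spin singlet, standard quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between results measured along analyser angles a and b, with perfect anti-correlation when the angles coincide. The sketch below samples outcomes using the standard conditional probabilities.

```python
# A Monte Carlo sketch of singlet-state anti-correlation: measuring one
# particle's spin along angle a fixes the statistics of the other along b,
# with E(a, b) = -cos(a - b).
import math
import random

def singlet_pair(a, b):
    """Sample one run: return (+/-1, +/-1) outcomes along analyser angles a, b."""
    first = random.choice((+1, -1))                   # either outcome equally likely
    # Given the first result, quantum mechanics fixes the conditional probability
    # that the second analyser, rotated by (b - a), records the *same* sign.
    p_same = math.sin((b - a) / 2) ** 2
    second = first if random.random() < p_same else -first
    return first, second

def correlation(a, b, runs=200_000):
    return sum(x * y for x, y in (singlet_pair(a, b) for _ in range(runs))) / runs

for deg in (0, 45, 90, 180):
    ang = math.radians(deg)
    print(f"angle {deg:3d} deg: E = {correlation(0.0, ang):+.3f}  "
          f"(-cos = {-math.cos(ang):+.3f})")
```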
In the two-slit experiment, just as in the Dirac Three Polarizers experiment, the critical case to consider is just one photon or electron at a time in the experiment.
With one particle at a time (whether photon or electron), the quantum object is mistakenly described as interfering with itself, when interference is never seen in a single event. Interference only shows up in the statistics of large numbers of experiments.
Even in the one-slit case, interference fringes are visible when large numbers of particles are present, although this is rarely described in the context of the two-slit quantum mystery.
It is the fundamental relation between a particle and the associated wave that controls the particle's probable location that raises the "local reality" question, first seen in 1905 and described in 1909 by Albert Einstein. Thirty years later, the EPR paper and Erwin Schrödinger's insights into the wave function of two entangled particles first convinced physicists that there was a deep problem.
It was not for another thirty years that John Stewart Bell in 1964 imagined an experimental test that could confirm or deny quantum mechanics. Ironically, the goal of Bell's "theorem" was to invalidate the non-intuitive aspects of quantum mechanics and restore Einstein's hope for a more deterministic picture of an "objective reality" at, or perhaps even underlying below, the microscopic level of quantum physics.
At about the same time, in his famous Lectures on Physics at Cal Tech and the Messenger Lectures at Cornell, Richard Feynman described the two-slit experiment as demonstrating what he claimed is the "only mystery" of quantum mechanics.
We can thus begin the discussion of the two-slit experiment with a section from Feynman's sixth Messenger lecture entitled "Probability and Uncertainty." We provide the complete video and text of the lecture on this page, and a version starting with Feynman's provocative statement that "no one understands quantum mechanics" below.
How, Feynman asks, can a single particle go through both slits? Our answer (and David Bohm's) is that each particle goes through one slit, conserving matter and energy. The particle always goes through a single slit. A particle cannot be divided and be in two places at the same time. It is the wave function that interferes with itself. And the highly localized particle cannot be identified with the wave widely distributed in space. The wave function ψ is determined by solving the Schrödinger equation given the boundary conditions of the measuring apparatus (the container). We will see that the thing that goes through both slits is only immaterial information - the probability amplitude wave function ψ(t), if we solve the time-dependent Schrödinger equation.
The immaterial wave function exerts a causal influence over the particles, one that we can justifiably call "mysterious." It results in the statistics of many experiments agreeing with the quantum mechanical predictions, with increasing accuracy as we increase the number of identical experiments.
It is this "influence," no ordinary "force," that is Feynman's "only mystery" in quantum mechanics.
Let's look first at the one-slit case. We prepare a slit that is about the same size as the wavelength of the light in order to see the Fraunhofer diffraction effects most clearly. Parallel waves from a distant source fall on the slit from below. The diagram shows that the wave from the left edge of the slit interferes with the one from the right edge. If the slit width is d and the photon wavelength is λ, at an angle α ≈ λ/2d there will be destructive interference. At an angle α ≈ λ/d, there is constructive interference (which shows up as the fan-shaped brightening patterns in the interfering waves in the illustration). The constructive interference leads toward the peaks in the interference pattern.
The height of the function or curve on the top of the diagram is proportional to the number of photons falling along the screen. At first they are individual pixels in a CCD or grains in a photographic plate, but over time and very large numbers of photons they appear as the continuous gradients of light in the band below (we represent this intensity as the height of the function).
Now what happens if we add a second slit? Perhaps we should start by showing what happens if we run the experiment with the first slit open for a time, and then with the second slit open for an equal time. In this case, the height of the intensity curve is the sum of the curves for the individual slits.
But that is not the intensity curve we get when the two slits are open at the same time! Instead, we see many new interference fringes with much narrower angular widths α ≈ λ/D, where D is now the distance between the two slits. Note that the overall envelope of the curve is similar to that of one big slit of width D. Also note the many more fan-shaped regions of constructive interference in the overlapping waves.
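For readers who want numbers, the sketch below evaluates the textbook Fraunhofer formulas behind curves of this kind: a single-slit envelope set by the slit width d multiplying a cos² fringe term set by the slit separation D. The wavelength and slit dimensions are illustrative assumptions, and these are the standard formulas rather than the simplified edge-to-edge picture used above.

```python
# A sketch of the textbook intensity formulas: the double-slit fringes (spacing
# set by the separation D) sit under the single-slit diffraction envelope
# (set by the slit width d).
import numpy as np

wavelength = 500e-9          # 500 nm light (assumed)
d = 2e-6                     # slit width (assumed, for illustration)
D = 10e-6                    # centre-to-centre slit separation (assumed)

alpha = np.linspace(-0.15, 0.15, 9)                  # a few angles, in radians

beta = np.pi * d * np.sin(alpha) / wavelength        # single-slit phase term
delta = np.pi * D * np.sin(alpha) / wavelength       # two-slit phase term
envelope = np.sinc(beta / np.pi) ** 2                # single-slit envelope
double = envelope * np.cos(delta) ** 2               # intensity with both slits open

for a, e, i2 in zip(alpha, envelope, double):
    print(f"angle {a:+.3f} rad  one-slit envelope {e:.3f}  two-slit intensity {i2:.3f}")
```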
Remembering that the double-slit interference appears even if only one particle at a time is incident on the two slits, we see why many say that the particle interferes with itself. But it is the wave function alone that is interfering with itself. Whichever slit the particle goes through, it is the probability amplitude ψ, whose squared modulus |ψ|² gives us the probability of finding a particle somewhere, that produces the interference pattern. It is what it is because the two slits are open.
This is the deepest metaphysical mystery in quantum mechanics. How can an abstract probability wave influence the particle paths to show interference when large numbers of particles are collected?
Why interference patterns show up when both slits are open, even when particles go through just one slit, though we cannot know which slit or we lose the interference
When there is only one slit open (here the left slit), the probability pattern has one large maximum (directly behind the slit) and small side fringes. If only the right slit were open, this pattern would move behind the right slit.
If we add up the results of some experiments with the left slit open and others with the right open we don't see the multiple fringes that appear with two slits open.
When both slits are open, the maximum is now at the center between the two slits, there are more interference fringes, and these probabilities apply whichever slit the particle enters. The solution of the Schrödinger equation depends on the boundary conditions - different when two holes are open. The "one mystery" remains - how these "probabilities" can exercise causal control (statistically) over matter or energy particles.
Feynman's path integral formulation of quantum mechanics suggests the answer. His "virtual particles" explore all space (the "sum over paths") as they determine the variational minimum for least action, thus the resulting probability amplitude wave function can be said to "know" which holes are open.
Now let's slow down the opening and closing of the right-hand slit so we can see more closely what's happening.
The wave function depends on which slits are open, not on whether there is a particle in the experiment.
Collapse of the Wave Function
But how do we interpret the notion of the "collapse" of the wave function? At the moments just before a particle is detected at the CCD or photographic plate, there is a finite non-zero probability that the photon could be detected anywhere that the modulus (complex conjugate squared) of the probability amplitude wave function has a non-zero value.
If our experiment were physically very large (and it is indeed large compared to the atomic scale), we can say that the finite probability of detecting (potentially measuring) the particle at position x1 on the screen "collapses" (goes to zero) and reappears as part of the unit probability (certainty) that the particle is at x2, where it is actually measured.
Since the collapse to zero of the probability at x1 is instantaneous with the measurement at x2, critics of quantum theory like to say that something traveled faster than the speed of light. This is most clear in the nonlocality and entanglement aspects of the Einstein-Podolsky-Rosen experiment. But the sum of all the probabilities of measuring anywhere on the screen is not a physical quantity, it is only immaterial information that "collapses" to a point.
Here is what happens to the probability amplitude wave function (the blue waves) when the particle is detected at the screen (either a photographic plate or CCD) in the second interference fringe to the right (red spot). The probability simply disappears instantly.
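To make the bookkeeping explicit, here is a schematic numerical sketch (not from the article) of collapse as an update of pure information: a normalized probability pattern over screen positions is replaced, on detection, by certainty at the detected pixel, with the probability everywhere else instantly going to zero, just as described above. The pattern and positions are made up for illustration.

```python
# A schematic illustration of "collapse" as an information update: before
# detection the particle has a spread-out probability distribution over screen
# positions; after a detection at one pixel, that distribution is replaced by
# certainty at the detected spot.
import numpy as np

rng = np.random.default_rng(0)
positions = np.linspace(-1.0, 1.0, 11)

# A made-up double-slit-like probability pattern, normalized to sum to 1
pattern = np.cos(6 * positions) ** 2
pattern /= pattern.sum()
print("before detection:", np.round(pattern, 3), " total =", round(pattern.sum(), 3))

# One detection event, sampled from the pattern
hit = rng.choice(len(positions), p=pattern)

after = np.zeros_like(pattern)
after[hit] = 1.0                         # all probability now sits at the hit pixel
print(f"detected at x = {positions[hit]:+.1f}")
print("after detection: ", np.round(after, 3))
```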
Animation of a wave function collapsing
The first suggestion of two possible directions through a slit, one of which disappears ("collapses?") when the other is realized (implying a mysterious "nonlocal" correlation between the directions), was made by Albert Einstein at the 1927 Solvay conference on "Electrons and Photons." Niels Bohr remembered the occasion with a somewhat confusing description. Here is his 1949 recollection:
Note that they wanted Einstein's reaction to their work, but actually took little interest in Einstein's concern about the nonlocal implications of quantum mechanics, nor did they look at his work on electrons and photons, the theme of the conference!
photon passes through a slit
The "nonlocal" effects at point B are just that the probability of an electron being found at point B goes to zero instantly (not an "action at a distance") when an electron is localized at point A
Although Bohr seems to have missed Einstein's point completely, Werner Heisenberg at least came to explain it well. In his 1930 lectures at the University of Chicago, Heisenberg presented a critique of both particle and wave pictures, including a new example of nonlocality that Einstein had apparently developed since 1927. It includes Einstein's concern about "action-at-a-distance" that might violate his principle of relativity, and anticipates the Einstein-Podolsky-Rosen paradox. Heisenberg wrote:
Clearly the "kind of action (reduction of the wave packet)" described by Heisenberg is the same "mysterious" influence that the wave function has over the places that the particle will be found statistically in a large number of experiments, including our canonical "mystery," the two-slit experiment.
Apart from the statistical information in the wave function, quantum mechanics gives us only vague and uncertain information about any individual particle. This is the true source of Heisenberg's uncertainty principle. It is the reason that Einstein correctly describes quantum mechanics as "incomplete."
Quantum mechanics does not prove that the particle actually has no position at each instant and a path that conserves its momentum, spin, and other conserved properties.
In Einstein's view of "objective reality," the particle has those properties, even if quantum mechanics prevents us from knowing them - without a measurement that destroys their interference capabilities or "decoheres" them.
Some Other Animations of the Two-Slit Experiment
None of these animations, viewed many millions of times, can explain why a particle entering one slit when both are open exhibits the properties of waves characteristic of two open slits. It remains Feynman's "one mystery" of quantum mechanics.
PBS Digital Studios
Dr Quantum
Wave-Particle Duality Animation
One good thing in this animation is that it initially shows only particles firing at the slits. This is important historically because Isaac Newton thought that light was a stream of particles traveling along a light ray. He solved many problems in optics by tracing light rays through lenses. But without a clear verbal explanation it is hard to follow.
For Teachers
For Scholars
References from Physics World
T Young 1802 On the theory of light and colours (The 1801 Bakerian Lecture) Philosophical Transactions of the Royal Society of London 92 12-48
T Young 1804 Experiments and calculations relative to physical optics (The 1803 Bakerian Lecture) Philosophical Transactions of the Royal Society of London 94 1-16
T Young 1807 A Course of Lectures on Natural Philosophy and the Mechanical Arts (J Johnson, London)
G I Taylor 1909 Interference fringes with feeble light Proceedings of the Cambridge Philosophical Society 15 114-115
P A M Dirac 1958 The Principles of Quantum Mechanics (Oxford University Press) 4th edn p9
R P Feynman, R B Leighton and M Sands 1963 The Feynman Lecture on Physics (Addison-Wesley) vol 3 ch 37 (Quantum behaviour)
A Howie and J E Fowcs Williams (eds) 2002 Interference: 200 years after Thomas Young's discoveries Philosophical Transactions of the Royal Society of London 360 803-1069
R P Crease 2002 The most beautiful experiment Physics World September pp19-20. This article contains the results of Crease's survey for Physics World; the first article about the survey appeared on page 17 of the May 2002 issue.
Electron interference experiments
Visit for details of the Nobel prize awarded to Clinton Davisson and George Thomson
L Marton 1952 Electron interferometer Physical Review 85 1057-1058
L Marton, J Arol Simpson and J A Suddeth 1953 Electron beam interferometer Physical Review 90 490-491
L Marton, J Arol Simpson and J A Suddeth 1954 An electron interferometer Reviews of Scientific Instruments 25 1099-1104
G Möllenstedt and H Düker 1955 Naturwissenschaften 42 41
G Möllenstedt and H Düker 1956 Zeitschrift für Physik 145 377-397
G Möllenstedt and C Jönsson 1959 Zeitschrift für Physik 155 472-474
R G Chambers 1960 Shift of an electron interference pattern by enclosed magnetic flux Physical Review Letters 5 3-5
C Jönsson 1961 Zeitschrift für Physik 161 454-474
C Jönsson 1974 Electron diffraction at multiple slits American Journal of Physics 42 4-11
A P French and E F Taylor 1974 The pedagogically clean, fundamental experiment American Journal of Physics 42 3
P G Merli, G F Missiroli and G Pozzi 1976 On the statistical aspect of electron interference phenomena American Journal of Physics 44 306-7
A Tonomura, J Endo, T Matsuda, T Kawasaki and H Ezawa 1989 Demonstration of single-electron build-up of an interference pattern American Journal of Physics 57 117-120
H Kiesel, A Renz and F Hasselbach 2002 Observation of Hanbury Brown-Twiss anticorrelations for free electrons Nature 418 392-394
Atoms and molecules
O Carnal and J Mlynek 1991 Young's double-slit experiment with atoms: a simple atom interferometer Physical Review Letters 66 2689-2692
D W Keith, C R Ekstrom, Q A Turchette and D E Pritchard 1991 An interferometer for atoms Physical Review Letters 66 2693-2696
M W Noel and C R Stroud Jr 1995 Young's double-slit interferometry within an atom Physical Review Letters 75 1252-1255
M Arndt, O Nairz, J Vos-Andreae, C Keller, G van der Zouw and A Zeilinger 1999 Wave-particle duality of C60 molecules Nature 401 680-682
B Brezger, L Hackermüller, S Uttenthaler, J Petschinka, M Arndt and A Zeilinger 2002 Matter-wave interferometer for large molecules Physical Review Letters 88 100404
Review articles and books
G F Missiroli, G Pozzi and U Valdrè 1981 Electron interferometry and interference electron microscopy Journal of Physics E 14 649-671. This review covers early work on electron interferometry by groups in Bologna, Toulouse, Tübingen and elsewhere.
A Zeilinger, R Gähler, C G Shull, W Treimer and W Mampe 1988 Single- and double-slit diffraction of neutrons Reviews of Modern Physics 60 1067-1073
A Tonomura 1993 Electron Holography (Springer-Verlag, Berlin/New York)
H Rauch and S A Werner 2000 Neutron Interferometry: Lessons in Experimental Quantum Mechanics (Oxford Science Publications)
|
30cab633de83ce44 | Science and Technology links (December 26th 2020)
1. Researchers used a viral vector to manipulate eye cells genetically to improve the vision of human beings.
2. Seemingly independently, researchers have reported significant progress regarding the solution of the Schrödinger equation using deep learning: Puppin et al., Hermann et al.
3. The Dunning-Kruger Effect Is Probably Not Real. I am becoming quite upset with respect to the many effects in psychology that fail to be independently verified. And I’d feel better if it were only a problem in psychology.
4. Can the technology behind COVID-19 vaccines lead to other breakthroughs?
Published by
Daniel Lemire
|
5f428f14a168c43f | Randell Mills GUT - Who can do the calculations?
• I'll let you know what Dr. Mills says. Or you can just join us at The Society For Classical Physics. Sorry if I came off as a jerk, you seem to be obviously willing to take an honest look at the theory.
Furthermore, not all parts of the theory are fully fleshed out as you can see, but what it does predict it does so with extreme accuracy and within the confines of classical physics and fundamental constants. There is room to make original contributions to the theory.
I suggested the design of using liquid electrodes last year on the forum to isolate the energetic transition reactions from the solid parts of the reactor and prevent them from melting or vaporizing. This was prior to revealing any liquid fuel injection or liquid electrodes being used in the latest design revealed last week.
External Content youtu.be
• The vaporized silver provides the conductive matrix, the heat provides the kinetic energy to the reactants which are catalyst and atomic hydrogen.
The kinetic energy is what is responsible for initiating the transition reactions (specifically dipole/multipole resonant collisions destabilizing the orbitsphere causing radial acceleration and release of electric potential between electron and proton). If the plasma wasn't contained within the pressure vessel the conditions conducive to the reactions would not persist. The current is mainly to alleviate charge buildup and to provide the initial kinetic energy to the reactants.
The energy obviously comes from the transition reactions which are releasing ~100 or more eVs per event depending on which fractional state is being catalyzed. There could likely even be disproportionation occurring which is when hydrinos collide and drop to even lower energy levels. This also likely occurs within the corona of our star.
If the plasma wasn't confined somehow, it would simply dissipate.
Even in single shot open air tests three years ago the plasma persisted much longer after current ceased to flow which current theory cannot explain. In all cases there is no high field, only a maximum of 5 volts.
Why not just go on the forum and ask Dr. Mills directly?
• If the magic is in the kinetic energy and not in the driving electric current, then the hydrino reaction can spread over N numbers of silver fountains...say 100 electrode sets...an electrode array where the reaction in one electrode set can activate the reaction in many other electrode sets that are nearby the prime driver set.
• I meant according to the current mainstream physics paradigm, it is explained using classical physics (GUTCP) as I have described above.
I don't claim to be the world's authority on GUTCP but I think I got the major points mostly right. Again, Dr. Mills doesn't mind answering questions on his Society for Classical Physics forum. We interact with him on a daily basis pretty much.
Also I wasn't sure if you were being sarcastic with your prior posts, but look at what happens in the corona of the Sun. If GUTCP is right, disproportionation hydrino reactions occur on a massive scale providing the high energy photons to produce the ionized species of elements observed in the spectrum, not millions of degrees temperature as is currently assumed.
• @stefan
You asked about the relationship between GUTCP and QM. I think Mills had the same question and has a first answer and gives its derivation from p11 ff. His conclusion:
“Thus the mathematical relationship of GUTCP and QM is based on the Fourier transform of the radial function. GUTCP requires that the electron is real and physically confined to a two dimensional surface comprising source currents that match the wave equation solutions for spherical waves in two dimensions (angular) and time. The corresponding Fourier transform is a wave over all space that is a solution of the three dimensional wave equation (e.g. the Schrödinger equation). In essence, QM may be considered as a theory dealing with the Fourier transform of an electron, rather than the physical electron. By Parseval's theorem, the energies may be equivalent, but the quantum mechanical case is nonphysical – only mathematical. It may mathematically produce numbers that agree with experimental energies as eigenvalues, but the mechanisms lack internal consistency and conformity with physical laws. ”
This is quite a remarkable result.
@ Eric
Sorry for my strong wording. I think I adopted the verbally strong position of some of the people posting in this thread :-) .
To your question:
I am no expert so I am talking about my current understanding of the process of pair production and the fine structure constant: Of course the electron is not moving with lightspeed. 1/alpha is the fraction where the electron would have the velocity c and because this is not possible (because GUTCP relies on special relativity as one of its foundations) the last permitted orbit is a fraction of 1/137. Orbit 1/138 would result in an electron velocity greater than c. And in between the pair production process happens. This transition state orbitsphere is not a traditional orbit of the electron but rather a short-lived state where (in the case Driscoll describes) the photon wave (photon orbitsphere) changes to become an electron and a positron. To get an impression of how this might work I think one has to see the animations of the fields of the photon and the free electron. I think they are somewhere on BLP's page.
To your other question regarding my two links: they are linked to Mills' equations because they use the nonradiation condition to construct models for electrons. The paper from 1990 is interesting because they use a simple ad hoc nonradiation condition for the simplest case. Then they solve Maxwell's equations for their simple nonradiation condition and can show that the electron can have a stable orbit, and directly show that the spin is a direct physical consequence of their solution and not “inherent” as in QM. They are completely unrelated to Mills but basically had the same idea and could produce a small part of Mills' result. Instead of the ad hoc simplest nonradiation condition Mills took the general case, and as a model of the electron he used the 2D wave equation. Btw. this also shows that Mills is not randomly putting numbers together – because these guys got the same result as Mills, at least for the spin.
And the other paper shows that it is possible to construct not only the electron but other particles with this nonradiation condition so that they are stable – it is more or less a proof/indication that Mills' model does not violate any accepted law of nature (Maxwell, Newton) and gives stable models for atoms.
• In regards to Epimetheus's post above, I can't remember where I read it, it was either on the forum or in Brett's book, but apparently many years ago, Hermann Haus told Dr. Mills privately that he had correctly solved for the structure of the electron classically. At the time he did not wish to make "waves" so to speak through public acknowledgement.
In regards to K-Capture Eric seems to be asking specifically about the case of capture of the inner shell; I've posted a question on the other forum so we'll see what Dr. Mills says.
• Here's what Dr. Mills posted. Probably not as much information as you would have liked but you can always prod him for more detail on the forum.
Also GUTCP theorizes that excited states are due to photons expressing "effective charge" and shielding the electron to a degree from the central field of the proton. I guess if one accepts that a high energy photon can convert into an electron and positron the idea of photons in certain situations expressing effective charge isn't all that strange. I'm not sure how to relate this to K-capture but just thought I'd mention it.
Randy Mills, Today at 5:12 AM
K capture can only occur if the reaction can form a more stable nucleus. A proton cannot undergo K-capture, for example.
• Mills states as follows:
Mills has trademarked “Hydrino.” And because his issued patents claim the hydrino as an invention, BLP asserts that it owns all intellectual property rights involving hydrino research. BLP therefore forbids outside experimentalists from doing even the most basic hydrino research, which could confirm or deny hydrinos, without first signing an IP agreement. “We welcome research partners; we want to get others involved,” Mills says. “But we do need to protect our technology.”
The insulator-metal transition in hydrogen
Very high temperature shock wave methods might make metalized hydrogen obtainable.
This transition from molecular liquid to atomic liquid is called the PPT (discussed below). Leif Holmlid uses a quantum mechanics process called Rydberg blockade to produce metalized hydrogen, where a Rydberg matter substance like potassium is used as a QM template to reform the atomic structure of hydrogen into the low-orbit-based liquid metalized form.
* A phase of hydrogen Rydberg matter (RM) is formed in ultra-high vacuum by desorption of hydrogen from an alkali promoted RM emitter (Holmlid 2002 J. Phys.: Condens. Matter 14 13469). The RM phase is studied by pulsed laser-induced Coulomb explosions which is the best method for detailed studies of the RM clusters. This method gives direct information about the bonding distances in RM from the kinetic energy release in the explosions. At pressures >10^-6 mbar hydrogen, H* Rydberg atoms are released with an energy of 9.4 eV. This gives a bonding distance of 150 ± 8 pm which corresponds to a metallic phase of atomic hydrogen using the results by Chau et al (2003 Phys. Rev. Lett. 90 245501). The results indicate that a partial 3D structure is formed.
I believe that there are other theories that have been accepted by science that explain below-base hydrogen orbits; specifically metalized hydrogen. High pressure physics is directed at producing metalized hydrogen as its major goal.
All the experimental data that Mills has accumulated may very well be consistent with high temperature shock wave produced PPT hydrogen.
From Holmlid
Instead of inverted Rydberg matter, it is spin-based Rydberg matter with orbital angular momentum l = 0 for the electrons. It is shown to be both superfluid4 and superconductive (Meissner effect observed) at room temperature.6,7 The measured H–H distances are short, normally 2.3 pm.1,3,9 Several spin states with different internuclear distances exist.3 It is likely that the main process initiated by the impinging laser pulse is a transition from level s = 2 with H–H distance of 2.3 pm, to level s = 1 with theoretical distance 0.56 pm. At this distance, nuclear reactions are spontaneous and laser-induced nuclear processes are thus relatively easy to start.
• @Epimetheus
I don't understand this connection: the radial solutions are essentially Laguerre polynomials + exponential for Schrödinger's equation of hydrogen, and in the derivation you gave me he uses spherical Bessel functions as a radial function. So I can't follow this line of thought. But it is true that the Fourier transform with spherical Bessel functions for the radial part does indeed Fourier-transform into Mills' charge distribution.
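A quick numerical check of that last point is possible for the simplest (l = 0) case. The sketch below is mine, not from the thread, and it only verifies the textbook fact that the 3D Fourier transform of a uniform spherical shell of radius a reduces to the spherical Bessel function j_0(ka) = sin(ka)/(ka):

```python
# Illustrative check (not from the thread): the 3D Fourier transform of a uniform
# spherical shell of radius a, rho(r) = delta(r - a)/(4*pi*a^2), is j_0(k*a).
# After doing the angular integral analytically, FT(k) = (1/2) * int_{-1}^{1} cos(k*a*mu) d(mu).
import numpy as np
from scipy.integrate import quad

a = 1.0  # shell radius, arbitrary units

def ft_shell(k):
    val, _ = quad(lambda mu: 0.5 * np.cos(k * a * mu), -1.0, 1.0)
    return val

for k in (0.5, 2.0, 5.0):
    print(k, ft_shell(k), np.sin(k * a) / (k * a))   # numeric vs analytic j_0(k*a)
```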
• Has anyone worked through Appendix I to the point they feel comfortable with the derivations? I'm mostly OK with it (except for the discussion of the H() and G() functions). However, the conclusion uses some terms that aren't well explained. While I *think* I understand these, can anyone take a crack at providing a more intuitive justification behind the highlighted equations? Exactly what is represented by the cross-product s_n * v_n? Is omega_n the angular frequency of the emitted photon? I think s_n is the spatial frequency expressed in rad/m, and v_n is a velocity in m/sec of the current density. And radiation requires that the cross product of the two at some point on the orbitsphere is equal to the photon's wavelength. Am I understanding this correctly?
• I think that in order to understand a proof of this you should, instead of the path taken, expand the plane wave in the Fourier transform into a sum of spherical Bessel functions and spherical harmonics; the sum will cancel almost all terms but a single Bessel function and spherical harmonic that match the same quantum number of the Mills charge distribution, due to orthogonality. You will end up with the Fourier transform being:
(*) j_l(|s|r) Y_lm(theta, phi)
which is much better, because the stated equation (38) in Mills takes a convolution with all factors except the last having s. You just can't show that this expression disappears because of the property of the convolution. Now, for a specific w0, |s| has a certain magnitude for light-like wave numbers, and hence r can be chosen so that |s|r in (*) represents a zero of the spherical Bessel function, and (*) is shown to be zero for all light-like s, w.
To understand everything that is written is hard, though. In all, to motivate the nonradiation condition one only needs half a page, I think, and it could be kept much, much simpler than what's written in the book.
• To further explain and highlight: the key to a proper mathematical understanding of Mills' theory is the expansion of plane waves in various ways.
We have a photon inside the atom that is trapped. Consider the superposition of EM plane waves and assume that the wave vectors of all the plane waves are evenly distributed, e.g. they live on a sphere of constant radius. Again the theorem where you expand the plane wave in Bessel functions and spherical harmonics applies, and we get the explicit solution of the electrical potential as
~ j_0(|r|w/c)exp(i w t), r = sqrt(x^2+y^2+z^2)
j_0(x) = sin(x)/x and hence |r|w/c = 2pi for a zero
<=> |r|2 pi f / c = 2 pi
<=> |r| 1/(1/f c) = 1
<=> |r| 1 / (T c) = 1
<=> |r| / lambda = 1
<=> |r| = lambda
So the lambda of the trapped photon has to be the same as the radius, as described in option geek's post above.
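A quick numerical check of this chain (my own minimal sketch, not from the post; it only uses the stated form j_0(x) = sin(x)/x):

```python
# Minimal numerical check of the chain above (illustrative sketch, not from the post):
# j_0(x) = sin(x)/x vanishes at x = 2*pi, so |r| * omega / c = 2*pi  =>  |r| = lambda.
import numpy as np
from scipy.special import spherical_jn

print(spherical_jn(0, 2 * np.pi))     # ~1e-17, i.e. j_0(2*pi) = 0 to machine precision

c = 3.0e8                             # m/s, approximate speed of light
f = 5.0e14                            # Hz, an arbitrary optical frequency for illustration
omega = 2 * np.pi * f
r = 2 * np.pi * c / omega             # radius picked out by the j_0 zero
lam = c / f                           # wavelength of the trapped photon
print(r, lam, np.isclose(r, lam))     # r equals lambda
```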
What utter nonsense. Even Randell Mills can't patent physics, and how could he possibly forbid someone else from experimenting? Is he planning to put a copyright tag on each and every hydrino?
ETA. Next step: GE patents the electron and forbids anyone else from using 'pirate' versions.
• Read the paper that the jack booted thugs at BLP don't want you to see.
The fact that BLP tries to suppress basic scientific research only means they are not worthy of our attention. Any organization which would send a cease and desist letter to a replicator (who is not seeking to commercialize the technology) is not worthy of existing. My hope is that LENR technologies -- which produce millions of eV per reaction -- arrive on the market soon and cause BLP to lose all future funding.
• What do you want to tell us? That paper sees indications for unusual development of bright light as claimed by Mills. This is more supportive for Mills theory than the opposite. But regarding an independent validation I find the papers of world class plasma physicist like Kroesen and Conrads much more compelling:
Conrads, H, R Mills, and Th Wrubel. (2003) “Emission in the deep vacuum ultraviolet from a plasma formed by incandescently heating hydrogen gas with trace amounts of potassium carbonate.” Plasma Sources Sci Technol 12: 389–395.
Driessen, N. M., E. M. van Veldhuizen, P. Van Noorden, R. J. L. J. De Regt, and G. M. W. Kroesen. (2005) “Balmer-alpha line broadening analysis of incandescently heated hydrogen plasmas with potassium catalyst.” In XXVIIth ICPIG, Eindhoven, the Netherlands. 18-22 July.
I'm not of your opinion that the cease and desist letter has anything to say. It just tells me that after Rossi we have another guy who is totally scared to lose the race against the competitors. Mills wants to make a lot of money and he owes his private investors a huge return on investment. He also needs money to start some new companies that have other products predicted by GUTCP in their focus. And of course he wants to sue the a$$ of everyone who harmed his credibility like Wikipedia, Rathke, etc.
Being the lone wolf can make you a bit weird. In my eyes Mills is way ahead of Rossi regarding basic decent human behavior.
• Randell Mills is not a decent human being. As I said in my previous post, he is a thug. Andrea Rossi, despite his less than complete honesty and straightforwardness, has never attempted to sue those who performed replications of his technology. He never sent cease and desist letters to Parkhomov, Songsheng, Stepanov, Alan Smith, and a dozen other individuals. Why? Because Andrea Rossi realizes that attempting to prohibit, under threat of litigation, basic scientific research is absolutely repugnant. It's not simply bad, but the polar opposite of the open source movement.
Basically, he is claiming that trying to replicate a scientific phenomenon (in this case the reality of the hydrino) is something no one has the right to do unless they sign up with his company. No one has any duty or obligation whatsoever to ask his permission or sign any document with Black Light Power before performing not-for-profit research. He's basically trying to be the dictator of an entire branch of science which he has no right to be. But even if he was trying, his dictatorship is a flop. After decades of research and making huge claims and pronouncements about a dozen different variations of their technology, the best he can come up with is a giant Rube Goldberg device. Even if his figures and those of his validation team are confirmed, it will be many years before a SunCell would be robust enough to operate for many months or years in an industrial setting.
LENR has him beat and he knows it due to the very basic physics involved. His technology isn't really somewhere between nuclear and chemical. That is like saying, "the speed of me on my bicycle is somewhere between a turtle and an ICBM." And if you notice, he doesn't even speak about his beloved "hydrino hydrides" anymore. He used to brag about them. Waving tubes of multi-colored crystals he'd claim they had all sorts of amazing properties. Now they have vanished.
My hope is that Black Light Power folds in short order. I would hope the same for any company or organization that would threaten a lawsuit over a simple replication attempt. We've had enough petty, arrogant dictators on this planet -- they've been responsible for all sorts of atrocities. We definitely don't need them in science. |
fc553c6e3f088004 | Nuclear Shell Model
Nuclear Shell Model
In nuclear physics and nuclear chemistry, the nuclear shell model is a model of the atomic nucleus which uses the Pauli exclusion principle to describe the structure of the nucleus in terms of energy levels.[1] The first shell model was proposed by Dmitry Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Eugene Paul Wigner, Maria Goeppert Mayer and J. Hans D. Jensen, who shared the 1963 Nobel Prize in Physics for their contributions.
The shell model is partly analogous to the atomic shell model which describes the arrangement of electrons in an atom, in that a filled shell results in greater stability. When adding nucleons (protons or neutrons) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation, that there are certain magic numbers of nucleons (2, 8, 20, 28, 50, 82, 126) which are more tightly bound than the next higher number, is the origin of the shell model.
The shells for protons and for neutrons are independent of each other. Therefore, "magic nuclei" exist in which one nucleon type or the other is at a magic number, and "doubly magic nuclei", where both are. Due to some variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons but only 114 for protons, playing a role in the search for the so-called island of stability. Some semi-magic numbers have been found, notably Z = 40 giving nuclear shell filling for the various elements; 16 may also be a magic number.[2]
In order to get these numbers, the nuclear shell model starts from an average potential with a shape something between the square well and the harmonic oscillator. To this potential, a spin orbit term is added. Even so, the total perturbation does not coincide with experiment, and an empirical spin orbit coupling must be added with at least two or three different values of its coupling constant, depending on the nuclei being studied.
The empirical proton and neutron shell gaps, numerically obtained from observed binding energies.[3] Distinct shell gaps are shown at the labeled magic numbers.
Nevertheless, the magic numbers of nucleons, as well as other properties, can be arrived at by approximating the model with a three-dimensional harmonic oscillator plus a spin-orbit interaction. A more realistic but also complicated potential is known as Woods-Saxon potential.
Modified harmonic oscillator model
Consider a three-dimensional harmonic oscillator. This would give, for example, in the first three levels ("l" is the angular momentum quantum number)
level n    l     ml    ms
0          0      0    ±1/2
1          1     +1    ±1/2
                  0    ±1/2
                 -1    ±1/2
2          0      0    ±1/2
           2     +2    ±1/2
                 +1    ±1/2
                  0    ±1/2
                 -1    ±1/2
                 -2    ±1/2
We can imagine ourselves building a nucleus by adding protons and neutrons. These will always fill the lowest available level. Thus the first two protons fill level zero, the next six protons fill level one, and so on. As with electrons in the periodic table, protons in the outermost shell will be relatively loosely bound to the nucleus if there are only few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will have a higher binding energy than other nuclei with a similar total number of protons. All this is true for neutrons as well.
This means that the magic numbers are expected to be those in which all occupied shells are full. We see that for the first two numbers we get 2 (level 0 full) and 8 (levels 0 and 1 full), in accord with experiment. However the full set of magic numbers does not turn out correctly. These can be computed as follows:
In a three-dimensional harmonic oscillator the total degeneracy at level n is (n + 1)(n + 2)/2.
Due to the spin, the degeneracy is doubled and is (n + 1)(n + 2).
Thus the magic numbers would be (k + 1)(k + 2)(k + 3)/3 for all integer k. This gives the following magic numbers: 2, 8, 20, 40, 70, 112, ..., which agree with experiment only in the first three entries. These numbers are twice the tetrahedral numbers (1, 4, 10, 20, 35, 56, ...) from the Pascal Triangle.
In particular, the first six shells are:
• level 0: 2 states (l = 0) = 2.
• level 1: 6 states (l = 1) = 6.
• level 2: 2 states (l = 0) + 10 states (l = 2) = 12.
• level 3: 6 states (l = 1) + 14 states (l = 3) = 20.
• level 4: 2 states (l = 0) + 10 states (l = 2) + 18 states (l = 4) = 30.
• level 5: 6 states (l = 1) + 14 states (l = 3) + 22 states (l = 5) = 42.
where for every l there are 2l+1 different values of ml and 2 values of ms, giving a total of 4l+2 states for every specific level.
These numbers are twice the values of triangular numbers from the Pascal Triangle: 1, 3, 6, 10, 15, 21, ....
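The counting above is easy to reproduce in a few lines; this is an illustrative sketch (not part of the original article):

```python
# Per-level state counts of the 3D harmonic oscillator with spin, and the
# naive magic numbers obtained by cumulative summation.
# Level n contains l = n, n-2, ..., 1 or 0; each l contributes 2*(2l + 1) states.
from itertools import accumulate

def level_size(n):
    return sum(2 * (2 * l + 1) for l in range(n, -1, -2))

sizes = [level_size(n) for n in range(6)]
print(sizes)                    # [2, 6, 12, 20, 30, 42]  (equal to (n+1)(n+2))
print(list(accumulate(sizes)))  # naive magic numbers: [2, 8, 20, 40, 70, 112]
```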
Including a spin-orbit interaction
We next include a spin-orbit interaction. First we have to describe the system by the quantum numbers j, mj and parity instead of l, ml and ms, as in the hydrogen-like atom. Since every even level includes only even values of l, it includes only states of even (positive) parity. Similarly, every odd level includes only states of odd (negative) parity. Thus we can ignore parity in counting states. The first six shells, described by the new quantum numbers, are
• level 0 (n = 0): 2 states (j = 1/2). Even parity.
• level 1 (n = 1): 2 states (j = 1/2) + 4 states (j = 3/2) = 6. Odd parity.
• level 2 (n = 2): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) = 12. Even parity.
• level 3 (n = 3): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) + 8 states (j = 7/2) = 20. Odd parity.
• level 4 (n = 4): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) + 8 states (j = 7/2) + 10 states (j = 9/2) = 30. Even parity.
• level 5 (n = 5): 2 states (j = 1/2) + 4 states (j = 3/2) + 6 states (j = 5/2) + 8 states (j = 7/2) + 10 states (j = 9/2) + 12 states (j = 11/2) = 42. Odd parity.
where for every j there are 2j+1 different states from different values of mj.
Due to the spin-orbit interaction the energies of states of the same level but with different j will no longer be identical. This is because in the original quantum numbers, when the spin s is parallel to the orbital angular momentum l, the interaction energy is positive; and in this case j = l + s = l + 1/2. When s is anti-parallel to l (i.e. aligned oppositely), the interaction energy is negative, and in this case j = l - s = l - 1/2. Furthermore, the strength of the interaction is roughly proportional to l.
For example, consider the states at level 4:
• The 10 states with j = 9/2 come from l = 4 and s parallel to l. Thus they have a positive spin-orbit interaction energy.
• The 8 states with j = 7/2 came from l = 4 and s anti-parallel to l. Thus they have a negative spin-orbit interaction energy.
• The 6 states with j = 5/2 came from l = 2 and s parallel to l. Thus they have a positive spin-orbit interaction energy. However its magnitude is half compared to the states with j = 9/2.
• The 4 states with j = 3/2 came from l = 2 and s anti-parallel to l. Thus they have a negative spin-orbit interaction energy. However its magnitude is half compared to the states with j = 7/2.
• The 2 states with j = 1/2 came from l = 0 and thus have zero spin-orbit interaction energy.
Changing the profile of the potential
The harmonic oscillator potential grows infinitely as the distance from the center r goes to infinity. A more realistic potential, such as the Woods-Saxon potential, would approach a constant at this limit. One main consequence is that the average radius of nucleons' orbits would be larger in a realistic potential; this leads to a reduced centrifugal term, h-bar^2 l(l + 1)/(2m r^2), in the Laplace operator of the Hamiltonian. Another main difference is that orbits with high average radii, such as those with high n or high l, will have a lower energy than in a harmonic oscillator potential. Both effects lead to a reduction in the energy levels of high l orbits.
Predicted magic numbers
Low-lying energy levels in a single-particle shell model with an oscillator potential (with a small negative l² term) without spin-orbit (left) and with spin-orbit (right) interaction. The number to the right of a level indicates its degeneracy, (2j+1). The boxed integers indicate the magic numbers.
Together with the spin-orbit interaction, and for appropriate magnitudes of both effects, one is led to the following qualitative picture: At all levels, the highest j states have their energies shifted downwards, especially for high n (where the highest j is high). This is both due to the negative spin-orbit interaction energy and to the reduction in energy resulting from deforming the potential to a more realistic one. The second-to-highest j states, on the contrary, have their energy shifted up by the first effect and down by the second effect, leading to a small overall shift. The shifts in the energy of the highest j states can thus bring the energy of states of one level to be closer to the energy of states of a lower level. The "shells" of the shell model are then no longer identical to the levels denoted by n, and the magic numbers are changed.
We may then suppose that the highest j states for n = 3 have an intermediate energy between the average energies of n = 2 and n = 3, and suppose that the highest j states for larger n (at least up to n = 7) have an energy closer to the average energy of n-1. Then we get the following shells (see the figure)
• 1st shell: 2 states (n = 0, j = 1/2).
• 2nd shell: 6 states (n = 1, j = 1/2 or 3/2).
• 3rd shell: 12 states (n = 2, j = 1/2, 3/2 or 5/2).
• 4th shell: 8 states (n = 3, j = 7/2).
• 5th shell: 22 states (n = 3, j = 1/2, 3/2 or 5/2; n = 4, j = 9/2).
• 6th shell: 32 states (n = 4, j = 1/2, 3/2, 5/2 or 7/2; n = 5, j = 11/2).
• 7th shell: 44 states (n = 5, j = 1/2, 3/2, 5/2, 7/2 or 9/2; n = 6, j = 13/2).
• 8th shell: 58 states (n = 6, j = 1/2, 3/2, 5/2, 7/2, 9/2 or 11/2; n = 7, j = 15/2).
and so on.
Note that the numbers of states after the 4th shell are doubled triangular numbers plus two. Spin-orbit coupling causes so-called 'intruder levels' to drop down from the next higher shell into the structure of the previous shell. The sizes of the intruders are such that the resulting shell sizes are themselves increased to the very next higher doubled triangular numbers from those of the harmonic oscillator. For example, 1f2p has 20 nucleons, and spin-orbit coupling adds 1g9/2 (10 nucleons) leading to a new shell with 30 nucleons. 1g2d3s has 30 nucleons, and addition of intruder 1h11/2 (12 nucleons) yields a new shell size of 42, and so on.
The magic numbers are then
• 2
• 8=2+6
• 20=2+6+12
• 28=2+6+12+8
• 50=2+6+12+8+22
• 82=2+6+12+8+22+32
• 126=2+6+12+8+22+32+44
• 184=2+6+12+8+22+32+44+58
and so on. This gives all the observed magic numbers, and also predicts a new one (the so-called island of stability) at the value of 184 (for protons, the magic number 126 has not been observed yet, and more complicated theoretical considerations predict the magic number to be 114 instead).
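The same cumulative-sum bookkeeping reproduces this list; a short illustrative sketch (not from the article) using the shell sizes quoted above:

```python
# Cumulative sums of the spin-orbit-corrected shell sizes give the observed magic numbers.
from itertools import accumulate

shell_sizes = [2, 6, 12, 8, 22, 32, 44, 58]    # shell sizes listed in the text
print(list(accumulate(shell_sizes)))           # [2, 8, 20, 28, 50, 82, 126, 184]
```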
Another way to predict magic (and semi-magic) numbers is by laying out the idealized filling order (with spin-orbit splitting but energy levels not overlapping). For consistency s is split into j = 1/2 and j = -1/2 components with 2 and 0 members respectively. Taking leftmost and rightmost total counts within sequences marked bounded by / here gives the magic and semi-magic numbers.
• s(2,0)/p(4,2) > 2,2/6,8, so (semi)magic numbers 2,2/6,8
• d(6,4):s(2,0)/f(8,6):p(4,2) > 14,18:20,20/28,34:38,40, so 14,20/28,40
• g(10,8):d(6,4):s(2,0)/h(12,10):f(8,6):p(4,2) > 50,58,64,68,70,70/82,92,100,106,110,112, so 50,70/82,112
• i(14,12):g(10,8):d(6,4):s(2,0)/j(16,14):h(12,10):f(8,6):p(4,2) > 126,138,148,156,162,166,168,168/184,198,210,220,228,234,238,240, so 126,168/184,240
The rightmost predicted magic numbers of each pair within the quartets bisected by / are double tetrahedral numbers from the Pascal Triangle: 2, 8, 20, 40, 70, 112, 168, 240 are 2x 1, 4, 10, 20, 35, 56, 84, 120, ..., and the leftmost members of the pairs differ from the rightmost by double triangular numbers: 2 - 2 = 0, 8 - 6 = 2, 20 - 14 = 6, 40 - 28 = 12, 70 - 50 = 20, 112 - 82 = 30, 168 - 126 = 42, 240 - 184 = 56, where 0, 2, 6, 12, 20, 30, 42, 56, ... are 2 × 0, 1, 3, 6, 10, 15, 21, 28, ... .
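The number patterns quoted in this paragraph can also be checked in a couple of lines (again an illustrative sketch, not from the article):

```python
# Check the pattern above: the rightmost numbers of each pair are doubled tetrahedral
# numbers, and the leftmost members differ from them by doubled triangular numbers.
tetra = [k * (k + 1) * (k + 2) // 6 for k in range(1, 9)]   # 1, 4, 10, 20, 35, 56, 84, 120
tri = [k * (k + 1) // 2 for k in range(0, 8)]               # 0, 1, 3, 6, 10, 15, 21, 28

rightmost = [2 * t for t in tetra]                          # 2, 8, 20, 40, 70, 112, 168, 240
leftmost = [r - 2 * t for r, t in zip(rightmost, tri)]      # 2, 6, 14, 28, 50, 82, 126, 184
print(rightmost)
print(leftmost)
```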
Other properties of nuclei
This model also predicts or explains with some success other properties of nuclei, in particular spin and parity of nuclei ground states, and to some extent their excited states as well. Take 17O (oxygen-17) as an example: Its nucleus has eight protons filling the three first proton "shells", eight neutrons filling the three first neutron "shells", and one extra neutron. All protons in a complete proton shell have zero total angular momentum, since their angular momenta cancel each other. The same is true for neutrons. All protons in the same level (n) have the same parity (either +1 or -1), and since the parity of a pair of particles is the product of their parities, an even number of protons from the same level (n) will have +1 parity. Thus the total angular momentum of the eight protons and the first eight neutrons is zero, and their total parity is +1. This means that the spin (i.e. angular momentum) of the nucleus, as well as its parity, are fully determined by that of the ninth neutron. This one is in the first (i.e. lowest energy) state of the 4th shell, which is a d-shell (l = 2), and since the parity of a single-particle state is (-1)^l, this gives the nucleus an overall parity of +1. This 4th d-shell has a j = 5/2, thus the nucleus of 17O is expected to have positive parity and total angular momentum 5/2, which indeed it has.
The rules for the ordering of the nucleus shells are similar to Hund's Rules of the atomic shells; however, unlike its use in atomic physics, the completion of a shell is not signified by reaching the next n, so the shell model cannot accurately predict the order of excited nuclei states, though it is very successful in predicting the ground states. The order of the first few terms is listed as follows: 1s, 1p3/2, 1p1/2, 1d5/2, 2s, 1d3/2, ... For further clarification on the notation refer to the article on the Russell-Saunders term symbol.
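As a concrete illustration of the oxygen-17 argument, here is a small sketch (mine, not from the article) that places the ninth neutron using the single-particle ordering quoted above and reads off the predicted ground-state spin and parity:

```python
# Place the ninth neutron of 17O using the single-particle ordering quoted above.
# Each orbital (label, l, j) holds 2j + 1 nucleons; filled orbitals couple to zero,
# so the ground-state spin-parity is set by the orbital of the last odd nucleon.
from fractions import Fraction as F

ordering = [            # first few orbitals, in the order given in the text
    ("1s1/2", 0, F(1, 2)),
    ("1p3/2", 1, F(3, 2)),
    ("1p1/2", 1, F(1, 2)),
    ("1d5/2", 2, F(5, 2)),
    ("2s1/2", 0, F(1, 2)),
    ("1d3/2", 2, F(3, 2)),
]

def last_nucleon_orbital(n_nucleons):
    filled = 0
    for label, l, j in ordering:
        capacity = int(2 * j + 1)
        if filled + capacity >= n_nucleons:
            return label, l, j
        filled += capacity
    raise ValueError("ordering list too short")

label, l, j = last_nucleon_orbital(9)          # ninth neutron of 17O
parity = "+" if (-1) ** l == 1 else "-"
print(label, f"J^pi = {j}{parity}")            # 1d5/2  J^pi = 5/2+
```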
The electric dipole of a nucleus is always zero, because its ground state has a definite parity, so its matter density (|ψ|², where ψ is the wavefunction) is always invariant under parity. This is usually the situation with the atomic electric dipole as well.
Including residual interactions
Residual interactions among valence nucleons are included by diagonalizing an effective Hamiltonian in a valence space outside an inert core. As indicated, only single-particle states lying in the valence space are active in the basis used.
For nuclei having two or more valence nucleons (i.e. nucleons outside a closed shell) a residual two-body interaction must be added. This residual term comes from the part of the inter-nucleon interaction not included in the approximative average potential. Through this inclusion different shell configurations are mixed and the energy degeneracy of states corresponding to the same configuration is broken.[4][5]
These residual interactions are incorporated through shell model calculations in a truncated model space (or valence space). This space is spanned by a basis of many-particle states where only single-particle states in the model space are active. The Schrödinger equation is solved in this basis, using an effective Hamiltonian specifically suited for the model space. This Hamiltonian is different from the one of free nucleons as it among other things has to compensate for excluded configurations.[5]
One can do away with the average potential approximation entirely by extending the model space to the previously inert core and treat all single-particle states up to the model space truncation as active. This forms the basis of the no-core shell model, which is an ab initio method. It is necessary to include a three-body interaction in such calculations to achieve agreement with experiments.[6]
Collective rotation and the deformed potential
In 1953, the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was nonspherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to the large number of valence particles--and this intractability was even greater in the 1950s, when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is the one now known as the Nilsson model. It is essentially the harmonic oscillator model described in this article, but with anisotropy added, so that the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z. Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier -ω·J, known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level then produces states whose expected angular momentum along the cranking axis is the desired value.
Related models
Igal Talmi developed a method to obtain the information from experimental data and use it to calculate and predict energies which have not been measured. This method has been successfully used by many nuclear physicists and has led to deeper understanding of nuclear structure. The theory which gives a good description of these properties was developed. This description turned out to furnish the shell model basis of the elegant and successful interacting boson model.
A model derived from the nuclear shell model is the alpha particle model developed by Henry Margenau, Edward Teller, J. K. Pering, T. H. Skyrme, also sometimes called the Skyrme model.[7][8] Note, however, that the Skyrme model is usually taken to be a model of the nucleon itself, as a "cloud" of mesons (pions), rather than as a model of the nucleus as a "cloud" of alpha particles.
References
1. ^ "Shell Model of Nucleus". HyperPhysics.
2. ^ Ozawa, A.; Kobayashi, T.; Suzuki, T.; Yoshida, K.; Tanihata, I. (2000). "New Magic Number, N=16, near the Neutron Drip Line". Physical Review Letters. 84 (24): 5493-5. Bibcode:2000PhRvL..84.5493O. doi:10.1103/PhysRevLett.84.5493. PMID 10990977. (this refers to the nuclear drip line)
3. ^ Wang, Meng; Audi, G.; Kondev, F. G.; Huang, W.J.; Naimi, S.; Xu, Xing (March 2017). "The AME2016 atomic mass evaluation (II). Tables, graphs and references". Chinese Physics C. 41 (3): 030003. doi:10.1088/1674-1137/41/3/030003. ISSN 1674-1137.
4. ^ Caurier, E.; Martínez-Pinedo, G.; Nowacki, F.; Poves, A.; Zuker, A. P. (2005). "The shell model as a unified view of nuclear structure". Reviews of Modern Physics. 77 (2): 427-488. arXiv:nucl-th/0402046. Bibcode:2005RvMP...77..427C. doi:10.1103/RevModPhys.77.427.
5. ^ a b Coraggio, L.; Covello, A.; Gargano, A.; Itaco, N.; Kuo, T.T.S. (2009). "Shell-model calculations and realistic effective interactions". Progress in Particle and Nuclear Physics. 62 (1): 135-182. arXiv:0809.2144. Bibcode:2009PrPNP..62..135C. doi:10.1016/j.ppnp.2008.06.001.
6. ^ Barrett, B. R.; Navrátil, P.; Vary, J. P. (2013). "Ab initio no core shell model". Progress in Particle and Nuclear Physics. 69: 131-181. arXiv:0902.3510. Bibcode:2013PrPNP..69..131B. doi:10.1016/j.ppnp.2012.10.003.
7. ^ Skyrme, T. H. R. (February 7, 1961). "A Non-Linear Field Theory". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 260 (1300): 127-138. Bibcode:1961RSPSA.260..127S. doi:10.1098/rspa.1961.0018.
8. ^ Skyrme, T. H. R. (March 1962). "A unified field theory of mesons and baryons". Nuclear Physics. 31: 556-569. Bibcode:1962NucPh..31..556S. doi:10.1016/0029-5582(62)90775-7.
Further reading
• Talmi, Igal; de-Shalit, A. (1963). Nuclear Shell Theory. Academic Press. ISBN 978-0-486-43933-4.
• Talmi, Igal (1993). Simple Models of Complex Nuclei: The Shell Model and the Interacting Boson Model. Harwood Academic Publishers. ISBN 978-3-7186-0551-4.
|
7c3dc74a3262d6c9 | Monthly Archives: April 2018
Calculation of the 4D complex number tau.
It is about high time for a new post. Some time ago I proposed looking at those old classical equations like the heat and wave equation and comparing them to the Schrödinger equation. But I spilled some food on my notes and threw them away; anyway, everybody can look it up for themselves: what is often referred to as the Schrödinger equation looks much more like the heat equation and not like the classical wave equation…
Why this is I don’t know.
This post is a continuation from the 26 Feb post that I wrote after viewing a video from Gerard ‘t Hooft. At the end of the 26 Feb post I showed you the numerical values for the logarithm of the 4D number tau. This tau in any higher dimensional number system (or a differential algebra in case you precious snowflake can only handle the complex plane and the quaternions) is always important to find.
Informally said, the number tau is the logarithm of the very first imaginary component that has a determinant of 1. For example on the complex plane we have only 1 imaginary component usually denoted as i. Complex numbers can also be written as 2 by 2 matrices and as such the matrix representation of i has a determinant of 1.
And it is a well-known result that log i = i pi/2; implicitly the physics professors use that every day of every year. Anytime they talk about a phase shift they use this in the context of multiplication in the complex plane by some number from the unit circle.
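For the ordinary complex plane this statement is easy to check numerically; a tiny sketch (mine, just to illustrate it):

```python
# log(i) = i*pi/2 on the principal branch, and exp(i*pi/2) = i.
import cmath

print(cmath.log(1j))                 # approximately 1.5707963267948966j, i.e. i*pi/2
print(cmath.exp(1j * cmath.pi / 2))  # approximately i (up to floating-point error)
```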
In this post, for the very first time after being extremely hesitant in using dimensions that are not a prime number, we go to 4D real space. Remark that 4 is not a prime number because it has a prime factorization of 2 times 2.
Why is that making me hesitant?
That is simple to explain: if you can find the number i from the complex plane inside my freshly crafted 4D complex number system, it could very well be that the whole thing breaks down to only the complex plane. In that case you have made a fake generalization of the 2D complex numbers.
So I have always been very hesitant but I have overcome this hesitation a little bit in the last weeks because it is almost impossible using the complex plane only to calculate the number tau in the four dimensional complex space…
Maybe in a future post we can look a bit deeper into this danger; if the Cauchy-Riemann equations are also satisfied in four real variables, that would bring a bit more courage to further study of the 4D complex number system.
After the introduction blah blah words I can say the 4D tau looks very beautiful. That alone brings some peace of mind. I avoided all mathematical rigor, no ant fucking, but just used numerical results and turned them into analytical stuff.
That is justified by the fact that Gerard is a physics professor and, as we know from experience, math rigor is not very high on the list of priorities over there…
That is forgiven of course, because for the human brain putting mathematical rigor in first place is the perfect way of making no progress at all. In other sciences math should be used as a tool coming from a toolbox of reliable math tools.
This post is seven pictures long, all are 550 by 775 pixels in size except for the last one that I had to make a little bit longer because otherwise you could not see that cute baby tau in the 4D complex space.
Here we go:
Just take your time and look at this ultra cute number tau.
It is very very hard to stay inside the complex plane, of course the use of 4 by 4 matrices is also forbidden, and still find this result…
I am still hesitant about using dimensions that are not prime numbers, but this is a first result that is not bad.
End of this post. |
da9bc82b3b76275c | Numerical Solution to Friedmann Equation
Given an expanding universe with associated parameters that dictate how it expands, it is natural to ask about the age of the universe and its ultimate destiny. Such calculations are unbelievably complex even for the world's largest supercomputers unless we impose all kinds of approximations and symmetries. The cosmological principle - the assumption that the universe is homogeneous and isotropic at a large enough scale - needs to be employed to make the calculation tenable. Here we will take a shot at doing that.

In order to discuss the possible destiny of the universe as well as its age, we need to solve the Friedmann differential equation that we derived in the last section. While it looks like we could write it as a first-order differential equation by just taking a square root, that method has problems. The main problem is that we must allow dR/dt to be either positive or negative - corresponding to either expansion or contraction of the universe - and that information is lost in taking the root. Rather than do that, we will rearrange the equation by the following steps:
The terms Ω_m and Ω_Λ are called density parameters. Specifically, Ω_m is the matter density and Ω_Λ, which comes from the cosmological constant, is referred to as the vacuum density today. Current estimates of the two densities are that the matter density is 0.31-0.33 and the vacuum density is 0.70-0.72. The numbers add up to almost exactly 1.0. Lately data has been tending toward a sum of 1.02. You can see in the interactive graphic below that the current best value of 1.02 implies that the universe is never going to contract. The data indicates that the universe is a one-time event. We are quite fortunate to be a part of it!

The equation as derived above is a second-order differential equation, not too unlike ones we've solved before such as the quantum harmonic oscillator. To solve this equation numerically, we will feed it into GeoGebra as a pair of first-order equations, just as we did for the Schrödinger equation. The process in GeoGebra is:
1. Define the necessary constants.
2. Define the two first-order equations.
3. Solve the pair of equations while providing boundary conditions.
The Code
You know how to define constants. There are only the two densities to be defined. The two equations are the first-order pair obtained from the second-order equation above: one defines v = dR/dt as a new variable, and the other gives dv/dt in terms of R and the density parameters.

One important consideration is that we must solve the equations forward and backward from today to predict the future state of the universe and its past - all the way to its beginning. We did much the same thing for the QHO, where we solved for the wave function to the right of zero and to the left of zero separately. For the Friedmann equation we solve it starting at the present, since those are conditions we know. I used R = 1 and dR/dt = 1, which are correct by definition.

You will notice a slider; it controls how far into the past the solution goes. The equations will diverge when R = 0, so if the curve disappears (indicating such divergence), make the slider value larger until it reappears. If this parameter has a value of 0.34, for instance, it means that the universe actually had its beginning at 1 - 0.34 = 0.66 times the assumed current age. Therefore when astronomers mention the age of the universe it is contingent on the matter density and vacuum density - and in fact on the form of the term in the Friedmann equation that accounts for the vacuum density. Don't forget the assumptions of homogeneity and isotropy. Furthermore, one must certainly ask the question: in whose reference frame is the age being measured? That conversation can get long and complex, so I will not go into it.

Some interesting scenarios to try out in the model below are the following pairs of values for the densities:
[1, 0] which represents the present total density but with no vacuum density contribution
[0.6, 0] which represents 60% of the present total density and no vacuum density
[0, 1] which represents no matter density and only vacuum density
[0.95, 0.05] which represents 95% of the current matter density and a small vacuum density
[0.30, 0.70] which is approximately the measured actual values
[x, y] whatever combination you wish to play around with in a what-if scenario
A GeoGebra model of the Friedmann equation is below. Try out some of the parameter choices listed above.
Numerical Solution of Friedmann Equation
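For readers who prefer code to the GeoGebra applet, here is a minimal sketch of the same procedure in Python. It is an illustration, not the page's own worked model: it assumes the common dimensionless matter-plus-vacuum form of the acceleration equation, d²R/dt² = -Ω_m/(2R²) + Ω_Λ·R, with time measured in units of the Hubble time and with R = 1, dR/dt = 1 today; all variable names are mine.

```python
# Hedged sketch (not from the original page): integrate the Friedmann acceleration
# equation as a pair of first-order ODEs, forward and backward from today.
# Assumed dimensionless form, with time in Hubble-time units:
#   dR/dt = v
#   dv/dt = -omega_m / (2 R^2) + omega_v * R
import numpy as np
from scipy.integrate import solve_ivp

omega_m, omega_v = 0.30, 0.70          # matter and vacuum density parameters

def friedmann(t, y):
    R, v = y
    return [v, -omega_m / (2.0 * R**2) + omega_v * R]

y0 = [1.0, 1.0]                        # R = 1 and dR/dt = 1 today, by definition

future = solve_ivp(friedmann, (0.0, 3.0), y0, max_step=0.01)    # forward in time
past = solve_ivp(friedmann, (0.0, -0.9), y0, max_step=0.01)     # backward in time

print("R three Hubble times from now:", future.y[0][-1])
print("R 0.9 Hubble times ago:", past.y[0][-1])
# Pushing the backward integration closer to the point where R -> 0 locates the
# Big Bang, and hence the model's age of the universe in Hubble times.
```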
Where is Inflation?
If you're paying attention, you probably noticed that Friedmann's equations don't include or model the concept of inflation. Inflation does not come from laws of physics, but is instead a product of the need for an explanation. It bothers physicists and cosmologists that the universe has a near uniform temperature throughout - as if those parts had once been in contact. Thus inflation was hypothesized. Another option which dives right into the present discussions among leading theoreticians is that perhaps there was no inflation. Perhaps instead, we have an incorrect sense of locality. We see evidence that nature on a small scale is nonlocal. Particles fill all space and in certain cases things seem to happen at rates that exceed light speed and instead are instantaneous action at a distance. Do an internet search on nonlocality or entanglement to read more. In any case, perhaps nonlocality does not only exist on the smallest scales but also on the largest. If true, we would need to more carefully understand the concept of spatial separation. No question that such understanding would lead to another scientific revolution perhaps in excess of the one that came with quantum theory and relativity at the onset of the 20th century. |
d4f1b49e958850ff | Read Full Transcript
Euvie: In one of our previous episodes with you we spoke about how to assess and prevent existential risks. We got a lot of positive feedback on that episode, people really loved it. I know you’ve spent a lot of time thinking about the subject since then and you’ve refined your position. I wanted to ask you, what is your current thinking on existential risks and how to deal with them?
Daniel: Alright, great. Very happy to be back with both of you. It’s valuable for us to go up a level when we’re thinking [00:02:30] about risks, existential or just any kind of catastrophic risk. It’s not just a weird fetish topic and it’s not just purely, “There is some risk and we want to survive.” The deeper topic – Future Thinkers, you guys are looking here at the future of economy, the future of sense making, the future of technology, the future of healthcare, the future of ethics, the future of lots of things. In the presence of exponential technology that makes possible changing things that were never possible [00:03:00] to change before, what is the guiding basis for how we utilize that technological power well and make good choices in the presence of that?
We’ve always had a sense that human nature is a certain thing, fixed genetically at whatever level it’s fixed. Then we’re looking at good behaviour within that framework. As soon as we’re at the level of genetic engineering humans as real, at least, thought experiment could be a real possible technology, we could work to genetically engineer all people to be sociopathic hyper [00:03:30] computers who didn’t feel badly about winning at win lose games at all and were optimized for it. We could work to trick our brains into AIs that could take us even further in that direction of being able to win lose games. We could engineer aggression out of ourselves completely.
Then we start to say, “Wow, those are really different realities. What do we want?” Well, we want to win the game. Why do we want to win? There’s an assumed [00:04:00] game that we’re playing against whomever, right, that some ingroup is playing against some outgroup and whether it’s US versus Russia versus China or it’s whatever ingroup outgroup dynamic, we say, “What happens when you keep playing that ingroup outgroup game forever?” Those games have always caused harm. They have a narrow set of metrics that define a win and everything outside of those metrics is externality.
When we’re playing a win lose game, whether it’s person on person or team on team, or whether the team is a tribe or a country [00:04:30] or a company or a race or a whatever it is, right. The ingroup competing against an outgroup for some definition of a win, we’re directly seeking to harm each other, to cause the lose to each other and we’re also indirectly causing harm to the commons, which is we’re competing for the extraction of scarce resource, we are externalizing cost, pollution, whatever it is to the commons. If we’re competing militarily, there’s harm that comes from warfare. We’re polluting the information ecology through disinformation to be able [00:05:00] to maintain the strategic competitive advantage of some information we have.
Rivalrous games necessarily cause harm. If you keep increasing your harm causing capacity with technology – and technology that makes better technology that makes better technology, exponentially – exponential harm causing eventually taps out. Eventually, it exceeds the capacity of the playing field to handle and the level of harm is no longer [00:05:30] viable. It's an important thing to think about that when we think about the risks of weaponized drones or [inaudible [0:05:37] biowarfare or any of the really dreadful things that are very new technologies that we could never do before. They're not really different in fundamental type than the things that have always sucked.
When we first came out with catapults versus not having those, or cannons or whatever it is. They're just a lot more powerful versions of the things that have always sucked. It's important to get that [00:06:00] we have treated murdering other people en masse in this thing called war as a reasonable way to deal with difference or to get ahead for a long time, for the whole history of the thing we call civilization. We unrenewably take plants out of the ground, in terms of agriculture, in terms of cutting trees or whatever, in ways that lead to desertification, for thousands of years.
All the early civilizations that don’t exist anymore don’t exist anymore because they actually led to their own self-termination in really critical ways. It’s not like war and environmental destruction etcetera [00:06:30] is a new topic. If the Mayans fell or the Byzantine or the Mesopotamian or the Roman, even the Roman Empire fell. It wasn’t everything. It was a lot but it wasn’t everything. When you have the ingroup outgroup dynamics keep getting larger – tribe to groups of tribes to villages, [inaudible [0:06:50], kingdom, nation state, global economic trading block – so as to be able to compete with a larger team that keeps having the incentives to do those with [00:07:00] larger weaponry extraction tech, externalization tech, narrative tech, information and disinformation tech, you get to a point where you have – like we have today – a completely globally interconnected supply chain and globally interconnected civilization dynamics because of scale, where the collapse ends up being really a collapse of everything.
The level of warfare possible can actually make an uninhabitable biosphere, can actually not just be catastrophic for a local people – which previous wars always were – but catastrophic for all people. [00:07:30] Get that rivalrous games have always caused the behaviours of humans that have sucked but exponential suck is existential – that's an important way of thinking of it. That means that we have to be different than we have ever been in the history of the thing we call civilization to simply not extinct ourselves. The way we have always been has been a smaller scale of the same thing that at this scale is now extinctionary. That's a big deal because it means that the solutions we're looking for [00:08:00] do not look like previous best practices, because those were practices at how to win at win lose games, where winning at win lose games is now the omni lose lose generator.
It is now the thing that we can't keep doing. We don't like to think this deeply about things, we like to take whatever things have been, like the good best practices, and figure out how to iterate a tiny bit and run with those things. Except, the entire toolset of best practices we have is actually why we're at the brink of a gazillion different X-risk scenarios that are a result of using [00:08:30] those toolsets. Words like capitalism and science and technology and democracy are our favourite words because they give us a lot of rad shit, they did. They also created a heap of problems, and the problems are now catastrophic in scale, where the solutions need to be new stuff. People freak out because if you say something other than capitalism, they think that you mean communism and you want to take their stuff and have the state force everyone to do shitty jobs.
If you say something other than democracy, again, it's assumed that it's going to be like some kind of fascist terrible thing and [00:09:00] something other than science means a regressive religion. No. I want to be very clear that I'm not proposing systems that sucked worse than the current systems we have. I'm proposing a deeper level of insights and a deeper level of what we would actually call innovation and novelty than have been on the table so far. For instance, democracy. When Churchill said, "Democracy is the single worst form of governance ever created, save for all the other forms," he was saying something very, very deep, which is democracy's the best form [00:09:30] of government we've ever created and it's fucking terrible, but all the other ones are even worse, because the idea of government or governance is this really tricky thing where we're trying to get lots of different humans to cooperate or agree or make choices together and we just suck at that.
He was admitting something very important and true. Jefferson said similar things, which is we were able to get a lot of people to care about each other, to see through each other's perspective, to make agreements if it's a [00:10:00] tiny number of people. This was tribes, the history of tribes. That's why they capped out at a very small size: the size at which you could still care about everybody, know about what was going on for everybody, factor their insight, share the same base reality, where if you hurt anyone you were directly hurting someone that you lived with and loved and cared about… As soon as you start getting to a size where you can hurt people without knowing it anonymously, through some supply chain action or voting on something or whatever, it starts to become a totally different reality.
Anonymous people. We’re willing to give up some freedoms for [00:10:30] people we also depend on and care about and have this close bonding with. As soon as we get to larger than tribe dynamics, we have had a real hard time doing that in any way that doesn’t disenfranchise lots of people. We have democracy, it says, “Okay, there’s no way everybody’s going to agree on anything but we still have to be able to move forward and decide if we make a road or not, or go to war or not, or whatever it is. Let’s come up with a proposition of something to do and let’s at least see that more people like it than don’t like it. At least it seems to represent the majority [00:11:00] of thinking. That seems like a reasonable idea.”
Whether we have a representative or not, or it's 67 percent majority or 51 percent or a voting currency, they're all different versions but basically of the same idea. Let me explain something about how bad the idea actually is, the catastrophic problems that it creates, so democracy will stop being this really wonderful word. It doesn't mean we don't give it its due for the beautiful place that it served in history. It's just that it is a place that is in the rear-view mirror: if it continues in the forefront we [00:11:30] actually can't navigate, it's not an adequate tool for the types of issues we need to navigate. Again, remember, I'm not going to propose any other system ever proposed – I'm going to propose things that don't even sound like governance but that sound like a different method of individual and collective sense making and choice making.
Democracy’s a process where somebody or somebodies make a proposition of something, “We’re going to build the bridge this way or go to war,” whatever it is. They make a proposition to benefit something that they’re aware of that they care about. But they’re not aware of everything and they don’t care [00:12:00] about everything that’s connected equally, so other people realize, “Hey, the thing that you want to do is going to fuck stuff up that I care about. That bridge that you want to build so that you can get across the river without driving all the way around is going to kill all the owls in this area and mess up some fisheries. I care about that, the owls and the fisheries.” The other people are like, “Fuck you and the environmental owl, fishery stuff. We need to get to work.”
What you have now is a basis where, if the proposition goes through, it will benefit some things and harm other things. [00:12:30] If it doesn’t go through, the thing that would have been benefited now isn’t benefitted but the thing that would have been harmed now isn’t harmed. You will get an ingroup of people that care about the one set more who then band against the outgroup of the people who care about the other thing more. This will always drive polarization. Eventually, the polarization becomes radicalization. We didn’t even try, in the process, to figure out what a good proposition that might do a better job of meeting [00:13:00] everybody’s needs was. We didn’t even try and do a context map of what are all the interconnected issues here, what would a good design that everybody might like look like, can we even try to find a synergistic satisfier.
That’s not even part of the conversation. Maybe, rather than make the bridge there, we could make the bridge at just a slightly different area and there’s no owls there. Maybe we can move the owls. Maybe we can use pontoon boats. Maybe we don’t even need to make a bridge because all the transportation back and forth is for one company and we can just move the company’s headquarters. Maybe… The sense making [00:13:30] to inform the choice making is not a part of the governance process. If I’m making choices blind or mostly blind based on a tiny part of the optics, then other people who have other optics are like, “Wait, that’s not a good choice,” and right now we both know that if one eye shows something but my other eye shows something else, I want to pay attention to both eyes. I don’t want them in a game theoretic relationship with each other, they do parallax and give me depth perception.
My eyes and my ears don’t want to make each other the same. They actually really want to do different [00:14:00] functions but they also want to pay attention to each other. If I hear something and I think that it was over there and I’m going to go away from it but my eyes tell me it’s actually somewhere else, I want to pay attention to all of my sense making. My brain is doing this really interesting process of taking all of this different sensory information, putting it together, and trying to pay attention to all of it to make a choice that’s informed by all that sense making.
We have never been able to think about governance processes like this, where we start with, "How would a group of people that are inter-affecting each other, that are inter-affecting [00:14:30] within a particular context as sense making nodes be able to share their sense making in a way that could create parallax? It could actually synthesize into a picture that could create design criteria of what is actually meaningful, good, desired etcetera by everybody, that could work towards progressively better synergistic satisfiers that are based less on a theory of trade-offs, which will always create some benefit and some harm, which will lead to some ingroups fighting some outgroups, which will lead to increased seeking [00:15:00] of power on both sides, which will eventually turn into a catastrophic self-terminating scenario.
When we look at it, we see that this type of scenario has always led to left right polarization that eventually becomes radicalization that ends in war to stabilize. You can't keep having war with exponential tech where nonstate actors have existential level tech. You just can't keep doing that. The thing that we've always done we can't ever do anymore. That's a big deal. We also are able to think about how brains put the information from [00:15:30] eyes and ears together. How do a bunch of neurons come together in neural networks to make sense in a way that none of them individually do? How do 50 trillion cells as autonomous entities operate in your body in a way that is both good for them as individuals and good for the whole simultaneously, where they are neither operating for their own benefit at the expense of the ones that they depend on nor are they damaging themselves for the other?
They are really in an optimized symbiosis because the system depends on them and they depend on the system. [00:16:00] That means that they depend on each other and the system etcetera. Can we start to study these things in a different way that gives us novel insights into how to be able to have higher level organization, collective intelligence, collective adaptive capacity, collective sense making, actuation capacity? The answer is yes, we can and we can see that that's not the thing that we've called democracy. The thing that we've called democracy is some process of a proposition based on some very limited sense making with some majority that is always going to [00:16:30] lead to polarization, that's going to lead to the group that doesn't get it feeling disenfranchised and then turning them against the group as a whole, and then the warfare that occurs between them of whatever kind, whether info warfare or actual warfare.
The reason I’m bringing this up is because democracy was great compared to just a terrible fascist dictator. It definitely is not adequate to the complexity of the problems we have to solve, nor can we continue to handle the kinds of polarization [00:17:00] and problematic dynamics that it inexorably creates. The same is true with capitalism, the same is true with the thing that we call science and tech, which is probably the hardest one, I’ll get to that one last. The thing that we call capitalism, we all know the positive story there that it incents people being self-responsible and being industrious and seeking to bring better products and services to the market at a better value, and those who do will get ahead and they should get ahead because they’re going to be good stewards of resource because they got the resource by bringing [00:17:30] products and services at a value that people cared about etcetera.
We know that story. There is some truth to it like there was some truth to the democracy story and it served evolutionary relevance and it most certainly can't take us to the next set of steps. For instance, the thing that we call money and ownership, you can actually think of it, related to governance, as a type of choice making process. We don't think deeply enough about the nature of what these types of structures are. If I have a bunch [00:18:00] of money it means I have concentrated choice making capacity, because I can now make a choice that I can extend through a bunch of employees. I can have a bunch of them working aligned with my sense making on my behalf to increase my actuator capacity, or I can get physical resources to be able to build something that extends my actuator capacity.
We recognize a system that ends up determining who has resource – and then the resource ends up being a way that some people are actually directing [00:18:30] the choice making of other people – as a choice making system. We say, "Well, it's actually a shitty choice making system, because the idea that those who have the money are better choice makers for the whole is just silly. It's just really silly. Even if someone did bring good products or services to market effectively, that doesn't mean the kids who inherit it do, and it doesn't mean that I didn't make money by extraction rather than production, debasing the environment, and it doesn't mean that I didn't externalize.
Maybe I figured out how to externalize more harm and cost to the commons [00:19:00] than somebody else did, which drove my margins up so I got more of it. And it didn't mean that I didn't do war profiteering, right? We start to realize, okay, that's just actually a shitty choice making system. We stop even thinking about what is the future of economics and the future of governance, because the words are so loaded that we just can't help ourselves but boot up bad concepts when we think of those words. We start thinking about how us humans are making choices, individually and in groups, based on information that we have [00:19:30] towards some things that we value and hope to have happen. How do we get better on value frameworks, what we really seek to benefit, how do we get better on our sense making processes and how do we get better on our choice making processes?
What is the future of individual and collective value frameworks, sense making, choice making processes? That ends up being what obsoletes the things we call economics and governance now. We come back to capitalism for a minute. Okay, not only is it just not a great choice [00:20:00] making system and not only does it end up actually being a pretty pathological choice making system because it's easier to extract than it is to actually produce and it's easier to rip off what someone else worked really hard on than it is to work really hard on making the thing and it's easier to externalize cost to the commons than not etcetera. We say, "Okay, it's not just that, it's that we've got 70 trillion dollars worth of economic activity, give or take, trading hands," depending on what you consider a dollar, "where pretty much all of that externalizes [00:20:30] harm at some point along the supply chain."
Meaning, the physical goods economy requires mining on one side and turns it into trash on the other side. The linear materials economy is destructive to the environment, causes pollution along the whole thing etcetera. It requires marketing to drive it, which is polluting people's minds with a bunch of manufactured discontent etcetera. It drives disinformation, with some companies trying to disinform other ones to maintain strategic advantage, and it's [00:21:00] running on dollars that are supported by militaries. You just think about the whole thing, you're like, "Wow, okay, even the things that I think are good, like just the movement of what those dollars are, is externalizing harm in an exponential way that is moving towards catastrophic tipping points."
Alright, that can’t work. Then even worse, you say, alright, let’s take a look at Facebook for a minute because Facebook, since the last time we talked, between Russia and Cambridge Analytica and Tristan etcetera, there’s a lot more [00:21:30] understanding of the problem of platforms that have a lot of personal data but also have their own agency with respect to you. We say, okay, Facebook wants to maximize time on site because they make money by selling marketing. The more users are on site more often, the more they can charge for the marketing so they figure out, they use very complex AI, analytics, split testing etcetera, to see exactly what will make online stickiest for you possible. [00:22:00] You happen to feel really left out when you see pictures of your friends doing whit without you and that makes you click and look and it makes you feel really bad, but they optimize the shit out of that in your feed because it’s what you actually stay on.
Other people it’s the hyper normal stimuli of the girls that are all airbrushed and photoshopped that makes them stay on, or whatever it is. It’ll always be a hyper normal stimuli. If you think about the way that McDonalds wanted to make the most addictive shit or Coca Cola or Philips Morris, if I am on the supply side of [00:22:30] supply and demand, I want to manufacture artificial demand because I want to maximize lifetime value of a customer, lifetime revenue. If I can make you addicted to my stuff, that’s the most profitable thing. When you look around at a society that has ubiquitous rampant addiction of almost all kinds everywhere, you realize that’s just good for business, that’s good for JDP.
We see that Facebook, just by its own nature, doing what it's supposed to be doing – public company, fiduciary responsibility to the shareholders, maximize profit, blah, blah, blah – is going to maximize your time on site, and [00:23:00] maximizing your time on site is going to work better by hyper normal stimuli that make you addicted than by things that make you more sovereign, where you actually realize that your life is better when you get the fuck off Facebook and go hang out with real people. It has to drive addiction and discontent and whatever else. It's really good at a bunch of crafty tricks for how to do that.
We see then that corresponding with that is the rise of bulimia and anorexia and all kinds of body dysmorphia. We see that corresponding with it is depression and increased suicide rates. [00:23:30] We see that the hyper normal stimuli of polarizing news grabs people more than non-polarizing news because fights in an evolutionary environment are really important things to pay attention to, they're a kind of hyper normal stimuli. The most polarizing names are going to show up on YouTube videos and get the most traction. You notice whenever there's a debate and it's supposed to be a friendly debate, you look at the name of the YouTube video that gets the most shares and it's, "So and so eviscerates so and so."
Mike: Yeah, “Destroys this person.”
Daniel: [00:24:00] Yeah. That hyper normal stimuli grabs us in the [inaudible [0:24:03] in the worst way possible, makes the worst versions of us but it fucking maximizes time on site. Now, you look at it and you say, okay, the net result of Facebook is increased radicalization in all directions, increased broken sense making, decreased sense making, increased echo chamber and antipathy at a level that will increase both the probability of civil wars and world wars and everything else, [00:24:30] increased teen depression and suicide rates and shopping addictions. You're like, "Wow, this is fucking evil. This is a really terrible thing." Does Facebook want to do that? We could almost make up a conspiracy theory that Facebook has optimized how to make the world worse as fast as possible.
No, Facebook doesn’t want to do that. Facebook is just trying to make a dollar and justify to itself that it should continue to exist. It just happens to be that how it makes a dollar has the externality of all that stuff in the same way that when Exon was making a dollar or when the military industrial complex [00:25:00] or a sick care industry that makes money when people are sick and not when they’re healthy. Fuck, okay, look at that whole thing. Capitalism is inexorably bound to perverse incentive everywhere. At an even deeper structural level, if we tried to fix that, we’d say, okay, we’re competing to own stuff and the moment you own something I no longer have access to it, even if you’re not using it, even if you hardly [00:25:30] ever use that whatever it is, drill that you don’t even remember where you put it.
I don’t have access to it. As a result, there’s a scarce amount of this stuff. The stuff that is more scarce, we make more valuable. Then you own it, I don’t have access to it but because you want to be able to provide for your future or whatever and there’s uncertainty, you want to own all the shit that you can and pull it out of circulation, put it in safes and security boxes. That takes a lot of resource from the earth where everything you own is just bothering [00:26:00] me, because it’s being removed from my possible access. We are in a rivalrous relationship with each other because the nature of the good itself is rivalrous because of the ownership dynamic, right, and the valuation on scarce things in particular. Everybody can say, “It makes sense why we value scarce things,” because if there’s enough for everybody it doesn’t have the same type of advantages if there’s not enough for everyone to make sense, except the problem, of course, is if we make decisions based on an economic calculus, [00:26:30] which we do the CFO looks at the numbers and says, “No, this quarter we have to do this,” and they’re only paying attention to the numbers they’re paying attention to.
If air isn’t worth anything, because there’s enough of it for everybody, and I can’t increase my strategic competitive advantage over you by hoarding more air, then air is worth nothing. Even though we all die without it, literally it is valueless to us and our economic calculus. We will pollute the shit out of it and burn the shit out of it, fill it full of CO2, pull the O2 out of it in the oxidizing [00:27:00] of hydrocarbons, because we don’t factor it because it’s not scarce and doesn’t provide competitive advantage, whereas the gold on the other hand, the gold we will fight wars over, we’ll destroy environments and cut down trees to mine it out that were actually putting the oxygen in the air that we don’t give a shit about so that we can put the gold in a safety deposit box that we don’t ever look at and doesn’t get to do anything other than be noted on my balance sheet as some increased competitive advantage that I have over you.
[00:27:30] Because if there's not enough for everybody to have it, then I get some competitive advantage by having it and you don't. The value is proportional to that, not the real physical asset of what that metal could do, which is why the metal is not actually being used in electronics, it's sitting in gold bars and [inaudible [0:27:42] and wherever else. This is all insane and, of course, we see that the moment that we make abundant things worthless – even if they're the foundation of life – and we make scarce things worth a lot – even if they're meaningless – then it creates a basis to [00:28:00] artificially manufacture scarcity and avoid abundance everywhere. If I have something I supply and I make it abundant, then all of a sudden it's worth nothing, which is why if I make some software that I could give to the whole world for free once I've made it – I just have to make enough money to have made it – no, no, no. I'm going to patent-protect it and come sue you if you figure out how to pirate it so that I can keep charging you, even though I have no unit cost.
I’ve actually solved the scarcity problem and I’m going to artificially manufacture [00:28:30] scarcity to keep driving my balance sheet equation. The Kimberly diamond mines burn and crush their diamonds because we thought diamonds were scarce, we made the price high, then we realized they weren’t scarce and the people who had the price high didn’t want that to be known etcetera, etcetera. If we want a world of abundance for everybody, we can actually technologically engineer the scarcity out, create abundance everywhere is a feasible thing to do and we can talk more about that later. You can’t have an incentive on scarcity and engineer it out at the same time. Now, there’s scarce stuff, we’re competing for it.
You own it, [00:29:00] which means you possess it and you remove my capacity for access. I want to own it faster than you own it. Now, we get into a tragedy of the commons and a multi-polar trap thing. Say I go cut down a tree in the forest. I don't need that many trees right now but you're in this other tribe and you're going to go cut down some of the trees. I would like there to be a forest rather than a total clear-cut environment, because forests are beautiful and I grew up with the forests and I like the animals that are there. I know that if I don't [00:29:30] cut down the trees and I leave the trees, there still won't be a forest because you're going to cut down the trees. Since you're going to cut down the trees, and the other tribe knows that, they know that I'm going to too. We say, "Fuck it, I've got to cut down all the trees as fast as I can because, if there's going to be no trees anyways, I might as well get them rather than them get them, because if they have increased economic power over me they're going to use those pieces of wood against me in war or in an economic competition."
Now, we all increase the rate of the destruction of the forest as fast as we fucking can, even though we'd all like a forest, because we're caught in a multi-polar prisoner's dilemma [00:30:00] with no good way out. The tragedy of the commons is a multi-polar trap. The arms race where we say, "You know what, we should just not make weaponized AI, we should just not do that." Everyone in the world thinks the idea of facial recognition weaponized AI drones is a world that would be better not to live in. Nobody wants to live in that world. It doesn't matter how rich or powerful you are, you're not fucking safe from a bunch of bad scenarios in that world. [00:30:30] But we're all making them, everybody's advancing the fucking tech. Why are we doing that?
Why don’t we just make a treaty not to do it? Because if we make a treaty, we secretly know that the other guy’s going to defect on it secretly. If he gets the tech first, it’s a world war and he’ll beat us. He knows that we’re going to defect on it. Either we make the treaty and we all defect on it secretly while agreeing to it publicly while trying to spy on the other guys, while trying to disinform their spies about what we’re actually doing. Or, we just don’t [00:31:00] even fucking agree to it. We move forward towards a world that increases the probability of extincting all of every day, in this scenario, these multi-polar traps. We’ll come back to this, I was on capitalism but I’m going to come back to this multi-polar trap so please remind to do it.
One of the things we have to solve at a generator function level is multi-polar traps as a category, meaning all instances of them [00:31:30] categorically. Because a multi-polar trap basically means a situation where the local optimum for an individual agent, if they pursue it, leads to the global minimum for the whole. If I don't make the nukes or the AI or take the tree down, I'll get fucked right now. If I do it, I don't get fucked right now, but as we all do it, we all get worse fucked later. You've made a scenario where the [00:32:00] incentive of the agent short-term locally is directly against the long-term global wellbeing. That is the world at large right now and capitalism is inexorably connected to that, because it drives these rivalrous dynamics and rivalrous dynamics are the basis of multi-polar traps.
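[Editor's note: to make the structure of that trap concrete, here is a minimal sketch of the payoff logic being described, written as a tiny two-tribe game. The payoff numbers, variable names, and the Python framing are illustrative assumptions, not anything from the conversation; they only show how a locally dominant move lands everyone at the collectively worst outcome.]

```python
# Minimal sketch of a two-tribe "cut the forest" multi-polar trap.
# All payoff numbers are made-up illustrations, not from the transcript.

CONSERVE, CUT = "conserve", "cut"

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    (CONSERVE, CONSERVE): 3,  # forest survives, both benefit long term
    (CUT, CONSERVE): 4,       # I get the wood AND most of the forest remains
    (CONSERVE, CUT): 0,       # I get nothing and the forest is gone anyway
    (CUT, CUT): 1,            # short-term wood, long-term collapse for both
}

def best_response(their_move):
    """Each tribe picks the move that maximizes its own payoff,
    holding the other tribe's move fixed."""
    return max((CONSERVE, CUT), key=lambda my_move: payoffs[(my_move, their_move)])

# Cutting is the dominant strategy regardless of what the other tribe does...
assert best_response(CONSERVE) == CUT and best_response(CUT) == CUT

# ...so both tribes cut, landing on the outcome that is worse for both
# than mutual conservation (1 < 3): the local optimum drives the global minimum.
print(payoffs[(CUT, CUT)], "<", payoffs[(CONSERVE, CONSERVE)])
```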
I could just sound depressing, except there are actually solutions to all of these things. There is a basis for how we deal with physical stuff that is not you owning some scarce thing [00:32:30] and removing my access. We all know examples of it, we know that when you go to the grocery store and you use a shopping cart, you don’t bring your own shopping cart that you own which would be a major pain in the ass and I don’t do that. You have access to a cart where there’s enough carts that during peak hours, the busiest hours, there’s enough carts for everybody and enough leftover for repairs. Your access to the cart does not decrease my access to carts. I’m not upset that you got a cart. We’re not in any competition over carts. Because I only need enough [00:33:00] for there to be enough during peak time – which is maybe 200 people – I only need 200 carts, 220 carts, even though that store might service 10,000 people a month.
Think about 200 carts versus 10,000 carts, how much less resource it is from the environment, how much more efficient it is. You start saying, “Alright, let’s look at other places where this thing could happen.” We say, “Well, we’ve owned cars as a good and when you own a car I no longer have access to it.” Then you’re mostly [00:33:30] just going to leave it sitting and hardly ever use it. You’ll use it now and again, but it’s going to spend 95 percent of its life just sitting places. As a result of that, there’s a lot of fucking cars to not provide that much transportation. That’s a lot of metals taken out of the earth and a lot of plastics and a lot of actual environmental costs to be able to make those, to have most of them never in use.
You start to say, "Okay, we look at car sharing like Uber and Lyft and whatever and we start moving there from a possession of a good [00:34:00] to an access of a service." That's pretty cool. We still have to pay for it, it's still kind of shitty because there's not enough of it to give me access to go everywhere, but it's pretty easy to imagine that that thing takes over. Then you use something like blockchain to disintermediate the central company that's pulling the profits out of it. Now, it's cheaper access for everybody and puts more resource back into the quality of transportation, and then it becomes self-driving cars, so it doesn't actually even have that cost, it's just a self-maintaining dynamic. You say, "[00:34:30] Okay, now it takes a tiny fraction of the metals coming out of the earth to provide higher quality transportation units for everyone, where you having access to transportation as a commonwealth service does not decrease my access to transportation as a commonwealth service.
But when you use transportation to go to school and learn stuff, or go to a makers studio and make stuff, or go to a music studio and make stuff, you’re going to make stuff that also becomes part of a commonwealth, enriches the commons that I have access to. We move from a rivalrous [00:35:00] ownership of scarce goods to an anti-rivalrous access to shared commonwealth resources. We went from rivalrous, not just to non-rivalrous meaning uncoupled – and in rivalrous we’re anti coupled, your wellbeing is directly against mine – but now to anti-rivalrous, which is a coupled good where, as you have access to more resources that make you more generative, what you generate enriches the commons that I have access to, so I am engendered, I am incented [00:35:30] to want to give you the maximum to support you having the maximum access to generative resources.
I’m not explaining all of what the future of the economic system looks like right now, but to just start giving a sense. When we think about the problems of capitalism, there have been problems associated with it forever but the scale of the problems is just more catastrophic now. I’m also sharing examples of ways that we can start shifting some of the things that we couldn’t shift previously, some of the things that neither Marx nor Smith [00:36:00] had available to them. This is very interesting. We have to shift off these systems and we can at the same time. This is, to me, a very interesting development insight when I look at biology in particular, is that if we look at the 40 weeks of a baby in utero – we’ve talked about this before but I’ll bring it up in this context – it couldn’t be born much earlier. It would be premi and without an ICU or whatever it would die.
It also couldn’t stay much longer, it would get too big and never be able to be born, kill the mum and kill itself. It comes out in a pretty [00:36:30] narrow window, where it both for the first time can – it actually now has the capacities – and has to, it can’t stay any longer. It’s interesting, it’s got 40 weeks on one growth path. It’s growing the whole time, there’s a growth curve. It’s not going to stay on that growth curve forever. If we tried to forecast its future and just continue the progression of 40 weeks, around 50 weeks it kills itself and the mum. That’s not the thing, it goes through this discreet non-linear phase that if I had never seen it before I’d have no idea how to predict. [00:37:00] Out of the birth canal, umbilical cord cut, respiration to the lungs, stuff coming in through the mouth rather than the belly button. Everything’s different and it does it in a way that’s unprecedented the whole 40 weeks that it’s existed previously.
If I tried to plot the curve of what has been, it is not that. It does it both when it has to and can. If we look at any type of development phase – a chicken developing inside of an egg, if it tried to come out earlier it's still goo, if it tried to stay in later it starves. If we look at a caterpillar to [00:37:30] butterfly through chrysalis, same thing. If it tried to just keep eating it would eat itself to extinction, if it tried to go into the chrysalis earlier it doesn't have enough resources to make a butterfly, it would die and the chrysalis is partial goo. This is really interesting that when we look at discrete non-linear phase shifts, where there's one phase – the caterpillar's getting bigger and bigger and bigger, eating everything in its environment and if I just forecast it keeps being that, I forecast it eats all the environment and then dies, except that's not what happens.
It does this really different thing that is not [00:38:00] an extension of the previous curve. When we try to take our capitalism curve, our nationalism curve, our science and tech curve and keep extending them, it’s just fucking silly and it’s why we come up with silly stuff like it [inaudible [0:38:11] into infinity with the singularity. On one side, if we take all the things that seem like they’re good then we [inaudible [0:38:17] into a singular. If we take the things that look bad, then it just goes into self-extinction. It’s neither of those curves, because shit is getting exponentially more powerful, which means better and worse at the same time. That means it’s neither of those curves, that means [00:38:30] that this phase is just coming to an end, it’s destabilizing.
The things that are getting exponentially worse will end the phase before the things that are getting exponentially better will change the nature of those things. That means that we're actually going to get something altogether different, if it is other than extinction. It's very interesting that we're at the place where these types of dynamics both have to change right now – because we have catastrophic-level tech that we never had, which makes the same things that always sucked [00:39:00] unsurvivable – and can change, because we actually have the level of insights that make possible things that are different at a deep enough axiomatic level. It's like when you've got a bunch of cells competing against each other and then there's this metamorphosis where now they're inside of a slime mould and they're part of a multicellular organism where they're participating together as part of a shared identity.
That’s a really deep level of reorganization. People going from that shift as separate, rivalrous, competitive agents [00:39:30] modelling themselves as apex predators trying to compete with each other to be the most apex predator and predate the environment. We modelled ourselves as apex predators, right, that was an easy thing to do. We look around at nature and we’re more like lions than gazelles. We look a lot like chimpanzees, they’re kind of badass hunter apex predator, they coordinate etcetera. We’re like, “That’s cool, we’ll do the apex predator thing.” We both want to see where we can get in the dominance hierarchies, we get to add the prestige hierarchy but how we do [00:40:00] within our tribe than our tribe as a whole, our species as a whole relative to the other species is in this apex position.
There’s a reason why we cannot think that way anymore. Again, we thought this way forever and we cannot think this way anymore. When I say we thought this way forever, I don’t mean every indigenous tribe thought this way, because they didn’t. They had a web of life and were merely a strand in types of thoughts. The ones that thought this way became more apex predators and killed those other people. It was effective to try and do this apex predator thing up until now. Now, it also destroys everything. [00:40:30] Again, the thing that has been adaptive is now anti-adaptive, which is why the thoughts that have made us win are the thoughts that make us extinct now. If I take an apex predator, how its ability to be a predator, its ability to kill other things and compete with the other predators for who’s most badass is pretty fixed and it’s pretty symmetrically bound with its environment.
Lions can’t get better at killing faster than gazelles get better at running. [00:41:00] They coevolve where the lions slowly get a little bit better at some things and the gazelles also get better at other things. If the lions just rapidly got way better, they’d kill all the gazelles and then debase their own capacity to keep living and then they’d be extinct. If great whites could make mile long drift nets and just take all the fucking fish out of the ocean and reproduce at an exponential rate, they would have already self-induced their extinction but they can’t. All the years, the great whites never got a drift net but we have drift nets and we have nuclear weapons and we have D9s and we have [00:41:30] the ability to technologically extend our predatory capacity and to do so in a way that is exponential and that makes us completely asymmetric with the environment that we depend on.
Again, I come back to the lion and I say, alright, the most badass lion, the most badass gorilla is not like 10x more badass than the next most badass gorilla. It's like, marginally better. It can marginally win at a fight and only for a pretty short [00:42:00] period of time before the next guy takes him. Then you look at a Putin or a Trump and you say, "How much military capacity does that one person have to bear if they wanted to, or economic capacity compared to, say, a homeless guy?" You look at the spread and you're like, "Oh, this is a very different distribution of power than any other species has." Other species did not have a million x power dynamics within the same species or billion x. Freakin' tremendous. Or that much more power [00:42:30] relative to their environment.
If the lions could get technologically more advanced in their predation faster than the gazelles could adapt, it would debase the stability of the entire ecosystem. The thing to realize is if a cancer cell starts replicating in a way that's good for it, it's actually getting more sugar and replicating faster inside the body, if it keeps doing that it kills its host and kills itself, it is ultimately suicidal. Its own short-term success is suicidal. Viruses that kill people too quickly don't propagate for very long because they kill their host and they don't get a chance to propagate. The [00:43:00] viruses that are less lethal end up being the ones that get selected for over a longer period of time, because they get a chance to propagate. If there was a species that was so good at hunting that it killed everything in its environment, then it would go extinct.
It’s not the most competitive advantage that makes it through, it’s self-stabilizing ecosystems that make it through. This is such a way more complete understanding of evolution, which is individuals within a species don’t make it through because they wouldn’t have survived without the whole species. Species don’t even make it through [00:43:30] because they wouldn’t survive without other species. Whole evolutionary self-stabilizing niches make it through. That’s fucking important, right?
Mike: Yeah, this whole idea of survival of the fittest is challenged with this concept because at no point is survival of the fittest taking the whole system into account going forward infinitely. It has to go through that phase shift.
Daniel: Yeah, survival of the fittest was something that had a local truth but was not the [00:44:00] only global phenomena that was operating, because there was also a tremendous amount of cooperation that was happening. Cooperation within members of a species with each other and between species and inter-dependence on each other. Again, the idea of competition is hyper normal stimuli, it was an early hyper normal stimuli hijack like sugar and porn and airbrushed pictures and likes on Facebook. In an evolutionary environment, fights standout even though they’re not mostly what’s happening. Mostly, if I am in a forest [00:44:30] there’s a gazillion interactions happening every second of [inaudible [0:44:32] soil bacteria having a relationship with each other and gas exchange between me and the plants, that’s just boring but it’s almost everything.
Then I see a couple lions fighting and I'm like, "Shit, that's really interesting, survival of the fittest." There is this hyper normal stimuli that made us actually mis-emphasize what was happening as a part of the phenomena – it was not all of the phenomena – mis-emphasize it. There's also this thing that, as we're moving forward right now, the way [00:45:00] we have been applying that thinking, which is that some individual agent or some ingroups – countries, companies, races, whatever – some ingroups can be more fit to survive than others through better militaries, or better economic extraction tech, or better info and disinfo and narrative tech. That has always been true.
I’m not criticizing that that was always true and even necessary. If one tribe killed another tribe [00:45:30] and their life got better because now the other tribe wasn’t competing for pigs with them and now they got all the kids and the got all the stone tools that their tribe had made and whatever, they’re like, “Shit, I realize that this killing other tribes thing is actually a pretty good evolutionary strategy.” Now, all the other tribes have to build militaries or die by default. The win lose game becomes obligate. One, the win lose game worked, it actually worked if you were good at it. Two, it was obligate, which is if you didn’t do it, you got killed [00:46:00] by Genghis Khan or Alexander the Great or whoever the fuck it was.
When we look at cultures that did not focus on militaries but focused on the arts and humanities and education and healthcare etcetera, they outcompeted the other cultures in terms of quality of life but that wasn’t where the thing actually got decided. They all got murdered. The really effective murdering cultures combined and combined and made it through and that’s us today. Yet, the tools of murdering and the tools of environmental extraction are going up and up until [00:46:30] we’re at a level where the playing field just cannot handle that game anymore. You can’t keep extracting more stuff from the environment when you’ve already got to peak resource, when you’ve got the biodiversity issues and species extinction issues etcetera you can’t keep polluting an environment when you’ve got dead zones in the ocean from nitrogen runoff and CO2 levels in the air etcetera getting to the point of cataclysm.
You can’t keep doing increasing military tech following exponential tech curves, where [00:47:00] then non-state actors can have fully catastrophic level tech and you can’t even monitor it, you just can’t keep playing that game. This is the just is that the thing that has always defined the game is that it’s always been a rivalrous game theoretic environment, and the rivalrous game theoretic environment, if it can, produce tech that keeps increasing will always self-terminate at a point and we just happen to be in the eminence of that point. This is the first generator function of X risk. Now, [00:47:30] we take this all the way back to the beginning of the conversation – I obviously got longwinded.
At the beginning of the conversation we said, "Why are we focusing on risk?" If we're focused on, "How do we design a civilization that is actually good, that's beautiful, that's desirable?" Those are hard terms and we'll get to that in a minute, that starts to get to some of the inadequacy of the thing we call science right now and its incommensurability with ethics, we'll get to that. What does a beautiful civilization look like? [00:48:00] The first thing we can say easily is that it doesn't self-terminate. If it self-terminates, we can mostly all agree that's actually not a desirable thing. If it is inexorably self-terminating, if it is structurally self-terminating – not just one little accident that we can solve but overdetermined through many vectors because of underlying generator functions – that's not a good civilization design.
The first design criteria of an effective civilization is that it's not self-terminating. Then we say, "[00:48:30] What are the things that cause self-termination?" What we find is that even though there are a gazillion different ways that it can express, ways that it can actually happen, it's from very deep underlying dynamics that, if we understand those and we solve them at the dynamics level, we fix all of them. The first one is this topic we've been talking about and we've talked about previously, which is that rivalrous games multiplied by exponential technology self-terminate, because rivalrous games cause [00:49:00] harm with power, and more power ends up being more harm until it's more than the playing field can handle. We got that. Exponential tech, whether we're looking at a scenario of everything getting fucked up by AI or by bio warfare or by nanotech stuff or by so many different types of scenarios, those are all the same types of choices humans have been making just with those exponential powers added.
Given that we cannot put those technologies away, [00:49:30] we cannot get the world to stop making them, much as we often wish we could. We either figure out how to move from a rivalrous to an anti-rivalrous environment that is developing and deploying those technologies, or we self-terminate. This is the first design criteria: we have to create rigorously anti-rivalrous environments. It doesn't end up being all of it. I'll do two generator functions, that's one. That is [00:50:00] what we see in terms of all of the either exponential tech risks or war risks or economic collapse leading to failed state scenario risks, they all come from things like that. All of the environmental biosphere collapse stuff is also related to tech getting bigger, we're fishing out more fish as we're putting out more CO2.
It also relates to that but it's a slightly different thing that we look at here, which is whether we're talking about CO2 [00:50:30] in the air or mercury in the air or the water, or micro-plastics in the water or a continent of plastic in the ocean, or nitrogen effluent in the river deltas. Those are all toxicity dynamics, those are all basically stuff that has accumulated somewhere that it shouldn't have, we call that pollution. It's not that we need to solve the dead zones issue or the CO2 issue one at a time; we have to solve all of those categorically, so that we're not creating accumulation dynamics. [00:51:00] On the other side of that same coin is depletion dynamics: cutting down all the old growth forests, fishing out all of the fish, species extinction, biodiversity loss, peak nitrogen, peak oil etcetera. Those are all where we are using something in a way that depletes it.
Then it gets turned into [inaudible [0:51:21] pollution on the other side where it accumulates somewhere. We can define toxicity formally as depletion or accumulation because of an open loop in a [00:51:30] network diagram. If the loop was closed, the things wouldn't accumulate, they would go back to the source of where we would get something from, so it wouldn't have to deplete. We notice that when we see any kind of natural system – we'll go to a natural system, whether it's a coral reef or a forest or whatever – when we go to a forest there is not trash. The body of something dies, it's compost. There's faeces, it's compost. Something gets cut and bleeds, it processes. It doesn't matter what it is. Anything you can think about that is part of that environment [00:52:00] from an evolutionary sense, the environment's evolved to be able to process it.
There’s also no unrenewable use of anything. Anything that is utilized is utilized in a closed loop fashion. One of the things we see in complex systems, in natural systems, is comprehensive loop closure. One of the things we notice about the human design systems is open loops everywhere. The materials economy itself, learning materials economy that takes version stuff and turns it into trash after using it for a very short period of time is a [00:52:30] macro-open loop. There’s micro-open loops everywhere. We have to categorically solve that. We have to basically close all of those loops. Another way of saying what this generator function of issues is is that the stuff that nature makes is what we call complex, it’s self-organizing, self-creating, self-organizing, self-repairing.
The stuff we make, designed tools, is complicated. It's rad, the computer we're talking on is rad. If [00:53:00] it got broken, it would not self-repair and it didn't self-organize, it was made, by design, from the outside. We can think about it as: complex stuff comes about through evolution, complicated stuff comes about through design. Two different types of creative processes with fundamentally different characteristics. Complex stuff has comprehensive loop closure everywhere, because it couldn't externalize something and still be selected for adaptivity. The adaptiveness factors everything. Whereas, [00:53:30] if I'm building something I might make it for one reason or two or three reasons, but it actually affects a lot of other stuff, and the other stuff wasn't what I was trying to optimize for, so there end up being more externalities.
Even this computer I’m talking to you on right now was not optimized – it was optimized for a bunch of things, so it’s really cool. The fact that it’s got a backlit screen that is 2D that’s at a fixed focal length from my eyes where I’m getting macular degeneration from spending too many hours on it was one of the things that… My eye health was not one of the things it was built to try and focus [00:54:00] on. Or, the fact that it’s fucking up my posture ergonomically by me looking down at the screen, it wasn’t one of the things it tried to focus on. It was a gazillion other things. What happened to the environment and the supply chain process of getting the metals to make it or the making of this computer affected a gazillion things that were not part of its design criteria, which means them making of it required externalizing a bunch of harm, i.e. a bunch of open loops where it affected stuff that was not internalized to the process.
We can see that if a forest burns, it repairs itself. [00:54:30] If a house burns, it does not repair itself. If my computer gets broken, it doesn't fix itself, but if I cut myself, it heals. There's this fascinating difference. The reason we're bringing this up is to say for something to be anti-fragile it has to be complex. Complexity is the defining origin of anti-fragility. Complicated things are all ultimately fragile, more or less fragile. If we have a situation where complicated systems [00:55:00] subsume their complex substrate and continue to grow – basically, we're converting the complexity of the natural world into the complicated built world, and it's continuing to grow – it will eventually collapse, because the complicated world is fragile. If you notice, it's just like my computer: the water infrastructure is complicated, not complex.
The pipes don’t self-repair, they can break easily, they’re subject to being broken on purpose or accident. The same is true with everything – [00:55:30] the roads, the energy grid, everything. Now, when I look at globalization, I say, “I’ve got an increasingly interconnected complicated world that is increasingly more complicated where the failures anywhere can trigger failures in more places, because nowhere can actually make its own stuff anymore, because their shit is complicated enough, it has to be made across this whole thing. This computer we’re speaking on took six continents to make. If China did, everywhere is fucked. [00:56:00] If the US died… There are so many places. If mining wasn’t accessible in Africa, everywhere’s fucked.
We see an increasingly interconnected, complicated – which also means increasingly fragile – built world that we're trying to run exponentially more energy through, in terms of human activity, dollars, etcetera, etcetera. That's happening while we're decreasing the complexity of the biosphere far enough that [00:56:30] its own anti-fragility is going away; we're getting to a place where, rather than climate being self-stabilizing, it can go into auto-destabilizing, positive feedback cycles, where the biodiversity is getting low enough that you can get catastrophic shifts in the nature of the [inaudible [0:56:45] that's made it possible to have a biosphere like the one that we've lived in. If you have a world where complicated systems are subsuming their complex substrate and continuing to grow, they will eventually collapse. These are two different generator functions where we can say, "[00:57:00] If I'm trying to solve ocean dead zones, or plastic, or species extinction as one offs, I will certainly fail."
At most, I move the curve of collapse a year, but there are so many other scenarios for failure that are overdetermined. If I don't solve the generator function of all of them, I haven't actually got it. Having a right relationship between complex and complicated, and having loop closure within the complicated, [00:57:30] and creating anti-rivalrous spaces that are a safe basis for exponential technology, is the first level of assessment of necessary design criteria for a viable civilization.
Euvie: Can we talk about [00:01:30] some of the specific examples of the generator functions and what they look like in a society?
Daniel: Okay. I want to look at these same two generator functions through a couple different lenses that end up mapping to the same thing, but these lenses are valuable. One way of saying what needs to happen is that we need systems of collective intelligence and collective sense making and choice making that [00:02:00] increase with scale effectively. I'll say why that makes sense. If you look at Geoffrey West's work out of the Santa Fe Institute – his book Scale is a classic example – we see that you've got productive capacity or intelligence capacity, design capacity of a person.
Then you bring a few people together and you get increased productive capacity. For a little while, you’ll actually get an exponential up curve where more people give you a lot more ability, because they’re sharing new [00:02:30] capacities. This is the start up phase of something. Pretty soon, you get an inflection point where adding more people starts having diminishing returns per capita. Then you get to this tabling part of the S curve, where adding more people is not increasing adaptive capacity really at all. If I have a curve where more people don’t keep adding adaptive capacity well, then those people will always have an incentive to defect against that system, because they’ll actually have more [00:03:00] adaptive capacity per capita if they defect against it.
So, as long as I have a system of collective intelligence that cannot scale with the number of people, it can't include everybody, right? It will always force its own collapse, its own defection. So far, we've found that all the things that we call intentional, innate human systems – countries and companies – have this type of curve. Again, this shows why none of those can make it. This is a design criteria thing: [00:03:30] if we're creating a social system and it doesn't have sense making and choice making able to increase at least linearly with the number of people through some process, then the people will defect against that system as soon as you pass the inflection point.
They’ll either defect and make their own thing completely, unless the big system – even though it’s less efficient per capita than they are – is still a lot bigger than them and would take them out. In which case it’s not safe to overtly defect, so they covertly defect, or they defect while staying within [00:04:00] the system, which starts to look like everything you see in the world today, where someone says, “Okay, what is my particular bonus structure and how can I optimize getting the most bonuses here, even though it’s not what’s good for the company?” They have now defected against the wellbeing of the whole, because their own incentive and the wellbeing of the whole are actually anti-coupled or, at least, miscoupled.
We can see that almost everywhere people within systems have actually defected against [00:04:30] being optimally aligned with the integrity of the system. They’re basically preying on the system in some way, while continuing to look like they’re serving it, because that’s where the incentive is. That’s going to, of course, make the system radically unadaptive and speed up the rate of its inevitable decline. If I could get a system that could scale its actual adaptive capacity with the number of people that were in it, then being in the system and participating with it would always be better for people than defecting against [00:05:00] it, covertly or overtly.
One way of looking at what we have to solve is sense making and choice making processes that scale adaptivity, right, where the adaptiveness scales with the number of people. We’ll come back to this in a minute. This ends up being a very central way of thinking about it. When we talk about rivalrous dynamics creating problems, anti-rivalrous dynamics, coherence dynamics of agents with other agents, are core to the solutions that we’re looking at. It also ends up being that [00:05:30] coherence is what solves a lot of the problems of the [inaudible [0:05:33] in tech. Just like when we were talking about democracy: in the process of making a proposition, somebody senses something they care about and they make a proposition that benefits it, but it harms other things that they might not have even been sensing.
How do we bring all the sensors together to say, “What is everything important connected to this?”, hold those as design constraints, and then go into an integrated design process that is trying to meet all of those design constraints? That sounds like the future of governance, but it also [00:06:00] sounds like the future of technology design, right, to make technology that’s not externalizing harm. That also means infrastructure design. Coherence between all the agents that are sensing a part of the system means that all the information from all the parts of the system gets to be internalized to the decision making processes, to the choice making processes, which means that externalities get internalized.
Interpersonal coherence at scale ends up being a central way of thinking about this. Another way of thinking about it is the [00:06:30] generator function of all the X risks… One thing we just said is that the generator function is social incoherence, right? That was actually what we just said is a generator function, spoken in the language of collective intelligence’s ability to scale. Another way of looking at it is that the source of all of the risks, and also the things that have always sucked – the risks are just the things that have always sucked, bigger – is the relationship between choice and causation, an inappropriate relationship between [00:07:00] choice and causation as two types of change.
This is going to get into a very tricky philosophic area, which is partly why I said the thing that we call science and tech is necessary but not sufficient. Science is a theory of causation, right? When we study law, what we’re studying – the laws of physics or the laws of any domain – are the rules that create change, causal change. When we study, even more purely, in computer science, [00:07:30] computation means rule based transform. Something is changing, it’s transforming from one state to another state in a perfectly predictable way, governed by a rule set, law set. Causation.
Choice, we don’t have a theory of choice, which is why every philosophic conversation gets into free will and determinism and gets stuck there. Sam Harris and Dan Dennett debate it out and get nowhere and, at the end of the day, say, “We just have different intuitions on this.” That’s happened since forever in philosophy. [00:08:00] We could actually get into that topic at depth at some point; it does require some nuance to do well. We are making choices. We are, at minimum, operating as if we were making choices, and we are making choices that are extended by tech’s knowledge of causation. Before I got the tech and I was just a predator, I could hit somebody with my fist, but then I could understand causation and say, “The heavier the thing is and the faster it goes and the harder it is, right, causal principle, the more damage it causes.”
[00:08:30] I can extend my fist through a stone hammer. Then I can extend it through a sword. Then I can extend it through a gun. Then I can extend it through a… I’m taking my knowledge of causation, science, and creating technology, applied science, that allows my choice still to be extended through levers, very powerful levers. Science is giving me a theory of causation that can create applied causation [00:09:00] without giving me a theory of choice that tells me how to use that increased causal power. This has been a classic thing in the philosophy of science from the time of Descartes, that science says what is, not what ought; that it is the realm of the objective, “This is,” but we can’t say anything about ought because we don’t deal in subjectives. Through most interpretations of physicalism, the entire thing is meaningless and deterministic anyway.
Ought just means something that we can’t even make sense of. [00:09:30] There’s a real bitch in here, there’s a real problem. Which is: science gives us the ability to understand the physical world very deeply and to create technology that can change the physical world very deeply. It is the most powerful avenue for affecting the physical world – technology, which is applied science – but it has absolutely no compass for how it should do that. Not only does it have no compass, [00:10:00] it says that all compasses are gibberish, because any compass is going to be some religious idea, some moral, some whatever, but they’re not science. They’re not objective. We’ve equated objective with real and subjective with gibberish.
The relationship between subjective and objective we don’t even really take seriously, we don’t know how, we don’t have good tools for that – intellectual tools. Then we say, “Okay, if technology is the power to change, to create nuclear bombs that can blow up the [00:10:30] entire fucking world, to create all kinds of dystopias or all kinds of protopias. If it’s all this power to do anything, what determines how we use that power?” It’s not an ethical framework, it’s not a theory of choice. What it ends up being is well, who paid for the science? Who can pay for the tech? How did they get the money to pay for the tech? Remember, money is a concentrated choice making system.
What we end up getting is capitalism. What that ends up meaning is social Darwinism, and that means game theory. Win-lose game theory still ultimately guides [00:11:00] the development of all the tech. So we’re now growing exponential tech with no basis for how to use it other than to keep winning at win-lose games – where every time we increase our technological capacity, so do all sides in a multi-polar way, so we’re just upping the ante of the playing field. This is a very important principle. It is impossible to have a kind of technological asymmetric advantage and maintain it indefinitely once you employ it. You can maintain it while you don’t employ it, [00:11:30] in which case it’s not really an advantage, it’s only a potential advantage.
The moment that you deploy it, everybody else sees it, and it’s much easier to copy than it was to do the initial innovation. It’s easy to iterate on and to find other examples of this. All you did was up the playing field, which is how we went from one country with nukes to lots of countries with nukes, and from somebody with AI to lots of places with AI, etcetera, etcetera. The idea, “Well, we’re going to develop this technology for our good purpose,” that’s just silly, because it’s going to be [00:12:00] used by all players for all purposes. This is why I call naïve techno-optimism naïve: could technology solve some problems? Sure. Have we addressed loop closure on complicated systems where it’s not externalizing harm somewhere else? No.
Have we recognized that that same technological capacity will be used by everybody for all purposes, all purposes that are incented in a system that has ubiquitous perverse incentive? That’s a problem. [00:12:30] The answer is not that we technologize ourselves out of it. We actually have to change that underlying basis. What that means is: if we’ve got all this power, what should we do with it? How the fuck do we answer that? Ultimately, that’s an ethical question, an existential question. This comes to the forefront and we don’t like to ask it, because we only know physicalism, which says, “This is not even a question.”
Basically, in physicalism we have a couple of different versions and they all suck. They’re all nihilist unless you do some [00:13:00] mental gymnastics to try and pretend that they aren’t, which I would say is intellectually dishonest. Either I say consciousness is an epiphenomenon of the brain, but that makes it acausal: the brain is a causally closed physical system where voltage differentials in the brain move ions across membranes and a neurotransmitter goes one way versus the other; you think you love her or not, or have this idea or believe this thing or not, and it’s all basically controlled by particle physics; and your consciousness is an epiphenomenon for [00:13:30] some reason, but it could not be causal, because what is consciousness, that is not physics, that could causally affect physics if physics is causally closed?
You don’t really have a choice; your experience of yourself as a choosing agent is ultimately an adaptive illusion. Which is also a problematic argument, because why would it be adaptive to have that thing if that thing doesn’t affect causality at all? What is the weird metaphysics of how first person pops out of third person? David Chalmers speaks about it in a very interesting way when he says, “Okay, [00:14:00] you’ve got a bunch of atoms, they’re non-experiential, and you arrange them in a particular way and experience pops out of it.” They’ve got position and momentum and mass and shit like that, and now we’ve got feeling and emotion and a different type of stuff. If I have tools to study third person, I’m going to find third person, because those are my tools.
If my epistemology is measure shit and then do math across the stuff that I measured, I’m going to come to a belief that reality is measurable shit. [00:14:30] If I went the Buddhism direction and my epistemology was enquire into the nature of my own experience, I could do the exact opposite. I could say, “I actually don’t know that there are any particles here. I might be dreaming. I might be a brain in a vat being simulated by electrodes. I might be a crazy person. I might be who the fuck knows. I can’t know any of that.” Buddhists and Descartes are the same thing. What can I know for sure? [inaudible [0:14:54], I’m experiencing something but the thing that I think is ‘I’ and the thing that I think is something might not be what they are.
What I know for sure [00:15:00] is experience. Let me explore the nature of my experience. Then, of course, the Buddhists typically do the opposite reductive move – what’s real is consciousness, and the physical universe is either not real or an epiphenomenon. If my epistemology is to enquire into the nature of experience, what is going to come up as real is experience. What you end up having is epistemologies that bias ontologies and are self-referential. On the physics side, with the [00:15:30] physicalist interpretation of physics, you get dreadful nihilism or incongruencies. Other than that, you get weird religious shit. We’re just not happy with any of this, so we just try not to think about it too much.
We actually can’t, because we have to actually address what we want, why we want it, what is worth wanting. The addict wants stuff that makes their life suck, and the little kid who grows up in front of the screen with a bunch of flashing lights wants it again because they were programmed to want it; their [00:16:00] sovereignty is hijacked. What is worth wanting? What actually creates a good life? What does good mean? The fact that we didn’t like the bad religious answers for these doesn’t mean that we get to throw these questions out completely, because you end up getting the existential risk of where we’re at right now, which is: we don’t ask those, we just build the tech based on who pays us. Great.
We all get to go extinct in a world where we have no theory of choice, but we are choosing based on a shitty theory of choice, [inaudible [0:16:28], a game-theoretic theory of choice, and [00:16:30] we have a theory of causation, so we’re extending our shitty choices through exponential tech. Another way of saying what we have to actually get right is individual and collective choice making that doesn’t suck. Another way of saying doesn’t suck is individual and collective choice making that is omni-positive or, at least, vectoring towards omni-positive. It is omni-considerate in terms of considering all that it will affect, realizing [00:17:00] that it’s interconnected with all of these things, and that if I act in a way to beat the other guy, I’m engendering his enmity and my own insecurity in the future.
If I’m polluting the air, I breathe the air. This is where I have to shift all the way down to an identity level, which is the idea that I’m a separate thing. You’re a separate thing over there. I can advantage myself independent of you or even at your expense. It’s just actually ontologically not a well-formed idea. Ontologically, when I say ‘I’, [00:17:30] I might think ‘I’ and I’ve got some idea of what that means – a set of atoms contained in a particular boundary that looks like this guy called me, it’s on my Facebook picture, a set of memories or whatever. But when I think of ‘I’, I usually don’t think of all the plants on the biosphere, without which I would not exist because there would be no atmosphere and I would be dead.
I usually don’t think about all of the [inaudible [0:17:53] without which the plants wouldn’t exist, and all of the pollinators. I don’t exist without all those. If I think [00:18:00] of ‘I’ without those, it’s an ill-formed concept. When I think of that ill-formed concept and I think that it’s a good concept, a real thing, I can think about advantaging that ‘I’ at the expense of the things that I depend on. That is a kind of insanity, but it is a kind of ubiquitous insanity currently. For me to make a choice for me, I have to know what the fuck I am. I am not a separate thing in game-theoretic competition with everything else. I am ultimately [00:18:30] interdependent with and dependent on so much other than me.
Then I say, alright, to not debase the world upon which I exist, the foundation upon which I exist, I start to get, “I am an emergent property of this whole thing. If there weren’t the bacteria and the plants and the pollinators, if there weren’t the people that came before that made up the ideas that I believe and the words that I think in and the aesthetics that I perceive through, if there wasn’t gravity and electromagnetism making [00:19:00] the whole fucking thing possible, if it wasn’t for so much stuff that I consider other than me, I don’t exist.” Then I get, “I am an emergent property of the whole and I am both interconnected with everything and totally unique within it.” So are you.
As soon as I get that, I get a couple of things. Because we’re totally interconnected, I cannot advantage myself at your expense in a real way; I only can if I haven’t factored loop closure everywhere. The way that I harm you [00:19:30] is going to end up being an open loop that is going to harm the substrate of what I care about. When I factor all the closed loops, we get to David Bohm’s evolution of wholeness. This whole thing, the evolution of wholeness, the Schrödinger equation of the whole thing evolving in its complexity. I can’t advantage myself at your expense in a meaningful way, and I also can’t understand myself without understanding myself through my interactions with you and your feedback, your reflection.
[00:20:00] Choice. We have to have a theory of choice that comes from a philosophy that can actually relate choice and causation, and can have a theory of causation and a theory of choice that are [inaudible [0:20:16] with each other. It’s not a made-up theory of choice. It can serve as a basis for how to make good choices in the presence of all of the technology we have. Something we said last time when we were [00:20:30] together is that technology is a causal extension of our choice, extending our power to be like the power of Gods. The ability to create new life forms with genetic engineering, to destroy whole life forms and species, to blow things up. Nuclear bombs are bigger than Zeus’s lightning bolt was ever depicted. The power of Gods.
If you do not get the love and wisdom and consideration of Gods as a choice making basis for that power, you’ll use that power [00:21:00] in stupid ways that will end up causing self-destruction on a very tiny playing field. If I want to make choices that are good for me and I’m interconnected with the whole, I have to make choices that are good for the whole. That means I have to understand that I’m not a separate thing. I have to be able to progressively consider my relationship with everything, the impacts of my choices on everything, and be able to internalize the things [00:21:30] that would have been externalities into my choice making process. Not just at an individual level but at a group level and in group process.
This becomes the future of design, as opposed to the open-loop, harm-externalizing method that we’ve had: how do we have progressively more omni-considerate design that is more omni-positive, that is a safe vessel for the level of effect that it has?
Euvie: The idea of us being separate from the world is in many ways modern, because if you look at how [00:22:00] some of what we call primitive tribes looked at themselves and considered themselves, it was not the same. They considered themselves a lot more connected with everything, or just a small part of the whole. Like you said, a lot of those tribes got killed by other tribes who considered themselves as more separate. How we place our sense of agency, and where we place our sense of what is us [00:22:30] and what is not us, actually has a very significant effect on the outcomes.
I think that in the modern world a lot of people don’t even realize that this sense of separation between self and world is a construct. That’s because we have certain scientific definitions or whatever of where the body ends and the environment begins. If you start deconstructing that – you can do it through any number of things, [00:23:00] like just intellectually deconstructing it, or in deep meditation looking at where your sense of self ends and your sense of the world begins – it just breaks down. That sense of self is very strong.
Maybe it is because our capacity to affect the environment is so much higher now with technologies and all these tools that have become extensions of ourselves. Maybe that’s part of the reason why people are so attached [00:23:30] to their sense of self, because they actually see the effect so strongly. Whereas, in the past, people would see the effects of nature a lot more strongly than the effects of themselves.
Mike: I think it’s strong because of the dynamic of experience and time. You don’t see the direct results of your actions coming back to you for a long time, until they become external and separate, and then, when they come back, it’s someone else or something else affecting you in a way you don’t want to be affected, but you don’t loop it back to your original [00:24:00] actions. Something I’ve been thinking about quite a lot as you’ve been talking, Daniel, is the natural experience that people have and how these theories might contradict those natural experiences, even though the natural experience is incorrect. How do we communicate these concepts in a way that, even though it conflicts with someone’s day-to-day, minute-by-minute experience, could still cause them to think differently and expand their identity?
Euvie: People’s experience is also dictated by their concepts, [00:24:30] those two things affect each other. When people have an idea about how a certain thing is, they tend to experience it in a way that is consistent with that idea.
Mike: True.
Daniel: If we look at the tribes that have more of that “there’s a web of life and we’re a strand in it” view that you were mentioning a moment ago: they had an informal theory of choice. They might have had some moral, ethical principles. [00:25:00] This was not like a formal system of formalized ethics, but they had some theory. They had a very weak theory of causation. They thought diseases were caused by ghosts, and obviously didn’t know how to work with tech all that well, etcetera. They didn’t make it through contact with other people that understood causation better. The causation leads to physical adaptive advantage, and there’s a theory of choice there too, it’s just a theory of choice [00:25:30] called winning at win-lose games, right.
Game theory is our only theory of choice. It’s just that that theory is tapping out. That theory itself leads to its own self-termination because, as the power keeps getting larger, it becomes more than you can keep winning at. We said this before. The win-lose eventually becomes lose-lose, omni lose-lose, when you have levels of war that nobody can win, when you have tragedies of the commons that are completely ruined commons, those types of dynamics. When you have an information ecology [00:26:00] that’s so broken from the incentive to disinform that nobody has any idea what the fuck is true.
You either figure out how to create omni win-win as a new solution, other than win-lose, or you get omni lose-lose as the inevitable by-product of trying to keep playing win-lose. It’s funny, because there’s this very mytho-poetic way of thinking about what we’re talking about that is otherwise a very technically clear thing. We said that, scaling to the power of Gods, we have to have the love and wisdom of Gods to guide it. Similarly, when we say [00:26:30] we either create omni win-win or we get omni lose-lose, that sounds a lot like heaven or hell on the other side of purgatory, and we get some hard choices to make.
We might even ask if maybe those stories were metaphors for seeing, “Hey, we’re making choices that, if we kept getting more powerful at this, would be problematic.” Not just the heaven or hell and purgatory story, but the, “We’re in [inaudible [0:26:55] and [inaudible [0:26:57] is the next phase.” There are a lot of these stories [00:27:00] of which, in a Joseph Campbell-like way, we can say, “That’s actually a very interesting way to think about where we’re at right now.” We can’t depend on Jesus to come back and solve it, or aliens to come fix it for us, or a fifth-dimensional light ray from the galactic centre or whatever it is. We have to actually become that Jesus or those aliens.
We have to actually become a being that has the right capacity to make the right choices and do the right sense making and the right choice making. It is true that a being [00:27:30] that’s a shit ton more effective at good choice making needs to encompass all of these things, more than the types of beings we’ve been; it’s just that we have to become them. With regard to your question, Mike, on what experiences are natural: obviously, it’s natural for me to think in English, not Mandarin. If I grew up in China, I would think in Mandarin and that would be natural. Because I think in English, I have certain constructs of thought linguistically that are related [00:28:00] to the syntax of English that, if I thought in Mandarin, would be different.
My aesthetic would be different. If I grew up in the plains with the Sioux Indians, what would be natural to me would be different, and my identity, ethics, aesthetics, etcetera. This is very conditioned. In the modern world, where we ubiquitously experience feeling separate, we also ubiquitously experience feeling alone and lonely. Not just alone, but lonely. We can see that loneliness, as a major [00:28:30] source of depression, anxiety, and ultimately even suicidal impulse, is pretty much ubiquitous in the developed world. Is that a natural experience? It’s certainly a ubiquitously conditioned one.
Was there ever a person in an indigenous tribe throughout history, two and a half million years of [inaudible [0:28:50] history or 250,000 years of Homo sapiens history, that felt lonely? Not that much. You live in a tribe with 150 people that know every fucking thing about you, that you’ve known forever, and that you [00:29:00] fully depend on; you know everything about them and you’ve got no secrets. Lonely’s not really a thing. Separate from them is not really a thing. When we say ‘natural’, I think what you mean is conditioned ubiquitously. Then we have to say, alright, humans are more susceptible to being conditioned by their environment than most creatures.
Obviously, a dog that grows up in the wild or in captivity is going to be different, but we’re even more susceptible because – and this is a really important thing to understand about sapiens – the gorilla [00:29:30] or the chimp that’s close to us can grab onto its mum’s fur in the first five minutes while she moves around. We can’t even move our head for three months. A horse is going to get up and walk in 20 minutes and it takes us a year. Just to have a real sense, do the calculation, how many 20-minute intervals fit into a year, to get how many multiples it takes us to be adaptive in the most simple way.
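[Editor’s note, a rough worked version of that calculation, not in the original audio: a year is about 365 × 24 × 60 ≈ 525,600 minutes, so roughly 26,000 twenty-minute intervals; on this crude measure a human takes on the order of 26,000 times longer than a horse to reach basic mobility.]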
Wow, we are embryos for an extraordinarily long period of time. We are helpless for an [00:30:00] extraordinarily long period of time. Why is that? Because the horse comes pre-programmed with how to be a horse; it’s going to be a horse pretty much the same from generation to generation in the wild. Coming pre-programmed works for it because it adapted to fit its environment, so it can hold the code of how to be adaptive in relationship to its environment. It used to be really adaptive to throw spears, and probably none of us are all that good at throwing spears, yet we’re pretty adaptive because we do texts and podcasts and shit that they didn’t do. What’s adaptive for us [00:30:30] changes pretty rapidly.
Unlike the other creatures that are the product of their environment, we are, as tool evolvers, creatures that not only go and exploit every niche – gorillas didn’t leave and go find islands and be in the water and go to the arctic, they adapted to a niche. We went out and found every niche and then we made new niches. We made cities and treehouses and all kinds of shit. As a result, we had to learn how to adapt to the new world that we were in, [00:31:00] because we were going to keep finding and creating new worlds. As babies, we’re born pretty much open to just start imprinting: what world am I in, and, since I don’t have a genetic program to do this, how do I [inaudible [0:31:13] program to be able to do this?
It’s not just in childhood because we can change stuff so fast that the whole tribe might get up and leave and go somewhere else where we went from gathering to farming or to hunting, something super different. This is why we need adult neuroplasticity, to be able to change our [00:31:30] [inaudible [0:31:31] orientation even later in life. We are radically affected by our environment and it’s part of our adaptive advantage and we are mostly affected by our social environment, what the other people around us are paying attention to and doing and what the nature of the relationships are like. You’ll notice that mostly now people live in nuclear family homes on their own, don’t interact with other people all that much and then they spend all their time addictively looking at other people on screens.
They watch TV and they watch people [00:32:00] and then they go to Facebook and they look at people and they read news articles about people. They’re fucking fascinated by people, but then we’re conditioned currently to suck at interacting with people – capitalism has largely been a way of not needing each other directly, of being able to indirectly intermediate meeting each other’s needs through money. Money can just buy whatever I need; I don’t actually have to have friends or neighbours or give a shit about anybody or have anyone give a shit about me. That seems really convenient, and everybody is in a crisis of loneliness [00:32:30] at home looking at people hoping that they’re getting likes, which are not real relationships.
This is basically sugar rather than nutrients. This is porn rather than a real relationship. These are hypernormal stimuli having co-opted real stimuli, but also having desensitized us to real stimuli. It’s all comprehensively bad for us. When you ask, “How could people have a different experience,” realizing that we are conditioned by our environment to think in the words of a certain language and to experience in a certain way, [00:33:00] I could say lots of things that won’t matter. I could say, “Go spend more time in nature,” but nobody will do it. We both know that people listen to podcasts and nobody will do it. Maybe one time, and then their life is busy and they will be a product of their environment again, very largely, because the relationship with us here was a tiny part of all of the relationships that were influencing them.
I could say, “Go spend time in nature,” and I could say, “Have some good psychedelic journeys and do this kind of breath work and contemplate that the atoms that make up your body were plants not that long ago.” [00:33:30] It would just be nice words that wouldn’t end up affecting people that much. If you want to ask, “How could people really change their experience?”, the only thing I can say that is statistically really going to work is: immerse yourself in an environment that makes that likely, around people that are doing that. If you went and lived with a tribe for a while in the Amazon, if you go live with the [inaudible [0:33:52] or something, you will experience the world differently before a couple of weeks pass.
You will start hearing things in a different [00:34:00] way, experiencing yourself in a different way, feeling a connection to people in a different way. If you even change the group of people you’re hanging out around, what they’re motivated by and what they think about, and you change the quality of your relationships with them, you will end up changing the basis of what seems like natural experience to you.
Euvie: I’m reading Carl Jung right now and he’s talking about his experience [00:01:30] of going to live with the [inaudible [0:01:32] Indians and how it completely blew apart his conception of what was natural, and how the western world view is different from other world views. He noticed that they were so happy and serene, and they felt themselves as one with their environment, and they had this very special relationship with the sun. It was very beautiful but, at the same time, he realized how vulnerable they were to [00:02:00] invasion by western civilization. If we create a new civilization operating system that is not oriented towards winning wars, then how do we ensure that it doesn’t get destroyed by those who are?
Daniel: Imagine there’s a group of people that get a stronger theory of causation. They learn Newton’s physics and now they can use calculus to plot a [inaudible [0:02:22] curve and make the [inaudible [0:02:23] hit the right spot every time, rather than pendulum dowsing, which is hit or miss. That belief is going to catch on. [00:02:30] That’s why science really caught on and took us out of the dark ages: because it led to better weapons and better agriculture tech and better real shit. It proliferated because it was proliferative. If we increase our theory of causation, that ends up catching on.
If we could increase our theory of causation and our theory of choice, and the relationship between them, that would actually be the most adaptive. Especially given that our particular game-theoretic model of choice, with the extension of causation we have, is definitely self-terminating, [00:03:00] definitely anti-adaptive. I know we’ve been on for a long time. There’s really only one more thing that I want to share that closes this set of concepts. Remember we said that any source of asymmetric advantage, competitive advantage in a win-lose game will end up, once it’s deployed, being figured out and utilized by everybody. You just up the ante on the playing field.
We also said that the [inaudible [0:03:25] and many of the tribes we’ve mentioned [00:03:30] lost win-lose games. We don’t want to try and build something that’s just going to lose at a win-lose game, but we know that if it tries to win at win-lose games it’s still part of the same existential curve that we’re on. It has to not lose at a win-lose game while also not seeking to win. It’s basically not playing the game, but it is oriented around how not to lose. It’s a very important thing. We can think about power, the way we have traditionally thought of power, as a power over or power [00:04:00] against type dynamic – a game-theoretic, win-lose dynamic. Any agent that deploys a particular kind of power leads to other agents figuring out how to deploy the same and other kinds of power. Power keeps anteing up until we get to problems.
We could think about another term we might call strength, which is not the power to beat someone else but the ability to not be beaten by someone else. It’s the ability to maintain our own sovereignty and our own coherence in the presence of outside forces. We could talk about my power, “Can I go beat somebody up?” [00:04:30] But my strength is, “Can my body fend off viruses? Can I fend off cancers? Can I actually protect myself if I need to protect myself?” Which is different than, “Can I go beat other people up?” The power game is the game we actually have to [inaudible [0:04:45]. Power-over dynamics mean rivalrous dynamics, mean win-lose dynamics; that is the source of evil.
It’s not that money is the source of evil, it’s that power-over, where I think my wellbeing is anti-coupled to yours, ends up being [00:05:00] the source of evil, and money’s just very deep in the stack of power dynamics. Status is too, and certain ways of relating to sex, and a number of other things. We have to get rid of the power-over dynamics. It doesn’t mean that I can’t develop strength that makes me anti-fragile in the presence of rivalry. Then I say, “What kind of capacity can I develop that doesn’t get weaponized by somebody else and used against me, given that any asymmetric capacity I get can be weaponized?” There’s really only one, and this is a really interesting thing.
[00:05:30] If I make the adaptive capacity of… Say we’re trying to make a new civilization as a model, a new [inaudible [0:05:38] civilization, new economics, new governance, new infrastructure, new culture that has comprehensive loop closure, doesn’t create accumulation or depletion, doesn’t have rivalrous games within it, etcetera. If I try to have some unique adaptive capacity via a certain type of information tech, the rest of the world will see that information tech and [00:06:00] use it for all kinds of purposes, including against me where there’s an incentive to do so. The same is true if I use military tech or environmental extraction tech; I’m still in the same problem.
If my advantage, if the advantage of the way this civilization’s structured, has to do with increased coherence in the sense making and choice making between all the agents in the system, all the people in the system, increased interpersonal coherence, this cannot be weaponized. Anyone else employing it is now just the [00:06:30] system self-propagating. For instance, when we start playing rivalrous games we start realizing that it’s not just us against somebody else, it’s teams against larger teams. Then the idea with a team is we’re supposed to cooperate with each other to compete against somebody else.
The compete-against-someone-else idea ends up going fractal and I end up even competing against my teammates sometimes, and that’s part of why collective intelligence doesn’t scale: I’ll cooperate with my other buddies [00:07:00] on the basketball team, unless there’s also a thing called most valuable player and I’m in the running for it and I have a chance to take the three-point shot rather than pass, even though it decreases the chance of the team winning. Now I have an incentive misalignment. I might go for that. Then it gets bigger, where there’s a couple of us that both want the same promotion to the same position at the company, and we’re actually going to try and sabotage the other one, even though that harms the company, because my own incentive is not coupled with their [00:07:30] incentive or with the company’s.
Then I can look at a couple of different government agencies that are competing for the same chunk of budget. They will actually seek to undermine each other so they get more of the budget, when they’re supposed to be on the same team called that country. What we realize is we get this thing called fractal disinformation, fractal decoherence, and defection happening everywhere. That creates the most broken information ecology and the least effective coordination and cooperation possible. [00:08:00] That’s everywhere, that’s ubiquitous. It’s the result of that underlying rivalry. As we mentioned before, if I have some information, I want to make it to where nobody else can use it. I’m going to trademark it, patent it, protect my intellectual property. Before I release it, I actually want to disinform everybody else about it, tell them the gold is somewhere else so that they go digging somewhere else and don’t pay attention to what I’m doing.
If I am both hoarding information, disinforming others, and [00:08:30] keeping my information from being able to be synthesized with others, that means I’m going to not let my knowledge about cancer research or whatever it is be out there, because I gotta make the [inaudible [0:08:39] back. The best computer that the world could build doesn’t exist, because Apple has some of the IP, Google has some of the IP, and 10 other companies have some of the IP. The best computer that science knows how to build can legally not be built in this world. And the same goes for the best phone [00:09:00] and the best car and the best medicine and the best every fucking thing there is, because we keep the actual adaptive knowledge from synthesizing, let alone that everybody’s having to reproduce the same fucking work because we don’t want to share our best practices.
Then almost all the budget is going into marketing against the other ones rather than actual development, and the marketing’s just lying and manipulation, at least about why ours is comprehensively better, when they then have to say the same thing, even though their IP does one good thing and our IP does [00:09:30] another. Imagine if we had a world where all the IP got to be synthesized. Nobody was disinforming anybody else. Nobody was sabotaging anyone else. Everyone was incented to share all of the info. To synthesize all the info, to synthesize all of the intellectual property, ideas, etcetera, and work towards the best things possible – imagine how much more innovation would actually be possible, how much more collective intelligence and capacity would actually be possible.
If our source of adaptive advantage [00:10:00] is that, is that we make a world where – now we have to come back to what we were talking about – if you possess a good and I no longer have access to it, we’re in a rivalrous relationship. If you possess a piece of information that I don’t then get to have access to, we’re in a rivalrous relationship over information, knowledge, etcetera. But if you have access to something and we’ve structured the nature of access, where we have engineered the scarcity out of the system, such that your having access doesn’t make me not have access, then you having access leads to you [00:10:30] being a human who has a full life, and some of your full life is creativity and generativity.
Now, not only do you have full access to those transportation resources, but also maker studios and art studios and education and healthcare and all the kinds of things that would make you a healthy, well-adapted, creative person – and every well-adapted person is creative. Nobody wants to just chill out watching TV all the time unless they were already broken, broken by a system that tried to incent them to do shit that nobody wants to do; or, if they can get a way out, they will, but they’re a broken person. If someone was supported in an educational system [00:11:00] to pay attention to what they were innately fascinated by, and to facilitate that, they will become masterful at some things, with innate, intrinsic motivation to do those things.
Now, we’re in a world where we support everybody to have access to the things that they are intrinsically incented to want to create. Right now, I get status by having stuff; but if we are engineering [inaudible [0:11:24] system where everyone has access – nobody possesses any of it, everybody has access to all of it – there’s no status in having things, and it’s totally boring. [00:11:30] There’s no differential advantage; the only way you get status, the only way you get to express the uniqueness of what you are, is by what you create. Now the whole system’s running towards that, but you don’t create something to get money – because money for what, to have access to shit you already have access to? You create because you get to be someone who created that thing, both for your own intrinsic expression of it and extrinsically getting to offer it to the world that would recognize that.
Now, we have a situation where we all have access to commonwealth resources that create an anti-rivalrous relationship to [00:12:00] each other. Obviously, I’m just speaking about this at a 100,000-foot level. We could drill down on what the actual architecture looks like, but there is actual architecture here. It is viable, it meets the design criteria. We have sense making processes where we look at what a good design would be before making a proposition for a design, that don’t lead to polarization and radicalization, that lead to progressively better synergistic satisfiers and get us out of the theory of trade-offs and into [inaudible [0:12:27] also as a way of having people be [00:12:30] more unifiable and on the same team.
If I’ve got this world where its source of competitive advantage, if you want to call it that, is that it has obsoleted competition within itself, it has real coherence. Then not only is the quality of life radically higher, because the people don’t feel lonely and they actually have creative shit to do and they aren’t being used as instrumental pawns to some other purpose, etcetera, and the quality of life is better because they’re actually making better medicine and better technology and better [00:13:00] etcetera because of the ability for the IP to synthesize and everything else. This world can also out-innovate other places in the world in really key ways. Then, rather than the rest of the world wanting to attack it, it can actually say, “Here, we’ll export solutions you want.” The rest of the world starts to create a positive dependence relationship.
The rest of the world says, “Shit, we want to be able to innovate. Why were they able to solve that problem we weren’t able to solve?” Because our guys were sabotaging each other and their guys weren’t sabotaging each other. We say, “Great, [00:13:30] here’s the social technology to use.” Now, as soon as they implement that, it’s not being weaponized, that’s just the world actually shifting. That’s where this model actually becomes a new base [inaudible [0:13:37] the world starts flowing to. You have to do that. You create a prototype of a full-stack civilization that is anti-rivalrous, that is anti-fragile against rivalry – strength, not power – and that is auto-propagating, in that by the nature of the solutions that it is exporting and by its own adaptive capacity, its own design [00:14:00] starts to be implemented in other places. That’s ultimately the desire. That is a path to a post-existential-risk world, which is building it in prototype in a way where it auto-propagates.
Mike: That’s so exciting.
Euvie: Are there places where these prototypes are being built?
Daniel: Kind of, but not really. There are intentional communities where people are trying to practice some things they feel will be relevant: a closed-loop agriculture [00:14:30] system where they at least have regenerative agriculture, and maybe some kinds of social coherence technologies where they have a better system of conflict resolution than our current judicial system. Better parenting, better education. We have those things and those are cool and they’re valuable, but they still have to buy their computers from Apple and fly on a Boeing to get somewhere, which depends upon environmental destruction and war. They can’t actually provide a high-tech civilization, so they’re not yet [00:15:00] civilization models, and the civilization models we have are all part of this one dominant civilization model. This is the next endeavour.
Before a full-stack civilization occurs, obviously partial ones that are directed towards a full-stack civilization have to occur. Because in the world we’re talking about, there is no place for the things currently called judges or lawyers or politicians or bankers. Those systems don’t exist. That doesn’t mean that there isn’t an equivalent of a judicial system, but it is totally fucking different, from the level of the theory [00:15:30] of ethics to the [inaudible [0:15:31]. Somebody has to be getting trained in the civics of that system. There’s nothing like banking, but there are things like paying attention to how the accounting of this new economy works, and people have to be trained in that. I’ll give you one for instance: think about the physical economy.
We’ll take attention out and just look at physics. We see that there’s at least three different kinds of physics involved in the materials economy that are fundamentally different in their math. There is a physics of atoms, [00:16:00] physical atoms. There is a physics of energy and there is a physics of bits. Right now, those are fungible. I can use the same dollar to buy software or to buy energy or to buy metals or physical stuff, food. There’s a fixed number of atoms of a certain type on the planet that are reasonably accessible.
Right now, we’re just taking them from the environment in a way that causes depletion and then putting them back into the environment as waste in a way that causes accumulation and toxicity on both sides. [00:16:30] You can’t keep doing that; we have to close-loop it, because we have, give or take, a finite amount of metals. Not just metals but hydrocarbons, everything. A finite amount of atoms that are in a closed-loop relationship, but they can be upcycled because we have the energy to upcycle them, which means putting the same atoms into a higher pattern – where the pattern is evolving, and the pattern’s stored in bits.
Say I take the atoms out of one battery and put them into a new battery, which has evolved as battery technology. That new battery is in bits, a blueprint. [00:17:00] I’m going to use energy to take the atoms in their current form, disassemble them, and reassemble them into this new battery. There’s a fixed amount of atoms – we have to close-loop those. There’s not a fixed amount of energy. We get new energy from the sun every day, but we have a finite bandwidth of how much we get, which we have to operate within. That’s not closed loop; we can use that up and it has entropy. Within that bandwidth, we [00:17:30] have to work. Bits are fundamentally unlimited – limited only by the compute of the energy and matter. That can keep expanding basically indefinitely.
Once I’ve made a bit, I can reproduce it exponentially without any unit cost, once I’ve developed it once. I get exponential returns on software in a way that I could never get on atomic stuff, which is why Elon has a hard time raising money for physical stuff and [00:18:00] WhatsApp sold for 19 billion dollars. It’s why all the unicorns are software – mostly social tech or fintech or something that is actually doing not-good things for the world – because they can create exponential returns. It’s why Silicon Valley has basically mostly just invested in software stuff. If you make those fungible, you’ll actually be moving the investment away from the atoms and away from the energy into the virtual. Away from the physical into the virtual, even though the virtual depends [00:18:30] on the physical, so you’re actually debasing the substrate upon which it depends.
You notice that the bits we can keep having more of forever, and they don’t go through entropic degradation when we use them; the energy we use entropically degrades, but we get more of it every day; and the atoms don’t entropically degrade, but we have to keep cycling them and there’s a fixed number. The physics, the accounting, of those are totally different. That’s not one economy that’s totally fungible within a single accounting system. That’s three completely separate [00:19:00] but interacting physical economies. Again, we already said we’re not owning goods, we’re having access to shared commonwealth services. To really go into it, it’s a lot of things. These are examples of some of the considerations that have to happen to actually be able to think about things like economics at a level of depth that is appropriate to the nature of the issues.
Even if we don’t answer the question of what makes a good civilization, we can simply ask [00:19:30] what allows civilization to endure. We start with, let’s just say, we don’t want existential catastrophic risk. There’s a whole bunch of different types of existential catastrophic risks that all have the same generator function, so we have to create categorical solutions to the generator functions. It turns out that those are the generator functions that have made all the things that we intuitively have experienced as sucking – like violence and environmental devastation.
Solving those generator functions doesn’t just allow us to survive [00:20:00] in maybe some dystopian dynamic. Anti-rivalrous dynamics with each other, closed-loop dynamics, the proper relationship between the complicated and the complex, scalable collective intelligence systems, and a right understanding of the theory of choice and its relationship with the theory of causation end up being a way of mapping to a world that is definitely [inaudible [0:20:20] on any meaningful definition of [inaudible [0:20:23] and any meaningful consideration of what good could mean. We come back to this mytho-poetic frame. [00:20:30] We can’t keep going the way that we’re going. There’s a purgatory coming and it’s going to go one way or the other. One way is really shitty and one way’s really lovely.
That’s a true story. Bucky Fuller said utopia or oblivion, and it’s going to be hit or miss until we’re actually there. We’re not gonna know which way it goes. That’s the thing we’re just kind of in on: trying to solve the various risks in isolation is impossible, we fail. But what it takes to solve them categorically ends up [00:21:00] also mapping to how we engage everyone in creating the true, the good, and the beautiful that is theirs to create, progressively better, both upregulating their sense making of what that is, with themselves and with each other, being able to make that, and scaling the collective intelligence that is progressively answering those questions better.
Mike: Can you leave us with some book recommendations for anyone who wants to read up on this a little more and expand their understanding?
Euvie: Books or other resources.
Daniel: Yes. [00:21:30] I wish I could share more things than I can, but a lot of what we’re thinking in terms of a new civilization design like this is new. It doesn’t mean it’s not drawing on lots of elements. A couple of things. We mentioned Geoffrey West’s work Scale on collective intelligence; that’s very valuable. We talk about some of the dynamics of game theory that have to shift, and so Finite and Infinite Games is just my favourite starting point. It’s one of the types of books that is very simple but has multiple levels of depth [00:22:00] of meaning. If you read it multiple times, you’ll gain new insights.
Euvie: That one blew my mind.
Mike: Yup, me too.
Daniel: James Carse, very beautiful. I have a blog that has some articles on these types of topics. There’s also a booklist there with a heap of books.
Euvie: Great.
Mike: Awesome. This, as always, is enlightening and so fun.
Daniel: There are books that are valuable and there are obviously all of your podcasts available. If you had the [00:22:30] experience of anything that I said making sense and actually seeming obvious, but then you also realized you had never thought it in that particular way, then there’s a question: “Why did I never think it in that way, even though it seems obvious after the fact?” That’s one of the properties that clarity has: it can add novel insight that seems obvious and [inaudible [0:22:50] relates to everything that we know.
Then we say, “Okay, I wasn’t thinking about rivalrous dynamics and upping the ante clearly enough, [00:23:00] I wasn’t thinking about the exponential economy and software and atoms all being fungible. There’s a problem there. I wasn’t thinking about open loops and closed loops in this particular way.” You can. Start just asking for yourself, “What do I think is actually wrong?” Of all the things that seem wrong, what do they have in common? Why are those things that are wrong, wrong? Then go deeper. Keep going deeper with that. Don’t look for one answer. Are there a number of different things that come together [00:23:30] that are partial answers to this? What would solving that look like? There are resources of other people’s thinking on these things, but they won’t replace; they will inspire.
They won’t replace your own deep thinking on these things for your own sense making. The resource that I would offer the most is: when you are bothered by something, or you wish some beautiful thing existed more than it does, really think hard about why things are the way they are. Know that the first thoughts you will come up with [00:24:00] are not that good. If you stop, you won’t get beyond there. If you really keep working on it and thinking about it, and then going and researching in light of that question, and then thinking about it more, you actually start to get novel and meaningful and deeper sense making that is aligned with what is yours to pay attention to and work on.
Euvie: To relate to what you said earlier, I think it really helps to use different modes of enquiry. People can get stuck in just intellectual enquiry or just [00:24:30] spiritual enquiry, but all are valuable. When we can see the same thing from several different perspectives, it becomes a 3D object, rather than just being flat.
Daniel: You just actually mentioned one of my favourite practices, which is really endeavouring to see and experience the world through the perspective of someone else, and actually see and experience it. If I’m still thinking, “No, if I was in their position, I wouldn’t do that,” I haven’t got it yet. If I was in their position, I would do what they do. If I’m really putting myself [00:25:00] in their position, I would get enraged by the things they’re enraged by. I would get excited by the things they’re excited by. This works both as a practice of empathy and connection, as a practice of understanding, and as a practice of intelligence and learning, because I see different things. If I look at the world through the lens of a mechanical engineer, I see shit everywhere that mechanical engineers see that you never saw.
Which is different than if you look through the lens of a fashion designer, or you look through the lens of a game theory person. They’re looking at different things. Or an evolutionary biologist. [00:25:30] There’s a whole universe I wasn’t paying attention to – like when you buy a car and you see it everywhere, you put on a lens and you start seeing all kinds of stuff, effective sense making. Also, as a kind of spiritual technology of getting out of the default mode of what you think you are. When I’m trying to be someone else, it’s not my personality that can do that. If I’m trying to take their personality, it’s not perspective that can do it. It’s the same consciousness witnessing my perspective that can then witness somebody else’s. As soon as I do that, I actually dissociate from just being [00:26:00] my personality and then I get some more spaciousness around it and less reacted by it.
Euvie: This also relates to not just looking through the lens of different personalities or different modern frameworks, it’s also looking through the lens of premodern frameworks or even animal frameworks and that can all be very useful, as well. If we look at the world through the lens of quote unquote primitive tribe, then different things come into focus and different things become very meaningful and very powerfully meaningful. It [00:26:30] resonates through your whole being… wow. That’s not to be dismissed, because there’s something there.
Mike: At the very minimum, for self-discovery it’s super useful because we spent more time as those primitive versions of ourselves than we have the modern versions. You can untie a lot of behaviours that you don’t really realize you have based off of just looking at the world from a primitive standpoint.
Daniel: If your [inaudible [0:26:55] decide to go visit the Amazon and live with a tribe [00:27:00] and experience the world through those eyes and be affected by it and then look at how they can incorporate elements of that experience of the world and their previous experience of the world to being able to live more fully. That would be beautiful. This was wonderful, I really appreciated being here with you both. I really do want to say that I love your podcasts and I love what you’re creating, both of you together. It’s very easy to have [00:27:30] well informed dystopian views and it’s easy to not think about things or it’s easy to have poorly informed positive views.
To have well informed positive views is actually tricky. If we keep being anything like the kinds of people that we have always been, that do really wonderful and really atrocious shit with our power but having exponentially more power, they’re all dystopian scenarios. We have to be something really different than we’ve ever been, which requires some type of deep [00:28:00] shift that could make that happen. That requires some deep thinking, some deep imagination. I know that’s what you are really dedicated to doing here on the show. I’m reminded of this quote from the book of Romans. It says the pathway to heaven is narrow and steep and the pathway to hell is wide and many. It’s just like a way of thinking about thermodynamics, which is that there’s just more ways to break shit than to build it.
There’s not that many ways all the cells in your body [00:28:30] can come together that make you, the emergent property of you. There’s a lot of ways that you just get 150 pounds of goo. We say, “Okay, we’ve got a lot of power and most of those scenarios with a lot of power suck. How could we have this much power that doesn’t suck? How could we have this much power and not use it against each other?” We start seeing Orwell and control systems and we start saying, “That sucks, too.” To keep thinking through, “How could we have it that doesn’t suck that can’t depend on aliens or Jesus [00:29:00] coming back? How do we get us to be that kind of consciousness?” It’s a really good way of thinking about how to actually address these problems.
If we can’t [inaudible [0:29:09] without vision, man perishes. If we can’t even see a well-grounded positive future and positive use of the technological capacity we have, we are not going to make it. I love that you all have this space dedicated to exploring a topic. Incentive is always evil. It’s a bitch. I don’t want to [00:29:30] move from perverse incentive to positive incentive. Positive incentive means my sense making has determined what I think is good and I’m going to try and extrinsically override your sense making to make a choice aligned with my sense making.
I’m going to use an extrinsic reward strategy to co-opt your sovereignty and have your choice making be based on my sense making incentive scheme rather than your own sense making. That is always the basis of evil. If I want to have a collective [00:30:00] intelligence that’s actually intelligent, I need everyone to have intrinsic sense making and choice making that is incorruptible, which means it’s not being co-opted by extrinsic reward and punishment schemas. I got this wrong at first. I used to say, “We have to create a world where the incentive of every agent is rigorously aligned with the wellbeing of every other agent and of the commons.” That is wrong.
What is right is to say that we must [00:30:30] rigorously remove any place where the incentive of an agent is misaligned with the wellbeing of other agents and the commons, but an adequate future is one that has no system of structural incentive.
Euvie: That’s a mind blower.
Daniel: The cells in your body are actually not trying to get the other ones to do what they want them to do. They have their own internal sense making processes and they do what makes sense to them. What makes sense to them also happens to be [00:31:00] what’s good for the ones around them, because they depend on the ones around them and vice versa and they’re in a communication process. The brain is not overriding the cells and in no way could handle the complexity necessary of the cells not doing their own sense making. Better incentive schemas as a transition, which is happening in the blockchain, is nice. It’s better than more perverse incentives but it is transitional, not post-transitional.
It actually does not address existential risk, and it doesn’t give us the right collective intelligence. The right collective intelligence has to be [00:31:30] fractal sovereignty. Meaning, at the level of an individual and every group size, it has its own intact sense making and choice making that ends up vectoring towards omni consideration. The level of shift that we’re talking about is hard to imagine.
Mike: Yeah. What has to be invented to even begin a transition and then be put to rest so that the next version can come along is such a long road.
Daniel: The reason we [00:32:00] incent people is because we have a civilization that needs a lot of shit done that is not fun. It’s dreadful stuff. We want to get the people to do the dreadful stuff. If we created a commonwealth where everyone had access to resources, then nobody would do the dreadful stuff and then the state would have to force them. That’s why we don’t like communism. Then you get the state imperialism. We say, “Okay, cool, let the free market force them instead,” it’s economic servitude but at least that doesn’t look like somebody did it because the [00:32:30] market is just the anonymous thing.
If you don’t do the shitty job, you’re homeless and your kids can’t eat. Cool. But we’ll tell you the story that you can work your way up and become wealthy, even though statistically we know that it’s silly, it happened to those two guys that one time. Even though statistically the rest of the time having more resources makes it easier to make more resources, and having less resources makes it harder to make more resources. The system has a gradient that makes it actually continue in the direction of inequality, not otherwise. [00:33:00] That’s where incentive came from. That’s the good side.
The negative side is a few controlling the many, using incentive, reward, and punishment to get people to do the shitty things that have to get done. This is using choice to create a system of causation – incentive is a causal system, game theory is a causal system – to control the choice of others. Control or co-opt. I want to have my theory of choice affect [00:33:30] causal dynamics that are only causal. I.e. If I make an automated robot, I haven’t actually made a sentient being a utility. I’m going to say something even deeper, which is instrumental relationships are evil.
Mike: Can you expand that?
Daniel: Yeah. If I’m interacting with you to meet some people that you know to get my network ahead or to get some knowledge from you or to gain access to something or to whatever it is, I have something [00:34:00] that I want to do that you are an instrument towards, you are a path towards. It’s a utilitarian ethic. You are a means to an end for me. However I relate with you, however it affects your own sovereign, sentient experience, is a place I might externalize harm because it’s not why I’m relating with you.
Mike: Yeah, yeah.
Daniel: Again, in a healthy world, a world of the future, other people need to have intrinsic [00:34:30] value independent of utilitarian value to everyone. That’s a part of the culture. Not just the other people and other beings, all kinds of sentient beings, but relationships have intrinsic value. I’m going to invest in the integrity of our relationship independent of me getting anything out of it, because it is actually the basis of meaningfulness itself. Which is why in a utilitarian and instrumental dynamic, we’re getting ahead while feeling utterly fucking meaningless and destroying everything [00:35:00] that is meaningful in the process. That is us being hooked to addiction to a stupid game, where what we think we want is not what we actually want, and what we think of as a win is actually an omni stupid thing.
This is why the Hindu concept of Dharma was a virtue ethic, not a utilitarian ethic and there was a very meaningful set of concepts of, “Do what is inherently right in your relationships with life, [00:35:30] independent of what the outcome might be, because you really don’t know what the fucking outcome is going to be.” If you try and just figure out what the outcome is going to be, you’re going to be wrong a lot of times and you’re also going to justify a lot of unethical stuff. Utilitarianism is the rampant ethics that anyone who’s paying attention to ethics pays attention to right now. It’s not without any merit but it is also problematic. It is up there with democracy and capitalism and the philosophy of science in terms of being a [00:36:00] problematic thing, to be the dominant system.
We cannot actually predict in complex systems well enough to do a utilitarian thing, and the intrinsic dynamics of a relationship and another being end up being moved to being a means to an end other than them. As soon as I start factoring everything meaningful along the chain of whatever I think my outcome is to where my outcome is actually being in a way that is in integrity with an honouring of all life, [00:36:30] now it’s a virtue ethic.
Euvie: Yeah. I was having this conversation recently about people who are obsessed with life hacking and optimizing everything. When they get into that mindset, eventually they get to what they call optimizing relationships and then they start putting people on a value hierarchy where they want to interact with high value people and they want to get a high value woman and they’re using these tactics to find and attract the most high value woman. It’s funny, because those people, [00:37:00] in my experience, are some of the most existentially unhappy people that I’ve met. They will never demonstrate it outward, in an outward way, but that’s what I’ve noticed. That people who try to optimize everything in this kind of utilitarian way end up really profoundly unhappy.
Daniel: It’s the same thing as continuously pursuing a better high. It’s, “I’m getting a hit from winning at a particular thing, so I’ve got to try to win at it all the time.” But, “I need the hit because my baseline [00:37:30] is that life feels fucking meaningless because I don’t actually have any real relationships and I don’t even know what meaning means. I don’t even know what intimacy means.” That hyper normal environment needs a hyper normal stimulus to feel anything. The fact that I use people instrumentally has people end up not liking me, which makes me hurt even more, which makes me want another hit even more.
Mike: People like you make it super easy. You just come on and it’s like we listen to audio books all day then we get to actually talk [00:38:00] to the person who’s coming up with the cutting-edge ideas themselves. It’s quite interesting, thank you.
Daniel: Bye ya’ll.
Euvie: It’s always wonderful getting our brains blown by you, thank you.
Daniel: Thank you both, this was really fun.
Daniel Schmachtenberger
Today on the show we welcome back Daniel Schmachtenberger, the co-founder of Neurohacker Collective and founder of Emergence Project.
After addressing the existential risks that are threatening humanity in one of our earlier episodes, Daniel now dives deeper into the matter. In the following three episodes, he talks about the underlying generator functions of existential risks and how we can solve them.
Win-Lose Games Multiplied by Exponential Technology
As Daniel explains, all human-induced existential risks are symptoms of two underlying generator functions.
One of these functions is rivalrous (win-lose) games. This includes any activity where one party competes to win at the expense of another party. Daniel believes that win-lose games are at the root of almost all harm that humans have caused, both to each other and to the biosphere. As technology is increasing our capacity to cause harm, these competitive games start to exceed the capacity of the playing field. Scaled to a global level and multiplied by exponential technology, these win-lose games become an omni lose-lose generator. When the stakes are high enough, winning the game means destroying the entire playing field and all the players.
Daniel then looks into some of the issues that capitalism, science and technology have created. Among byproducts of these rivalrous games are what he calls “multipolar traps”. Multipolar traps are scenarios where the things that work well for individuals locally are directly against the global well-being. He proposes that our sense-making and choice making processes need to be upgraded and improved if we want to solve these traps as a category.
Daniel believes that the current phases of capitalism, science, technology and democracy are destabilizing and coming to an end. In order to avoid extinction, we have to come up with different systems altogether, and replace rivalry with anti-rivalry. One of the ways to do that is moving from ownership of goods towards access to shared common resources. Daniel argues that we are at the place where the harmful win-lose dynamics both have to and can change.
He also proposes a new system of governance which would allow groups of people that have different goals and values to come to decisions together on various issues.
Humanity’s current predatory capacity enhanced with technology makes us catastrophically harmful to the environment that we depend on. Daniel challenges the notion of “the survival of the fittest”, and argues that it is not the most competitive ecosystem that makes it through, but the most self-stabilizing one.
Complicated Open-Loop Systems vs. Complex Closed-Loop Systems
The biosphere is a complex self-regulating system. It is also a closed-loop system, meaning that once a component stops serving its function, it gets recycled and reincorporated back into the system. In contrast, the systems humans have created are complicated, open loop systems. They are neither self-organizing nor self-repairing. Complex systems, which come from evolution, are anti-fragile. Complicated systems, designed by humans, are fragile. Complicated open-loop systems are the second generator function of existential risks.
Open loops in a complicated system, such as modern industry, create depletion and accumulation. This means that resources are depleted on one end of the chain and waste is accumulated on the other end. A natural complex system, on the contrary, reabsorbs and processes everything, which means there is no depletion or waste in the long run. This makes natural systems anti-fragile. By interfering with natural complex systems, we affect the biosphere so much that it begins to lose its anti-fragility.
At the same time, man-made complicated systems are outgrowing the planet’s natural resources to the point where collapse becomes unavoidable.
Daniel explains that the necessary design criteria for a viable civilization which is not self-terminating are:
• Creating loop closure within complicated man-made systems
• Having the right relationship between complex natural and complicated man-made systems
• Creating anti-rivalrous environments within which exponential technology does not threaten our existence
Complex systems, which come from evolution, are anti-fragile. Complicated systems, which come from design, are fragile. - Daniel Schmachtenberger of @theneurohacker
The Relationship Between Choice and Causation
Daniel explains that adaptive capacity increases in groups, but only up to a point. After a certain point, adding more people starts having diminishing effects per capita. This results in people defecting against the system, because that’s where their incentives are. He proposes that we create new systems of collective intelligence and choice-making that can scale more effectively.
Science has given us a solid theory of causation. Through science, we have gained incredible technological power that magnifies the outcomes of our choices. We don’t have a similarly well-grounded theory of choice, an ethical framework to guide us through using our increased power. When it comes to ethics, science rejects all non-scientific efforts, such as religious ideas or morals. Instead, win-lose game theory has served as the default theory of choice in science. This has led to a dangerous myopia towards the existential risks that are generated from win-lose games.
It is necessary to address these ethical questions, especially given the level of existential risk we now face. We have to improve individual and collective choice-making to take everything into consideration and realize how we are interconnected with everything around us. “I” is not a separate entity, but an emergent property of the whole.
We need to have a theory of choice which relates choice and causation. The core of the solution, as Daniel explains, is coherence dynamics, which internalizes the external and includes it in the decision-making process.
It's not those with the most competitive advantage that make it through in the long run, but the most self-stabilizing ecosystems. - Daniel Schmachtenberger of @theneurohacker
The Path to a Post-Existential-Risk World
Daniel talks about the need for individuals and systems to have strength as opposed to power. Strength is not the ability to beat others, but the ability to maintain sovereignty in the presence of outside forces.
The path to the post-existential risk world is towards a civilization that is anti-rivalrous, anti-fragile and self-propagating. Ultimately, we have to create a world that has not only overcome today’s existential risks, but is also a world where humanity can thrive.
Do what is inherently right in your relationship with life independent of the outcome, because you really don’t know what the outcome will be. - Daniel Schmachtenberger of @theneurohacker
After trying Qualia ourselves, we decided to arrange a special deal for our listeners who also wanted to give it a try. When you get an ongoing subscription to Qualia, just use the code FUTURE to get 10% off.
In this episode of Future Thinkers:
• The generator functions of existential risks
• The impact of win-lose games, multiplied by exponential technology
• Win-lose games in the essence of capitalism, science and technology
• How to solve multi-polar traps
• How to replace rivalry with anti-rivalry
• The design criteria of an effective civilization
• The characteristics of complex and complicated systems
• Open-loop vs. closed-loop systems
• Scalable collective intelligence, sense-making and choice-making
• The relationship between choice and causation
• Natural and conditioned experiences
• The difference between power and strength
• The path to a post-existential-risk world
• How to increase our self sovereignty
• Why incentives are intrinsically evil
The pilot-wave dynamics of walking droplets By Daniel M. Harris & John W. M. Bush
The following video illustrates an alternative approach to Quantum Mechanics and the wave/particle duality.
Essentially, a wave created through the resonance of a particle and a medium guides/pilots the particle based on its oscillating interaction. The minimal delay between the creation of the resonance wave and the subsequent contact between particle and wave places the particle at a slight offset from the wave's high point, and subsequently the wave pushes or pilots the particle in a certain direction. As the final movement emerges from an iterative interaction, the resulting path for each particle emitted by a certain source is somewhat random but shows a probabilistic coherence in large numbers.
This view is an eye-opener for me, and I am very much interested in the limits and possibilities of explaining quantum strangeness this way, and whether this interpretation might become the mainstream theory one day.
Make sure to check Veritasium for some thoughts and explanation on the above video:
Also watch the amazing footage created by Bryce Parry and Josh Parker:
The theory behind this is called Bohmian Mechanics (de Broglie–Bohm theory). The following video explains the theory:
Further reading can be found here:
De Broglie–Bohm theory
The de Broglie–Bohm theory, also known as the pilot wave theory, Bohmian mechanics, Bohm's interpretation, and the causal interpretation, is an interpretation of quantum mechanics. In addition to a wavefunction on the space of all possible configurations, it also postulates an actual configuration that exists even when unobserved. The evolution over time of the configuration (that is, the positions of all particles or the configuration of all fields) is defined by a guiding equation that is the nonlocal part of the wave function. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992). The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of the system given by its wave function; the latter depends on the boundary conditions of the system, which, in principle, may be the entire universe. The theory results in a measurement formalism, analogous to thermodynamics for classical mechanics, that yields the standard quantum formalism generally associated with the Copenhagen...
Definition from Wikipedia – De Broglie–Bohm theory
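For readers who want to see the equations behind the description above, here is the standard textbook form of the guiding equation and the Schrödinger equation for \(N\) spinless particles. This is the usual presentation of de Broglie–Bohm theory, not something taken from the videos above:

\[
\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1,\dots,Q_N,t),
\qquad
i\hbar\,\frac{\partial \psi}{\partial t} = \left(-\sum_{k=1}^{N}\frac{\hbar^2}{2 m_k}\nabla_k^2 + V\right)\psi,
\]

where \(Q_k\) is the actual position of the \(k\)-th particle, \(m_k\) its mass, and \(\psi\) the wave function on configuration space. Each particle's velocity depends on the instantaneous configuration of all the particles, which is where the explicit nonlocality of the theory enters.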
Quantum Physics in Secondary School? How Some Teachers Capture Student Interest Early
For many people, the phrase “quantum physics” evokes images of science fiction-like technology, a vaguely puzzled sensation, or perhaps just a shudder. Yet for a growing number of secondary school teachers worldwide and their teenage students, quantum physics represents a gateway to a lifelong love of science.
Kirsten Stadermann is one such teacher. Although she originally intended to make a career of researching laser physics, she “accidentally” started teaching in Holland when a local school lost a teacher unexpectedly.
“It was such a great experience,” she recalls of her first few months on the job. “At the school I was smiling all day long… and I thought, well, that’s what I want to do.”
Stadermann started off scouring the literature for state and national standards that mention one or more topics generally regarded as modern or quantum physics*. Although she faced difficulties in finding readily accessible documents, which ultimately limited her study to mostly European countries, she analyzed the curricula of 15 countries that mention quantum physics—five more than had been previously studied. In addition, some of the countries (Germany, for example) set their educational standards on a state-by-state basis, so in total, she reviewed 23 different curriculum documents.
She noticed immediately that almost all the countries approach quantum physics as an elective or advanced option for students already studying physics—about 5%-20% of the overall population of 17- to 19-year-olds. She was surprised to find, however, that Einsteinian physics is a central component of the standard physics curriculum in Australia and the German state of Bavaria for students as young as 14 or 15. Despite some teachers’ concerns that younger students would be hopelessly befuddled by such complex topics, research shows that they are actually quite capable of grasping the key concepts. In fact, the mind-bending aspect of physics served to increase student interest in the subject—particularly among girls.
This meshes with Stadermann’s own experience in the classroom. As she explains, high schoolers don’t have the foundation in math that is required for quantum calculations, so teachers are forced to discuss the concepts in qualitative terms. This frequently leads to broader speculations on the philosophical implications as students grapple with such seemingly impossible ideas as wave-particle duality or the Heisenberg uncertainty principle.
Stadermann recalls, “[At first] I was a little bit afraid that in the end they couldn’t pass the exams… but the funny part is that these classes—where we had these discussions—did much better than the other classes.”
Artist's rendering of quantum entanglement
One of the biggest challenges faced by physics teachers lies in the fact that quantum physics is anything but straightforward. Indeed, it’s the very fact that there is no one “right” interpretation that excites many students about the subject. “It’s very important to show them…that everybody understands that nobody understands it,” she says. Not only does this assuage the students’ anxieties when faced with difficult concepts, it illustrates for the students the very nature of science itself.
Scientific understanding is, after all, anything but uniform and static. For centuries, individuals have grappled with confusing observations, argued adamantly amongst themselves, and worked together to synthesize their many insights into a set of overarching theories. And as students discuss the merits and implications of various models, isn’t that what they experience on a smaller scale? Stadermann hopes that by engaging more fully with the nature of science in the classroom, students will be less susceptible to the distrust that all too often follows scientists. In particular, she mentions the fact that many people are wary of climate scientists because of the disagreements that occasionally erupt between them. “[My students] can understand that that’s normal, not a bad thing,” she says optimistically.
Naturally, there are reasons that quantum physics hasn’t been adopted as standard curriculum by droves of countries. For one, every hour spent on quantum physics is one hour that isn’t spent on another topic. For another, quantum physics doesn’t lend itself to the inexpensive and simple lab experiments high school teachers usually rely on for concrete experience. Finally, how can you test a topic where there are no right answers?
Stadermann readily agrees that it is difficult to choose a topic to abandon in favor of quantum physics. In Holland, the school board caused a controversy when it elected to cut optics, a subject critical to the understanding of everything from glasses to telescopes. That doesn’t necessarily mean that the students understand less of the world, though. In fact, they can develop a greater appreciation of modern electronics and explore cutting-edge technologies like quantum computers—arguably even more important to a young person preparing for a 21st-century career.
While it is also true that quantum physics doesn’t lend itself easily to lab experiments, students aren’t automatically devoid of the laboratory experience. Stadermann and countless other teachers have found the PhET simulations put out by the University of Colorado to be invaluable. These free applications can run on nearly any computer—no expensive lab equipment needed—and provide the students with a way to tweak and test to their hearts’ content. Stadermann cautions only that the students must be encouraged to form their own explanations and discuss amongst themselves for the simulations to be truly useful.
“Talking about interpretations from Bohr and Einstein doesn’t really help if the students are not allowed to think themselves,” she warns.
A PhET simulation designed to teach students about the photoelectric effect. Credit: PhET
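As a rough illustration of the relationship students explore in that simulation, here is a minimal Python sketch of the photoelectric-effect calculation, assuming the standard relation \(E_k = hf - \phi\). The function name and the sodium work-function value are illustrative assumptions, not taken from PhET:

```python
# Photoelectric effect: maximum kinetic energy of ejected electrons.
# Electrons are only ejected when the photon energy h*f exceeds the work function phi.

PLANCK_H = 6.626e-34        # Planck constant, J*s
ELECTRON_VOLT = 1.602e-19   # joules per eV
SPEED_OF_LIGHT = 2.998e8    # m/s

def max_kinetic_energy_eV(wavelength_nm: float, work_function_eV: float) -> float:
    """Return the maximum kinetic energy (eV) of photoelectrons, or 0.0 if none are ejected."""
    frequency = SPEED_OF_LIGHT / (wavelength_nm * 1e-9)          # Hz
    photon_energy_eV = PLANCK_H * frequency / ELECTRON_VOLT      # eV
    return max(0.0, photon_energy_eV - work_function_eV)

# Example: sodium (work function ~2.28 eV, an assumed textbook value) under 400 nm light.
print(max_kinetic_energy_eV(400, 2.28))   # ~0.82 eV; below-threshold light gives 0.0
```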
The question of testing quantum physics in secondary school is more difficult. Stadermann has used both multiple-choice questions and in-depth oral examinations to study the effectiveness of her teaching, and they tell drastically different stories. In the multiple-choice tests, her students performed poorly overall, averaging around 7 correct answers out of 20. When she actually spoke with them, though, it soon became clear that they had a broad understanding ranging over a variety of interpretations—and what’s more, they enjoyed delving into deep philosophical questions. Even more importantly in her mind, they showed a good comprehension of the nature of science. “The question is, what are we actually testing with the multiple choice tests?” she asks.
Teachers of many other subjects, like literature, already rely on open-ended or essay-based exam questions for this very reason. “But physics teachers always want a yes and no answer so the whole culture of physics is different,” Stadermann says. In this sense, physics teachers are used to comparing their classes’ completed exams against an answer key, and the habit dies hard.
Even so, Stadermann believes there is value in teaching quantum physics even if it is not always satisfactorily tested in final exams—and that’s the benefit it brings to the students. They learn to cope with competing perspectives and enjoy the philosophical and technological implications of quantum physics. Perhaps most telling is the fact that she typically has five students per year who go on to study physics, when most classes produce just one. It’s not the study of projectile motion that captures her students’ interest—it’s the parallel universes, quantum tunneling, and semiconductors that they simply can’t get enough of.
The math can come later.
–Eleanor Hook
*The complete list of topics considered by this study are as follows:
Blackbody radiation
Bohr atomic model
Discrete energy levels (line spectra)
Interactions between light and matter
Wave-particle duality/complementarity
Matter waves, quantitative (de Broglie)
Technical applications
Heisenberg’s uncertainty principle
Probabilistic/statistical predictions
Philosophical consequences/interpretations
One dimensional model/potential well
Atomic orbital model
Exclusion principle/periodic table
Schrödinger equation
Calculations of detection probability
History of chemistry
The 1871 periodic table constructed by Dmitri Mendeleev. The periodic table is one of the most potent icons in science, lying at the core of chemistry and embodying the most fundamental principles of the field.
The history of chemistry represents a time span from ancient history to the present. By 1000 BC, civilizations used technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.
The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and its transformations. However, by performing experiments and recording the results, alchemists set the stage for modern chemistry. The distinction began to emerge when a clear differentiation was made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, chemists are seen as applying scientific method to their work.
Ancient history
Early Metallurgy
Silver, copper, tin and meteoric iron can also be found native, allowing a limited amount of metalworking in ancient cultures.[3] Egyptian weapons made from meteoric iron in about 3000 BC were highly prized as "Daggers from Heaven".[4]
Arguably the first chemical reaction used in a controlled manner was fire. However, for millennia fire was seen simply as a mystical force that could transform one substance into another (burning wood, or boiling water) while producing heat and light. Fire affected many aspects of early societies. These ranged from the simplest facets of everyday life, such as cooking and habitat lighting, to more advanced technologies, such as pottery, bricks, and melting of metals to make tools.
It was fire that led to the discovery of glass and the purification of metals which in turn gave way to the rise of metallurgy.[citation needed] During the early stages of metallurgy, methods of purification of metals were sought, and gold, known in ancient Egypt as early as 2900 BC, became a precious metal.
Bronze Age
Main article: Bronze Age
Certain metals can be recovered from their ores by simply heating the rocks in a fire: notably tin, lead and (at a higher temperature) copper, a process known as smelting. The first evidence of this extractive metallurgy dates from the 5th and 6th millennium BC, and was found in the archaeological sites of Majdanpek, Yarmovac and Plocnik, all three in Serbia. To date, the earliest copper smelting is found at the Belovode site;[5] finds there include a copper axe from 5500 BC belonging to the Vinča culture.[6] Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). However, as often happens with the study of prehistoric times, the ultimate beginnings cannot be clearly defined and new discoveries are continuous and ongoing.
Metal production in the Ancient Middle East.
These first metals were used singly, as found. By combining copper and tin, a superior metal could be made, an alloy called bronze, a major technological shift which began the Bronze Age about 3500 BC. The Bronze Age was a period in human cultural development when the most advanced metalworking (at least in systematic and widespread use) included techniques for smelting copper and tin from naturally occurring outcroppings of copper ores, and then smelting those ores to cast bronze. These naturally occurring ores typically included arsenic as a common impurity. Copper/tin ores are rare, as reflected in the fact that there were no tin bronzes in western Asia before 3000 BC.
After the Bronze Age, the history of metallurgy was marked by armies seeking better weaponry. Countries in Eurasia prospered when they made the superior alloys, which, in turn, made better armor and better weapons.[citation needed] This often determined the outcomes of battles.[citation needed] Significant progress in metallurgy and alchemy was made in ancient India.[7]
Iron Age
Main article: Iron Age
The extraction of iron from its ore into a workable metal is much more difficult than copper or tin. It appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines.[4][8]
Classical antiquity and atomism
Main article: Atomism
Democritus, Greek philosopher of atomistic school.
Philosophical attempts to rationalize why different substances have different properties (color, density, smell), exist in different states (gaseous, liquid, and solid), and react in a different manner when exposed to environments, for example to water or fire or temperature changes, led ancient philosophers to postulate the first theories on nature and chemistry. The history of such philosophical theories that relate to chemistry can probably be traced back to every single ancient civilization. The common aspect in all these theories was the attempt to identify a small number of primary classical elements that make up all the various substances in nature. Substances like air, water, and soil/earth, energy forms, such as fire and light, and more abstract concepts such as ideas, aether, and heaven, were common in ancient civilizations even in absence of any cross-fertilization; for example, Greek, Indian, Mayan, and ancient Chinese philosophies all considered air, water, earth and fire as primary elements.[citation needed]
Ancient World
Around 420 BC, Empedocles stated that all matter is made up of four elemental substances—earth, fire, air and water. The early theory of atomism can be traced back to ancient Greece and ancient India.[11] Greek atomism dates back to the Greek philosopher Democritus, who declared that matter is composed of indivisible and indestructible atoms around 380 BC. Leucippus also declared that atoms were the most indivisible part of matter. This coincided with a similar declaration by Indian philosopher Kanada in his Vaisheshika sutras around the same time period.[11] In much the same fashion he discussed the existence of gases. What Kanada declared by sutra, Democritus declared by philosophical musing. Both suffered from a lack of empirical data. Without scientific proof, the existence of atoms was easy to deny. Aristotle opposed the existence of atoms in 330 BC. Earlier, in 380 BC, a Greek text attributed to Polybus argues that the human body is composed of four humours. Around 300 BC, Epicurus postulated a universe of indestructible atoms in which man himself is responsible for achieving a balanced life.
With the goal of explaining Epicurean philosophy to a Roman audience, the Roman poet and philosopher Lucretius[12] wrote De Rerum Natura (The Nature of Things)[13] in 50 BC. In the work, Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena.
Much of the early development of purification methods is described by Pliny the Elder in his Naturalis Historia. He made attempts to explain those methods, as well as making acute observations of the state of many minerals.
Medieval alchemy
See also: Minima naturalia, a medieval Aristotelian concept analogous to atomism
The elemental system used in Medieval alchemy was developed primarily by the Persian alchemist Jābir ibn Hayyān and rooted in the classical elements of Greek tradition.[14] His system consisted of the four Aristotelian elements of air, earth, fire, and water in addition to two philosophical elements: sulphur, characterizing the principle of combustibility, "the stone which burns", and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe[15] and are of larger consideration within philosophical alchemy.
The three metallic principles (sulphur for flammability or combustion, mercury for volatility and stability, and salt for solidity) became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four-element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).[16]
The philosopher's stone
Main article: Alchemy
"The Alchemist", by Sir William Douglas, 1855
Alchemy is defined by the Hermetic quest for the philosopher's stone, the study of which is steeped in symbolic mysticism, and differs greatly from modern science. Alchemists toiled to make transformations on an esoteric (spiritual) and/or exoteric (practical) level.[17] It was the protoscientific, exoteric aspects of alchemy that contributed heavily to the evolution of chemistry in Greco-Roman Egypt, the Islamic Golden Age, and then in Europe. Alchemy and chemistry share an interest in the composition and properties of matter, and prior to the eighteenth century were not separated into distinct disciplines. The term chymistry has been used to describe the blend of alchemy and chemistry that existed before this time.[18]
The earliest Western alchemists, who lived in the first centuries of the common era, invented chemical apparatus. The bain-marie, or water bath is named for Mary the Jewess. Her work also gives the first descriptions of the tribikos and kerotakis.[19] Cleopatra the Alchemist described furnaces and has been credited with the invention of the alembic.[20] Later, the experimental framework established by Jabir ibn Hayyan influenced alchemists as the discipline migrated through the Islamic world, then to Europe in the twelfth century.
During the Renaissance, exoteric alchemy remained popular in the form of Paracelsian iatrochemistry, while spiritual alchemy flourished, realigned to its Platonic, Hermetic, and Gnostic roots. Consequently, the symbolic quest for the philosopher's stone was not superseded by scientific advances, and was still the domain of respected scientists and doctors until the early eighteenth century. Early modern alchemists who are renowned for their scientific contributions include Jan Baptist van Helmont, Robert Boyle, and Isaac Newton.
Problems encountered with alchemy
There were several problems with alchemy, as seen from today's standpoint. There was no systematic naming system for new compounds, and the language was esoteric and vague to the point that the terminologies meant different things to different people. In fact, according to The Fontana History of Chemistry (Brock, 1992):
The language of alchemy soon developed an arcane and secretive technical vocabulary designed to conceal information from the uninitiated. To a large degree, this language is incomprehensible to us today, though it is apparent that readers of Geoffrey Chaucer's Canon's Yeoman's Tale or audiences of Ben Jonson's The Alchemist were able to construe it sufficiently to laugh at it.[21]
Chaucer's tale exposed the more fraudulent side of alchemy, especially the manufacture of counterfeit gold from cheap substances. Less than a century earlier, Dante Alighieri also demonstrated an awareness of this fraudulence, causing him to consign all alchemists to the Inferno in his writings. Soon after, in 1317, the Avignon Pope John XXII ordered all alchemists to leave France for making counterfeit money. A law was passed in England in 1403 which made the "multiplication of metals" punishable by death. Despite these and other apparently extreme measures, alchemy did not die. Royalty and privileged classes still sought to discover the philosopher's stone and the elixir of life for themselves.[22]
There was also no agreed-upon scientific method for making experiments reproducible. Indeed many alchemists included in their methods irrelevant information such as the timing of the tides or the phases of the moon. The esoteric nature and codified vocabulary of alchemy appeared to be more useful in concealing the fact that they could not be sure of very much at all. As early as the 14th century, cracks seemed to grow in the facade of alchemy; and people became sceptical.[citation needed] Clearly, there needed to be a scientific method where experiments can be repeated by other people, and results needed to be reported in a clear language that laid out both what is known and unknown.
Alchemy in the Islamic World
Jābir ibn Hayyān (Geber), a Persian alchemist whose experimental research laid the foundations of chemistry.
In the Islamic World, the Muslims were translating the works of the ancient Greeks and Egyptians into Arabic and were experimenting with scientific ideas.[23] The development of the modern scientific method was slow and arduous, but an early scientific method for chemistry began emerging among early Muslim chemists, beginning with the 9th century chemist Jābir ibn Hayyān (known as "Geber" in Europe), who is considered "the father of chemistry".[24][25][26][27] He introduced a systematic and experimental approach to scientific research based in the laboratory, in contrast to the ancient Greek and Egyptian alchemists whose works were largely allegorical and often unintelligible.[28] He also invented and named the alembic (al-anbiq), chemically analyzed many chemical substances, composed lapidaries, distinguished between alkalis and acids, and manufactured hundreds of drugs.[29] He also refined the theory of five classical elements into the theory of seven alchemical elements after identifying mercury and sulfur as chemical elements.[30][verification needed]
Among other influential Muslim chemists, Abū al-Rayhān al-Bīrūnī,[31] Avicenna[32] and Al-Kindi refuted the theories of alchemy, particularly the theory of the transmutation of metals; and al-Tusi described a version of the conservation of mass, noting that a body of matter is able to change but is not able to disappear.[33] Rhazes refuted Aristotle's theory of four classical elements for the first time and set up the firm foundations of modern chemistry, using the laboratory in the modern sense, designing and describing more than twenty instruments, many parts of which are still in use today, such as a crucible, cucurbit or retort for distillation, and the head of a still with a delivery tube (ambiq, Latin alembic), and various types of furnace or stove.[citation needed]
For practitioners in Europe, alchemy became an intellectual pursuit after early Arabic alchemy became available through Latin translation, and over time, they improved. Paracelsus (1493–1541), for example, rejected the 4-elemental theory and with only a vague understanding of his chemicals and medicines, formed a hybrid of alchemy and science in what was to be called iatrochemistry. Paracelsus was not perfect in making his experiments truly scientific. For example, as an extension of his theory that new compounds could be made by combining mercury with sulfur, he once made what he thought was "oil of sulfur". This was actually dimethyl ether, which had neither mercury nor sulfur.[citation needed]
17th and 18th centuries: Early chemistry
Agricola, author of De re metallica
Practical attempts to improve the refining of ores and their extraction to smelt metals was an important source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his great work De re metallica in 1556. His work describes the highly developed and complex processes of mining metal ores, metal extraction and metallurgy of the time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. It is no coincidence that he gives numerous references to the earlier author, Pliny the Elder and his Naturalis Historia. Agricola has been described as the "father of metallurgy".[34]
In 1605, Sir Francis Bacon published The Proficience and Advancement of Learning, which contains a description of what would later be known as the scientific method.[35] In 1605, Michal Sedziwój published the alchemical treatise A New Light of Alchemy which proposed the existence of the "food of life" within air, much later recognized as oxygen. In 1615 Jean Beguin published the Tyrocinium Chymicum, an early chemistry textbook, and in it drew the first-ever chemical equation.[36] In 1637 René Descartes published Discours de la méthode, which contains an outline of the scientific method.
The Dutch chemist Jan Baptist van Helmont's work Ortus medicinae was published posthumously in 1648; the book is cited by some as a major transitional work between alchemy and chemistry, and as an important influence on Robert Boyle. The book contains the results of numerous experiments and establishes an early version of the law of conservation of mass. Working during the time just after Paracelsus and iatrochemistry, Jan Baptist van Helmont suggested that there are insubstantial substances other than air and coined a name for them - "gas", from the Greek word chaos. In addition to introducing the word "gas" into the vocabulary of scientists, van Helmont conducted several experiments involving gases. Jan Baptist van Helmont is also remembered today largely for his ideas on spontaneous generation and his 5-year tree experiment, as well as being considered the founder of pneumatic chemistry.
Robert Boyle
Robert Boyle, one of the co-founders of modern chemistry through his use of proper experimentation, which further separated chemistry from alchemy
Irish chemist Robert Boyle (1627–1691) is considered to have refined the modern scientific method for alchemy and to have separated chemistry further from alchemy.[37] Although his research clearly has its roots in the alchemical tradition, Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. Although Boyle was not the original discoverer, he is best known for Boyle's law, which he presented in 1662:[38] the law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.[39][40]
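In modern notation, which postdates Boyle himself, the law for a fixed amount of gas at constant temperature is usually written as

\[
pV = k \quad\text{or equivalently}\quad p_1 V_1 = p_2 V_2 ,
\]

where \(p\) is the absolute pressure, \(V\) the volume, and \(k\) a constant for the given sample and temperature. Halving the volume of the trapped gas, for example, doubles its pressure.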
Boyle also tried to purify chemicals to obtain reproducible reactions. He was a vocal proponent of the mechanical philosophy proposed by René Descartes to explain and quantify the physical properties and interactions of material substances. Boyle was an atomist, but favoured the word corpuscle over atoms. He commented that the finest division of matter where the properties are retained is at the level of corpuscles. He also performed numerous investigations with an air pump, and noted that the mercury fell as air was pumped out. He also observed that pumping the air out of a container would extinguish a flame and kill small animals placed inside, as well as causing the level of a barometer to drop. Boyle helped to lay the foundations for the Chemical Revolution with his mechanical corpuscular philosophy.[41] Boyle repeated the tree experiment of van Helmont, and was the first to use indicators which changed colors with acidity.
Development and dismantling of phlogiston
Joseph Priestley, co-discoverer of the element oxygen, which he called "dephlogisticated air"
In 1702, German chemist Georg Stahl coined the name "phlogiston" for the substance believed to be released in the process of burning. Around 1735, Swedish chemist Georg Brandt analyzed a dark blue pigment found in copper ore. Brandt demonstrated that the pigment contained a new element, later named cobalt. In 1751, a Swedish chemist and pupil of Stahl's named Axel Fredrik Cronstedt, identified an impurity in copper ore as a separate metallic element, which he named nickel. Cronstedt is one of the founders of modern mineralogy.[42] Cronstedt also discovered the mineral scheelite in 1751, which he named tungsten, meaning "heavy stone" in Swedish.
In 1754, Scottish chemist Joseph Black isolated carbon dioxide, which he called "fixed air".[43] In 1757, Louis Claude Cadet de Gassicourt, while investigating arsenic compounds, creates Cadet's fuming liquid, later discovered to be cacodyl oxide, considered to be the first synthetic organometallic compound.[44] In 1758, Joseph Black formulated the concept of latent heat to explain the thermochemistry of phase changes.[45] In 1766, English chemist Henry Cavendish isolated hydrogen, which he called "inflammable air". Cavendish discovered hydrogen as a colorless, odourless gas that burns and can form an explosive mixture with air, and published a paper on the production of water by burning inflammable air (that is, hydrogen) in dephlogisticated air (now known to be oxygen), the latter a constituent of atmospheric air (phlogiston theory).
In 1773, Swedish chemist Carl Wilhelm Scheele discovered oxygen, which he called "fire air", but did not immediately publish his achievement.[46] In 1774, English chemist Joseph Priestley independently isolated oxygen in its gaseous state, calling it "dephlogisticated air", and published his work before Scheele.[47][48] During his lifetime, Priestley's considerable scientific reputation rested on his invention of soda water, his writings on electricity, and his discovery of several "airs" (gases), the most famous being what Priestley dubbed "dephlogisticated air" (oxygen). However, Priestley's determination to defend phlogiston theory and to reject what would become the chemical revolution eventually left him isolated within the scientific community.
In 1781, Carl Wilhelm Scheele discovered that a new acid, tungstic acid, could be made from Cronstedt's scheelite (at the time named tungsten). Scheele and Torbern Bergman suggested that it might be possible to obtain a new metal by reducing this acid.[49] In 1783, José and Fausto Elhuyar found an acid made from wolframite that was identical to tungstic acid. Later that year, in Spain, the brothers succeeded in isolating the metal now known as tungsten by reduction of this acid with charcoal, and they are credited with the discovery of the element.[50][51]
Volta and the Voltaic Pile
A voltaic pile on display in the Tempio Voltiano (the Volta Temple) near Volta's home in Como.
Italian physicist Alessandro Volta constructed a device for accumulating a large charge by a series of inductions and groundings. He investigated the 1780s discovery "animal electricity" by Luigi Galvani, and found that the electric current was generated from the contact of dissimilar metals, and that the frog leg was only acting as a detector. Volta demonstrated in 1794 that when two metals and brine-soaked cloth or cardboard are arranged in a circuit they produce an electric current.
In 1800, Volta stacked several pairs of alternating copper (or silver) and zinc discs (electrodes) separated by cloth or cardboard soaked in brine (electrolyte) to increase the electrolyte conductivity.[52] When the top and bottom contacts were connected by a wire, an electric current flowed through the voltaic pile and the connecting wire. Thus, Volta is credited with constructing the first electrical battery to produce electricity. Volta's method of stacking round plates of copper and zinc separated by disks of cardboard moistened with salt solution was termed a voltaic pile.
Thus, Volta is considered to be the founder of the discipline of electrochemistry.[53] A Galvanic cell (or voltaic cell) is an electrochemical cell that derives electrical energy from a spontaneous redox reaction taking place within the cell. It generally consists of two different metals connected by a salt bridge, or individual half-cells separated by a porous membrane.
Antoine-Laurent de Lavoisier
Although the archives of chemical research draw upon work from ancient Babylonia, Egypt, and especially the Arabs and Persians after Islam, modern chemistry flourished from the time of Antoine-Laurent de Lavoisier, a French chemist who is celebrated as the "father of modern chemistry". Lavoisier demonstrated with careful measurements that transmutation of water to earth was not possible, but that the sediment observed from boiling water came from the container. He burnt phosphorus and sulfur in air, and proved that the products weighed more than the original samples; the weight gained by the products was lost from the air. Thus, in 1789, he established the Law of Conservation of Mass, which is also called "Lavoisier's Law."[54]
The world's first ice-calorimeter was used in the winter of 1782-83 by Antoine Lavoisier and Pierre-Simon Laplace to determine the heat involved in various chemical changes, calculations which were based on Joseph Black's prior discovery of latent heat. These experiments mark the foundation of thermochemistry.
Repeating the experiments of Priestley, he demonstrated that air is composed of two parts, one of which combines with metals to form calxes. In Considérations Générales sur la Nature des Acides (1778), he demonstrated that the "air" responsible for combustion was also the source of acidity. The next year, he named this portion oxygen (Greek for acid-former), and the other azote (Greek for no life). Lavoisier thus has a claim to the discovery of oxygen along with Priestley and Scheele. He also discovered that the "inflammable air" discovered by Cavendish - which he termed hydrogen (Greek for water-former) - combined with oxygen to produce a dew, as Priestley had reported, which appeared to be water. In Reflexions sur le Phlogistique (1783), Lavoisier showed the phlogiston theory of combustion to be inconsistent. Mikhail Lomonosov independently established a tradition of chemistry in Russia in the 18th century. Lomonosov also rejected the phlogiston theory, and anticipated the kinetic theory of gases. Lomonosov regarded heat as a form of motion, and stated the idea of conservation of matter.
Lavoisier worked with Claude Louis Berthollet and others to devise a system of chemical nomenclature which serves as the basis of the modern system of naming chemical compounds. In his Methods of Chemical Nomenclature (1787), Lavoisier invented the system of naming and classification still largely in use today, including names such as sulfuric acid, sulfates, and sulfites. In 1785, Berthollet was the first to introduce the use of chlorine gas as a commercial bleach. In the same year he first determined the elemental composition of the gas ammonia. Berthollet first produced a modern bleaching liquid in 1789 by passing chlorine gas through a solution of sodium carbonate - the result was a weak solution of sodium hypochlorite. Another strong chlorine oxidant and bleach which he investigated and was the first to produce, potassium chlorate (KClO3), is known as Berthollet's Salt. Berthollet is also known for his scientific contributions to the theory of chemical equilibria via the mechanism of reverse chemical reactions.
While many of Lavoisier's partners were influential for the advancement of chemistry as a scientific discipline, his wife Marie-Anne Lavoisier was arguably the most influential of them all. Upon their marriage, Mme. Lavoisier began to study chemistry, English, and drawing in order to help her husband in his work, either by translating papers into English, a language which Lavoisier did not know, or by keeping records and drawing the various apparatuses that Lavoisier used in his labs.[55] Through her ability to read and translate articles from Britain for her husband, Lavoisier had access to knowledge from many of the chemical advances happening outside of his lab.[56] Furthermore, Mme. Lavoisier kept records of Lavoisier's work and ensured that his works were published.[57] The first sign of Marie-Anne's true potential as a chemist in Lavoisier's lab came when she was translating a book by the scientist Richard Kirwan. While translating, she stumbled upon and corrected multiple errors, which she presented, along with her notes, to Lavoisier.[58] Her edits and contributions led to Lavoisier's refutation of the theory of phlogiston.
Lavoisier made many fundamental contributions to the science of chemistry. Following Lavoisier's work, chemistry acquired a strict quantitative nature, allowing reliable predictions to be made. The revolution in chemistry which he brought about was a result of a conscious effort to fit all experiments into the framework of a single theory. He established the consistent use of chemical balance, used oxygen to overthrow the phlogiston theory, and developed a new system of chemical nomenclature. Lavoisier was beheaded during the French Revolution.
19th century
In 1802, French American chemist and industrialist Éleuthère Irénée du Pont, who had learned the manufacture of gunpowder and explosives under Antoine Lavoisier, founded a gunpowder manufacturer in Delaware known as E. I. du Pont de Nemours and Company. The French Revolution forced his family to move to the United States where du Pont started a gunpowder mill on the Brandywine River in Delaware. Wanting to make the best powder possible, du Pont was vigilant about the quality of the materials he used. For 32 years, du Pont served as president of E. I. du Pont de Nemours and Company, which eventually grew into one of the largest and most successful companies in America.
Throughout the 19th century, chemistry was divided between those who followed the atomic theory of John Dalton and those who did not, such as Wilhelm Ostwald and Ernst Mach.[59] Although such proponents of the atomic theory as Amedeo Avogadro and Ludwig Boltzmann made great advances in explaining the behavior of gases, this dispute was not finally settled until Jean Perrin's experimental investigation of Einstein's atomic explanation of Brownian motion in the first decade of the 20th century.[59]
Well before the dispute had been settled, many had already applied the concept of atomism to chemistry. A major example was the ion theory of Svante Arrhenius which anticipated ideas about atomic substructure that did not fully develop until the 20th century. Michael Faraday was another early worker, whose major contribution to chemistry was electrochemistry, in which (among other things) a certain quantity of electricity during electrolysis or electrodeposition of metals was shown to be associated with certain quantities of chemical elements, and fixed quantities of the elements therefore with each other, in specific ratios.[citation needed] These findings, like those of Dalton's combining ratios, were early clues to the atomic nature of matter.
John Dalton
John Dalton is remembered for his work on partial pressures in gases, color blindness, and atomic theory
Main articles: John Dalton and Atomic theory
In 1803, English meteorologist and chemist John Dalton proposed Dalton's law, which describes the relationship between the components in a mixture of gases and the relative pressure each contributes to that of the overall mixture.[60] Discovered in 1801, this concept is also known as Dalton's law of partial pressures.
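As a brief numerical aside (a sketch only; the gas mixture and pressures below are hypothetical values chosen for illustration, not Dalton's own measurements), the law states that the total pressure of a gas mixture is simply the sum of its components' partial pressures:

```python
# Dalton's law of partial pressures: the total pressure of a gas mixture
# equals the sum of the pressures each component would exert on its own.
# The values below are illustrative, not historical data.

partial_pressures_kpa = {
    "nitrogen": 79.0,
    "oxygen": 21.2,
    "argon": 0.9,
}

total_pressure = sum(partial_pressures_kpa.values())
mole_fractions = {gas: p / total_pressure for gas, p in partial_pressures_kpa.items()}

print(f"Total pressure: {total_pressure:.1f} kPa")
for gas, x in mole_fractions.items():
    print(f"  {gas}: mole fraction {x:.3f}")
```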
Dalton also proposed a modern atomic theory in 1803 which stated that all matter was composed of small indivisible particles termed atoms, atoms of a given element possess unique characteristics and weight, and three types of atoms exist: simple (elements), compound (simple molecules), and complex (complex molecules). In 1808, Dalton first published New System of Chemical Philosophy (1808-1827), in which he outlined the first modern scientific description of the atomic theory. This work identified chemical elements as a specific type of atom, therefore rejecting Newton's theory of chemical affinities.
Instead, Dalton inferred proportions of elements in compounds by taking ratios of the weights of reactants, setting the atomic weight of hydrogen to be identically one. Following Jeremias Benjamin Richter (known for introducing the term stoichiometry), he proposed that chemical elements combine in integral ratios. This is known as the law of multiple proportions or Dalton's law, and Dalton included a clear description of the law in his New System of Chemical Philosophy. The law of multiple proportions is one of the basic laws of stoichiometry used to establish the atomic theory. Despite the importance of the work as the first view of atoms as physically real entities and introduction of a system of chemical symbols, New System of Chemical Philosophy devoted almost as much space to the caloric theory as to atomism.
French chemist Joseph Proust proposed the law of definite proportions, which states that a chemical compound always contains its component elements in a fixed ratio by mass, based on several experiments conducted between 1797 and 1804.[61] Along with the law of multiple proportions, the law of definite proportions forms the basis of stoichiometry. The law of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.
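A minimal sketch of these two stoichiometric laws, using rounded modern atomic weights (which were, of course, not available to Proust or Dalton), is shown below:

```python
# Definite and multiple proportions illustrated with rounded modern atomic
# weights; the numbers are approximations used only for illustration.

ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "O": 15.999}

# Definite proportions: water always contains oxygen and hydrogen in the
# same mass ratio, whatever the sample.
o_to_h_in_water = ATOMIC_WEIGHT["O"] / (2 * ATOMIC_WEIGHT["H"])
print(f"O:H mass ratio in water: {o_to_h_in_water:.2f} : 1")  # about 7.94 : 1

# Multiple proportions: the masses of oxygen combining with a fixed mass of
# carbon in CO and CO2 stand in a small whole-number ratio (1 : 2).
o_per_c_in_co = ATOMIC_WEIGHT["O"] / ATOMIC_WEIGHT["C"]
o_per_c_in_co2 = 2 * ATOMIC_WEIGHT["O"] / ATOMIC_WEIGHT["C"]
print(f"Oxygen mass ratio, CO2 vs CO: {o_per_c_in_co2 / o_per_c_in_co:.0f} : 1")
```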
Jöns Jacob Berzelius
Jöns Jacob Berzelius, the chemist who worked out the modern technique of chemical formula notation and is considered one of the fathers of modern chemistry
Main article: Jöns Jacob Berzelius
A Swedish chemist and disciple of Dalton, Jöns Jacob Berzelius embarked on a systematic program to try to make accurate and precise quantitative measurements and ensure the purity of chemicals. Along with Lavoisier, Boyle, and Dalton, Berzelius is known as the father of modern chemistry. In 1828 he compiled a table of relative atomic weights, where oxygen was set to 100, and which included all of the elements known at the time. This work provided evidence in favor of Dalton's atomic theory: that inorganic chemical compounds are composed of atoms combined in whole number amounts. He determined the exact elementary constituents of large numbers of compounds. The results strongly confirmed Proust's Law of Definite Proportions. In his weights, he used oxygen as a standard, setting its weight equal to exactly 100. He also measured the weights of 43 elements. In discovering that atomic weights are not integer multiples of the weight of hydrogen, Berzelius also disproved Prout's hypothesis that elements are built up from atoms of hydrogen.
Motivated by his extensive atomic weight determinations and in a desire to aid his experiments, he introduced the classical system of chemical symbols and notation with his 1808 publication of Lärbok i Kemien, in which elements are abbreviated by one or two letters to make a distinct abbreviation from their Latin name. This system of chemical notation—in which the elements were given simple written labels, such as O for oxygen, or Fe for iron, with proportions noted by numbers—is the same basic system used today. The only difference is that instead of the subscript number used today (e.g., H₂O), Berzelius used a superscript (H²O). Berzelius is credited with identifying the chemical elements silicon, selenium, thorium, and cerium. Students working in Berzelius's laboratory also discovered lithium and vanadium.
Berzelius developed the radical theory of chemical combination, which holds that reactions occur as stable groups of atoms called radicals are exchanged between molecules. He believed that salts are compounds of acids and bases, and discovered that the anions in acids would be attracted to a positive electrode (the anode), whereas the cations in a base would be attracted to a negative electrode (the cathode). Berzelius did not believe in the Vitalism Theory, but instead in a regulative force which produced organization of tissues in an organism. Berzelius is also credited with originating the chemical terms "catalysis", "polymer", "isomer", and "allotrope", although his original definitions differ dramatically from modern usage. For example, he coined the term "polymer" in 1833 to describe organic compounds which shared identical empirical formulas but which differed in overall molecular weight, the larger of the compounds being described as "polymers" of the smallest. By this long-superseded, pre-structural definition, glucose (C6H12O6) was viewed as a polymer of formaldehyde (CH2O).
New elements and gas laws
Humphry Davy, the discoverer of several alkali and alkaline earth metals, who also contributed to the discoveries of the elemental nature of chlorine and iodine.
Main article: Humphry Davy
English chemist Humphry Davy was a pioneer in the field of electrolysis, using Alessandro Volta's voltaic pile to split up common compounds and thus isolate a series of new elements. He went on to electrolyse molten salts and discovered several new metals, especially sodium and potassium, highly reactive elements known as the alkali metals. Potassium, the first metal that was isolated by electrolysis, was discovered in 1807 by Davy, who derived it from caustic potash (KOH). Before the 19th century, no distinction was made between potassium and sodium. Sodium was first isolated by Davy in the same year by passing an electric current through molten sodium hydroxide (NaOH). When Davy heard that Berzelius and Pontin prepared calcium amalgam by electrolyzing lime in mercury, he tried it himself. Davy was successful, and discovered calcium in 1808 by electrolyzing a mixture of lime and mercuric oxide.[62][63] He worked with electrolysis throughout his life and, in 1808, he isolated magnesium, strontium[64] and barium.[65]
Davy also experimented with gases by inhaling them. This experimental procedure nearly proved fatal on several occasions, but led to the discovery of the unusual effects of nitrous oxide, which came to be known as laughing gas. Chlorine was discovered in 1774 by Swedish chemist Carl Wilhelm Scheele, who called it "dephlogisticated marine acid" (see phlogiston theory) and mistakenly thought it contained oxygen. Scheele observed several properties of chlorine gas, such as its bleaching effect on litmus, its deadly effect on insects, its yellow-green colour, and the similarity of its smell to that of aqua regia. However, Scheele was unable to publish his findings at the time. In 1810, chlorine was given its current name by Humphry Davy (derived from the Greek word for green), who insisted that chlorine was in fact an element.[66] He also showed that oxygen could not be obtained from the substance known as oxymuriatic acid (HCl solution). This discovery overturned Lavoisier's definition of acids as compounds of oxygen. Davy was a popular lecturer and able experimenter.
Joseph Louis Gay-Lussac, who stated that the ratio between the volumes of the reactant gases and the products can be expressed in simple whole numbers.
French chemist Joseph Louis Gay-Lussac shared the interest of Lavoisier and others in the quantitative study of the properties of gases. From his first major program of research in 1801–1802, he concluded that equal volumes of all gases expand equally with the same increase in temperature: this conclusion is usually called "Charles's law", as Gay-Lussac gave credit to Jacques Charles, who had arrived at nearly the same conclusion in the 1780s but had not published it.[67] The law was independently discovered by British natural philosopher John Dalton by 1801, although Dalton's description was less thorough than Gay-Lussac's.[68][69] In 1804 Gay-Lussac made several daring ascents of over 7,000 meters above sea level in hydrogen-filled balloons—a feat not equaled for another 50 years—that allowed him to investigate other aspects of gases. Not only did he gather magnetic measurements at various altitudes, but he also took pressure, temperature, and humidity measurements and samples of air, which he later analyzed chemically.
In 1808 Gay-Lussac announced what was probably his single greatest achievement: from his own and others' experiments he deduced that gases at constant temperature and pressure combine in simple numerical proportions by volume, and the resulting product or products—if gases—also bear a simple proportion by volume to the volumes of the reactants. In other words, gases under equal conditions of temperature and pressure react with one another in volume ratios of small whole numbers. This conclusion subsequently became known as "Gay-Lussac's law" or the "Law of Combining Volumes". With his fellow professor at the École Polytechnique, Louis Jacques Thénard, Gay-Lussac also participated in early electrochemical research, investigating the elements discovered by its means. Among other achievements, they decomposed boric acid by using fused potassium, thus discovering the element boron. The two also took part in contemporary debates that modified Lavoisier's definition of acids and furthered his program of analyzing organic compounds for their oxygen and hydrogen content.
The element iodine was discovered by French chemist Bernard Courtois in 1811.[70][71] Courtois gave samples to his friends, Charles Bernard Desormes (1777–1862) and Nicolas Clément (1779–1841), to continue research. He also gave some of the substance to Gay-Lussac and to physicist André-Marie Ampère. On December 6, 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen.[72][73][74] It was Gay-Lussac who suggested the name "iode", from the Greek word ιώδες (iodes) for violet (because of the color of iodine vapor).[70][72] Ampère had given some of his sample to Humphry Davy. Davy did some experiments on the substance and noted its similarity to chlorine.[75] Davy sent a letter dated December 10 to the Royal Society of London stating that he had identified a new element.[76] Arguments erupted between Davy and Gay-Lussac over who identified iodine first, but both scientists acknowledged Courtois as the first to isolate the element.
In 1815, Humphry Davy invented the Davy lamp, which allowed miners within coal mines to work safely in the presence of flammable gases. There had been many mining explosions caused by firedamp or methane often ignited by open flames of the lamps then used by miners. Davy conceived of using an iron gauze to enclose a lamp's flame, and so prevent the methane burning inside the lamp from passing out to the general atmosphere. Although the idea of the safety lamp had already been demonstrated by William Reid Clanny and by the then unknown (but later very famous) engineer George Stephenson, Davy's use of wire gauze to prevent the spread of flame was used by many other inventors in their later designs. There was some discussion as to whether Davy had discovered the principles behind his lamp without the help of the work of Smithson Tennant, but it was generally agreed that the work of both men had been independent. Davy refused to patent the lamp, and its invention led to him being awarded the Rumford medal in 1816.[77]
Amedeo Avogadro, who postulated that, under controlled conditions of temperature and pressure, equal volumes of gases contain an equal number of molecules. This is known as Avogadro's law.
Main articles: Amedeo Avogadro and Avogadro's law
After Dalton published his atomic theory in 1808, certain of his central ideas were soon adopted by most chemists. However, uncertainty persisted for half a century about how atomic theory was to be configured and applied to concrete situations; chemists in different countries developed several different incompatible atomistic systems. A paper that suggested a way out of this difficult situation was published as early as 1811 by the Italian physicist Amedeo Avogadro (1776-1856), who hypothesized that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules, from which it followed that relative molecular weights of any two gases are the same as the ratio of the densities of the two gases under the same conditions of temperature and pressure. Avogadro also reasoned that simple gases were not formed of solitary atoms but were instead compound molecules of two or more atoms. Thus Avogadro was able to overcome the difficulty that Dalton and others had encountered when Gay-Lussac reported that above 100 °C the volume of water vapor was twice the volume of the oxygen used to form it. According to Avogadro, the molecule of oxygen had split into two atoms in the course of forming water vapor.
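Avogadro's argument can be put in simple numerical terms (a sketch using approximate modern gas densities at 0 °C and 1 atm, chosen here purely for illustration): if equal volumes contain equal numbers of molecules, the ratio of two gas densities equals the ratio of their molecular weights.

```python
# Avogadro's hypothesis: at equal temperature and pressure, equal volumes of
# gas hold equal numbers of molecules, so density ratios equal molecular
# weight ratios. Densities are approximate modern values at 0 C and 1 atm.

density_oxygen = 1.429     # g/L, approximate
density_hydrogen = 0.0899  # g/L, approximate
molecular_weight_h2 = 2.016

weight_ratio = density_oxygen / density_hydrogen
estimated_o2_weight = molecular_weight_h2 * weight_ratio

print(f"Density ratio O2/H2: {weight_ratio:.1f}")
print(f"Estimated molecular weight of O2: {estimated_o2_weight:.1f}")  # close to 32
```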
Avogadro's hypothesis was neglected for half a century after it was first published. Many reasons for this neglect have been cited, including some theoretical problems, such as Jöns Jacob Berzelius's "dualism", which asserted that compounds are held together by the attraction of positive and negative electrical charges, making it inconceivable that a molecule composed of two electrically similar atoms—as in oxygen—could exist. An additional barrier to acceptance was the fact that many chemists were reluctant to adopt physical methods (such as vapour-density determinations) to solve their problems. By mid-century, however, some leading figures had begun to view the chaotic multiplicity of competing systems of atomic weights and molecular formulas as intolerable. Moreover, purely chemical evidence began to mount that suggested Avogadro's approach might be right after all. During the 1850s, younger chemists, such as Alexander Williamson in England, Charles Gerhardt and Charles-Adolphe Wurtz in France, and August Kekulé in Germany, began to advocate reforming theoretical chemistry to make it consistent with Avogadrian theory.
Wöhler and the vitalism debate
Structural formula of urea
In 1825, Friedrich Wöhler and Justus von Liebig performed the first confirmed discovery and explanation of isomers, earlier named by Berzelius. Working with cyanic acid and fulminic acid, they correctly deduced that isomerism was caused by differing arrangements of atoms within a molecular structure. In 1827, William Prout classified biomolecules into their modern groupings: carbohydrates, proteins and lipids. After the nature of combustion was settled, another dispute, about vitalism and the essential distinction between organic and inorganic substances, began. The vitalism question was revolutionized in 1828 when Friedrich Wöhler synthesized urea, thereby establishing that organic compounds could be produced from inorganic starting materials and disproving the theory of vitalism. Never before had an organic compound been synthesized from inorganic material.[citation needed]
This opened a new research field in chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The most important among them are mauve, magenta, and other synthetic dyes, as well as the widely used drug aspirin. The discovery of the artificial synthesis of urea contributed greatly to the theory of isomerism, as the empirical chemical formulas for urea and ammonium cyanate are identical (see Wöhler synthesis). In 1832, Friedrich Wöhler and Justus von Liebig discovered and explained functional groups and radicals in relation to organic chemistry, as well as first synthesizing benzaldehyde. Liebig, a German chemist, made major contributions to agricultural and biological chemistry, and worked on the organization of organic chemistry. Liebig is considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient, and his formulation of the Law of the Minimum which described the effect of individual nutrients on crops.
In 1840, Germain Hess proposed Hess's law, an early statement of the law of conservation of energy, which establishes that energy changes in a chemical process depend only on the states of the starting and product materials and not on the specific pathway taken between the two states. In 1847, Hermann Kolbe obtained acetic acid from completely inorganic sources, further disproving vitalism. In 1848, William Thomson, 1st Baron Kelvin (commonly known as Lord Kelvin) established the concept of absolute zero, the temperature at which all molecular motion ceases. In 1849, Louis Pasteur discovered that the racemic form of tartaric acid is a mixture of the levorotatory and dextrorotatory forms, thus clarifying the nature of optical rotation and advancing the field of stereochemistry.[78] In 1852, August Beer proposed Beer's law, which explains the relationship between the composition of a mixture and the amount of light it will absorb. Based partly on earlier work by Pierre Bouguer and Johann Heinrich Lambert, it established the analytical technique known as spectrophotometry.[79] In 1855, Benjamin Silliman, Jr. pioneered methods of petroleum cracking, which made the entire modern petrochemical industry possible.[80]
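Beer's law can be stated compactly as A = εlc, with absorbance A proportional to molar absorptivity ε, path length l, and concentration c. The sketch below uses a hypothetical absorptivity chosen only to show the arithmetic:

```python
# Beer's law: absorbance A = epsilon * l * c; transmittance follows as 10**(-A).
# The molar absorptivity below is a hypothetical value for illustration.

epsilon = 1.2e4            # L mol^-1 cm^-1 (assumed)
path_length_cm = 1.0
concentration_mol_per_l = 5.0e-5

absorbance = epsilon * path_length_cm * concentration_mol_per_l
transmittance = 10 ** (-absorbance)

print(f"Absorbance: {absorbance:.2f}")        # 0.60
print(f"Transmittance: {transmittance:.1%}")  # roughly 25%
```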
Formulas of acetic acid given by August Kekulé in 1861.
Avogadro's hypothesis began to gain broad appeal among chemists only after his compatriot and fellow scientist Stanislao Cannizzaro demonstrated its value in 1858, two years after Avogadro's death. Cannizzaro's chemical interests had originally centered on natural products and on reactions of aromatic compounds; in 1853 he discovered that when benzaldehyde is treated with concentrated base, both benzoic acid and benzyl alcohol are produced—a phenomenon known today as the Cannizzaro reaction. In his 1858 pamphlet, Cannizzaro showed that a complete return to the ideas of Avogadro could be used to construct a consistent and robust theoretical structure that fit nearly all of the available empirical evidence. For instance, he pointed to evidence that suggested that not all elementary gases consist of two atoms per molecule—some were monatomic, most were diatomic, and a few were even more complex.
Another point of contention had been the formulas for compounds of the alkali metals (such as sodium) and the alkaline earth metals (such as calcium), which, in view of their striking chemical analogies, most chemists had wanted to assign to the same formula type. Cannizzaro argued that placing these metals in different categories had the beneficial result of eliminating certain anomalies when using their physical properties to deduce atomic weights. Unfortunately, Cannizzaro's pamphlet was published initially only in Italian and had little immediate impact. The real breakthrough came with an international chemical congress held in the German town of Karlsruhe in September 1860, at which most of the leading European chemists were present. The Karlsruhe Congress had been arranged by Kekulé, Wurtz, and a few others who shared Cannizzaro's sense of the direction chemistry should go. Speaking in French (as everyone there did), Cannizzaro's eloquence and logic made an indelible impression on the assembled body. Moreover, his friend Angelo Pavesi distributed Cannizzaro's pamphlet to attendees at the end of the meeting; more than one chemist later wrote of the decisive impression the reading of this document provided. For instance, Lothar Meyer later wrote that on reading Cannizzaro's paper, "The scales seemed to fall from my eyes."[81] Cannizzaro thus played a crucial role in winning the battle for reform. The system advocated by him, and soon thereafter adopted by most leading chemists, is substantially identical to what is still used today.
Perkin, Crookes, and Nobel
In 1856, Sir William Henry Perkin, age 18, challenged by his professor August Wilhelm von Hofmann, sought to synthesize quinine, the anti-malaria drug, from coal tar. In one attempt, Perkin oxidized aniline using potassium dichromate, whose toluidine impurities reacted with the aniline and yielded a black solid—suggesting a "failed" organic synthesis. Cleaning the flask with alcohol, Perkin noticed purple portions of the solution: a byproduct of the attempt was the first synthetic dye, known as mauveine or Perkin's mauve. Perkin's discovery is the foundation of the dye synthesis industry, one of the earliest successful chemical industries.
German chemist August Kekulé von Stradonitz's most important single contribution was his structural theory of organic composition, outlined in two articles published in 1857 and 1858 and treated in great detail in the pages of his extraordinarily popular Lehrbuch der organischen Chemie ("Textbook of Organic Chemistry"), the first installment of which appeared in 1859 and gradually extended to four volumes. Kekulé argued that tetravalent carbon atoms - that is, carbon forming exactly four chemical bonds - could link together to form what he called a "carbon chain" or a "carbon skeleton," to which other atoms with other valences (such as hydrogen, oxygen, nitrogen, and chlorine) could join. He was convinced that it was possible for the chemist to specify this detailed molecular architecture for at least the simpler organic compounds known in his day. Kekulé was not the only chemist to make such claims in this era. The Scottish chemist Archibald Scott Couper published a substantially similar theory nearly simultaneously, and the Russian chemist Aleksandr Butlerov did much to clarify and expand structure theory. However, it was predominantly Kekulé's ideas that prevailed in the chemical community.
British chemist and physicist William Crookes is noted for his cathode ray studies, fundamental in the development of atomic physics. His researches on electrical discharges through a rarefied gas led him to observe the dark space around the cathode, now called the Crookes dark space. He demonstrated that cathode rays travel in straight lines and produce phosphorescence and heat when they strike certain materials. A pioneer of vacuum tubes, Crookes invented the Crookes tube - an early experimental discharge tube, with partial vacuum with which he studied the behavior of cathode rays. With the introduction of spectrum analysis by Robert Bunsen and Gustav Kirchhoff (1859-1860), Crookes applied the new technique to the study of selenium compounds. Bunsen and Kirchhoff had previously used spectroscopy as a means of chemical analysis to discover caesium and rubidium. In 1861, Crookes used this process to discover thallium in some seleniferous deposits. He continued work on that new element, isolated it, studied its properties, and in 1873 determined its atomic weight. During his studies of thallium, Crookes discovered the principle of the Crookes radiometer, a device that converts light radiation into rotary motion. The principle of this radiometer has found numerous applications in the development of sensitive measuring instruments.
In 1862, Alexander Parkes exhibited Parkesine, one of the earliest synthetic polymers, at the International Exhibition in London. This discovery formed the foundation of the modern plastics industry. In 1864, Cato Maximilian Guldberg and Peter Waage, building on Claude Louis Berthollet's ideas, proposed the law of mass action. In 1865, Johann Josef Loschmidt determined the number of molecules in a mole, later named Avogadro's number.
In 1865, August Kekulé, based partially on the work of Loschmidt and others, established the structure of benzene as a six carbon ring with alternating single and double bonds. Kekulé's novel proposal for benzene's cyclic structure was much contested but was never replaced by a superior theory. This theory provided the scientific basis for the dramatic expansion of the German chemical industry in the last third of the 19th century. Today, the large majority of known organic compounds are aromatic, and all of them contain at least one hexagonal benzene ring of the sort that Kekulé advocated. Kekulé is also famous for having clarified the nature of aromatic compounds, which are compounds based on the benzene molecule. In 1865, Adolf von Baeyer began work on indigo dye, a milestone in modern industrial organic chemistry which revolutionized the dye industry.
Swedish chemist and inventor Alfred Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and this mixture he patented in 1867 as dynamite. Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite. Gelignite, or blasting gelatin, as it was named, was patented in 1876; and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances.
Mendeleev's periodic table
Dmitri Mendeleev, responsible for organizing the known chemical elements in a periodic table.
An important breakthrough in making sense of the list of known chemical elements (as well as in understanding the internal structure of atoms) was Dmitri Mendeleev's development of the first modern periodic table, or the periodic classification of the elements. Mendeleev, a Russian chemist, felt that there was some type of order to the elements and he spent more than thirteen years of his life collecting data and assembling the concept, initially with the idea of resolving some of the disorder in the field for his students. Mendeleev found that, when all the known chemical elements were arranged in order of increasing atomic weight, the resulting table displayed a recurring pattern, or periodicity, of properties within groups of elements. Mendeleev's law allowed him to build up a systematic periodic table of all the 66 elements then known based on atomic mass, which he published in Principles of Chemistry in 1869. His first Periodic Table was compiled on the basis of arranging the elements in ascending order of atomic weight and grouping them by similarity of properties.
Mendeleev had such faith in the validity of the periodic law that he proposed changes to the generally accepted values for the atomic weight of a few elements and, in his version of the periodic table of 1871, predicted the locations within the table of unknown elements together with their properties. He even predicted the likely properties of three yet-to-be-discovered elements, which he called ekaboron (Eb), ekaaluminium (Ea), and ekasilicon (Es), which proved to be good predictors of the properties of scandium, gallium, and germanium, respectively, which each fill the spot in the periodic table assigned by Mendeleev.
At first the periodic system did not raise interest among chemists. However, with the discovery of the predicted elements, notably gallium in 1875, scandium in 1879, and germanium in 1886, it began to win wide acceptance. The subsequent proof of many of his predictions within his lifetime brought fame to Mendeleev as the founder of the periodic law. This organization surpassed earlier attempts at classification by Alexandre-Émile Béguyer de Chancourtois, who published the telluric helix, an early, three-dimensional version of the periodic table of the elements in 1862, John Newlands, who proposed the law of octaves (a precursor to the periodic law) in 1864, and Lothar Meyer, who developed an early version of the periodic table with 28 elements organized by valence in 1864. Mendeleev's table did not include any of the noble gases, however, which had not yet been discovered. Gradually the periodic law and table became the framework for a great part of chemical theory. By the time Mendeleev died in 1907, he enjoyed international recognition and had received distinctions and awards from many countries.
In 1873, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel, working independently, developed a model of chemical bonding that explained the chirality experiments of Pasteur and provided a physical cause for optical activity in chiral compounds.[82] van 't Hoff's publication, called Voorstel tot Uitbreiding der Tegenwoordige in de Scheikunde gebruikte Structuurformules in de Ruimte, etc. (Proposal for the development of 3-dimensional chemical structural formulae) and consisting of twelve pages of text and one page of diagrams, gave the impetus to the development of stereochemistry. The concept of the "asymmetrical carbon atom", dealt with in this publication, supplied an explanation of the occurrence of numerous isomers, inexplicable by means of the then current structural formulae. At the same time he pointed out the existence of a relationship between optical activity and the presence of an asymmetrical carbon atom.
Josiah Willard Gibbs
J. Willard Gibbs formulated a concept of thermodynamic equilibrium of a system in terms of energy and entropy. He also did extensive work on chemical equilibrium, and equilibria between phases.
American mathematical physicist J. Willard Gibbs's work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. During the years from 1876 to 1878, Gibbs worked on the principles of thermodynamics, applying them to the complex processes involved in chemical reactions. He discovered the concept of chemical potential, or the "fuel" that makes chemical reactions work. In 1876 he published his most famous contribution, "On the Equilibrium of Heterogeneous Substances", a compilation of his work on thermodynamics and physical chemistry which laid out the concept of free energy to explain the physical basis of chemical equilibria.[83] In these essays were the beginnings of Gibbs’ theories of phases of matter: he considered each state of matter a phase, and each substance a component. Gibbs took all of the variables involved in a chemical reaction - temperature, pressure, energy, volume, and entropy - and included them in one simple equation known as Gibbs' phase rule.
Within this paper was perhaps his most outstanding contribution, the introduction of the concept of free energy, now universally called Gibbs free energy in his honor. The Gibbs free energy relates the tendency of a physical or chemical system to simultaneously lower its energy and increase its disorder, or entropy, in a spontaneous natural process. Gibbs's approach allows a researcher to calculate the change in free energy in a process, such as a chemical reaction, and whether it will proceed spontaneously. Since virtually all chemical processes and many physical ones involve such changes, his work has significantly impacted both the theoretical and experimental aspects of these sciences. In 1877, Ludwig Boltzmann established statistical derivations of many important physical and chemical concepts, including entropy, and distributions of molecular velocities in the gas phase.[84] Together with Boltzmann and James Clerk Maxwell, Gibbs created a new branch of theoretical physics called statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of large ensembles of particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. Gibbs's derivation of the phenomenological laws of thermodynamics from the statistical properties of systems with many particles was presented in his highly influential textbook Elementary Principles in Statistical Mechanics, published in 1902, a year before his death. In that work, Gibbs reviewed the relationship between the laws of thermodynamics and statistical theory of molecular motions. The overshooting of the original function by partial sums of Fourier series at points of discontinuity is known as the Gibbs phenomenon.
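In modern notation the criterion Gibbs introduced is usually written ΔG = ΔH − TΔS, with a negative ΔG marking a spontaneous process. The short sketch below, with purely illustrative enthalpy and entropy values, shows how temperature can flip the sign:

```python
# Gibbs free energy: delta_G = delta_H - T * delta_S; negative delta_G means
# the process is spontaneous at that temperature. Values are illustrative.

def gibbs_free_energy(delta_h_kj, delta_s_kj_per_k, temperature_k):
    """Return delta_G in kJ for the given enthalpy and entropy changes."""
    return delta_h_kj - temperature_k * delta_s_kj_per_k

delta_h = 40.0   # kJ, hypothetical endothermic reaction
delta_s = 0.12   # kJ/K, hypothetical positive entropy change

for t in (250.0, 300.0, 350.0, 400.0):
    dg = gibbs_free_energy(delta_h, delta_s, t)
    verdict = "spontaneous" if dg < 0 else "non-spontaneous"
    print(f"T = {t:.0f} K: delta_G = {dg:+.1f} kJ ({verdict})")
```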
Late 19th century
German engineer Carl von Linde's invention of a continuous process of liquefying gases in large quantities formed a basis for the modern technology of refrigeration and provided both impetus and means for conducting scientific research at low temperatures and very high vacuums. He developed a methyl ether refrigerator (1874) and an ammonia refrigerator (1876). Though other refrigeration units had been developed earlier, Linde's were the first to be designed with the aim of precise calculations of efficiency. In 1895 he set up a large-scale plant for the production of liquid air. Six years later he developed a method for separating pure liquid oxygen from liquid air that resulted in widespread industrial conversion to processes utilizing oxygen (e.g., in steel manufacture).
In 1883, Svante Arrhenius developed an ion theory to explain conductivity in electrolytes.[85] In 1884, Jacobus Henricus van 't Hoff published Études de Dynamique chimique (Studies in Dynamic Chemistry), a seminal study on chemical kinetics.[86] In this work, van 't Hoff entered for the first time the field of physical chemistry. Of great importance was his development of the general thermodynamic relationship between the heat of conversion and the displacement of the equilibrium as a result of temperature variation. At constant volume, the equilibrium in a system will tend to shift in such a direction as to oppose the temperature change which is imposed upon the system. Thus, lowering the temperature results in heat development while increasing the temperature results in heat absorption. This principle of mobile equilibrium was subsequently (1885) put in a general form by Henry Louis Le Chatelier, who extended the principle to include compensation, by change of volume, for imposed pressure changes. The van 't Hoff-Le Chatelier principle, or simply Le Chatelier's principle, explains the response of dynamic chemical equilibria to external stresses.[87]
In 1884, Hermann Emil Fischer proposed the structure of purine, a key structure in many biomolecules, which he later synthesized in 1898. He also began work on the chemistry of glucose and related sugars.[88] In 1885, Eugene Goldstein named the cathode ray, later discovered to be composed of electrons, and the canal ray, later discovered to be positive hydrogen ions that had been stripped of their electrons in a cathode ray tube; these would later be named protons.[89] The year 1885 also saw the publishing of J. H. van 't Hoff's L'Équilibre chimique dans les Systèmes gazeux ou dissous à l'État dilué (Chemical equilibria in gaseous systems or strongly diluted solutions), which dealt with his theory of dilute solutions. Here he demonstrated that the "osmotic pressure" in solutions which are sufficiently dilute is proportional to the concentration and the absolute temperature so that this pressure can be represented by a formula which only deviates from the formula for gas pressure by a coefficient i. He also determined the value of i by various methods, for example by means of the vapor pressure and François-Marie Raoult's results on the lowering of the freezing point. Thus van 't Hoff was able to prove that thermodynamic laws are not only valid for gases, but also for dilute solutions. His pressure laws, given general validity by the electrolytic dissociation theory of Arrhenius (1884-1887) - the first foreigner who came to work with him in Amsterdam (1888) - are considered the most comprehensive and important in the realm of natural sciences. In 1893, Alfred Werner discovered the octahedral structure of cobalt complexes, thus establishing the field of coordination chemistry.[90]
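van 't Hoff's dilute-solution law is formally analogous to the ideal gas law: the osmotic pressure is π = icRT, where i is the coefficient mentioned above. A minimal sketch, assuming a hypothetical salt that dissociates into two ions, follows:

```python
# van 't Hoff's relation for dilute solutions: pi = i * c * R * T, where i is
# the dissociation (van 't Hoff) factor. The solute and its factor i = 2 are
# assumptions made purely for illustration.

R = 0.08206            # L atm mol^-1 K^-1
temperature_k = 298.15
concentration = 0.010  # mol/L, a dilute solution
i_factor = 2           # assumed: a salt dissociating into two ions

osmotic_pressure_atm = i_factor * concentration * R * temperature_k
print(f"Osmotic pressure: {osmotic_pressure_atm:.3f} atm")  # about 0.49 atm
```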
Ramsay's discovery of the noble gases
Main articles: William Ramsay and Noble gas
The most celebrated discoveries of Scottish chemist William Ramsay were made in inorganic chemistry. Ramsay was intrigued by the British physicist John Strutt, 3rd Baron Rayleigh's 1892 discovery that the atomic weight of nitrogen found in chemical compounds was lower than that of nitrogen found in the atmosphere. He ascribed this discrepancy to a light gas included in chemical compounds of nitrogen, while Ramsay suspected a hitherto undiscovered heavy gas in atmospheric nitrogen. Using two different methods to remove all known gases from air, Ramsay and Lord Rayleigh were able to announce in 1894 that they had found a monatomic, chemically inert gaseous element that constituted nearly 1 percent of the atmosphere; they named it argon.
The following year, Ramsay liberated another inert gas from a mineral called cleveite; this proved to be helium, previously known only in the solar spectrum. In his book The Gases of the Atmosphere (1896), Ramsay showed that the positions of helium and argon in the periodic table of elements indicated that at least three more noble gases might exist. In 1898 Ramsay and the British chemist Morris W. Travers isolated these elements—called neon, krypton, and xenon—from air brought to a liquid state at low temperature and high pressure. Sir William Ramsay worked with Frederick Soddy to demonstrate, in 1903, that alpha particles (helium nuclei) were continually produced during the radioactive decay of a sample of radium. Ramsay was awarded the 1904 Nobel Prize for Chemistry in recognition of "services in the discovery of the inert gaseous elements in air, and his determination of their place in the periodic system."
In 1897, J. J. Thomson discovered the electron using the cathode ray tube. In 1898, Wilhelm Wien demonstrated that canal rays (streams of positive ions) can be deflected by magnetic fields, and that the amount of deflection is proportional to the mass-to-charge ratio. This discovery would lead to the analytical technique known as mass spectrometry.[91]
Marie and Pierre Curie
Marie Curie, a pioneer in the field of radioactivity and the first twice-honored Nobel laureate (and still the only one in two different sciences)
Marie Skłodowska-Curie was a Polish-born French physicist and chemist who is famous for her pioneering research on radioactivity. She and her husband are considered to have laid the cornerstone of the nuclear age with their research on radioactivity. Marie was fascinated with the work of Henri Becquerel, a French physicist who discovered in 1896 that uranium casts off rays similar to the X-rays discovered by Wilhelm Röntgen. Marie Curie began studying uranium in late 1897 and theorized, according to a 1904 article she wrote for Century magazine, "that the emission of rays by the compounds of uranium is a property of the metal itself—that it is an atomic property of the element uranium independent of its chemical or physical state." Curie took Becquerel's work a few steps further, conducting her own experiments on uranium rays. She discovered that the rays remained constant, no matter the condition or form of the uranium. The rays, she theorized, came from the element's atomic structure. This revolutionary idea created the field of atomic physics and the Curies coined the word radioactivity to describe the phenomena.
Pierre Curie, known for his work on radioactivity as well as on ferromagnetism, paramagnetism, and diamagnetism; notably Curie's law and Curie point.
Pierre and Marie further explored radioactivity by working to separate the substances in uranium ores and then using the electrometer to make radiation measurements to ‘trace’ the minute amount of unknown radioactive element among the fractions that resulted. Working with the mineral pitchblende, the pair discovered a new radioactive element in 1898. They named the element polonium, after Marie's native country of Poland. On December 21, 1898, the Curies detected the presence of another radioactive material in the pitchblende. They presented this finding to the French Academy of Sciences on December 26, proposing that the new element be called radium. The Curies then went to work isolating polonium and radium from naturally occurring compounds to prove that they were new elements. In 1902, the Curies announced that they had produced a decigram of pure radium, demonstrating its existence as a unique chemical element. While it took three years for them to isolate radium, they were never able to isolate polonium. Along with the discovery of two new elements and finding techniques for isolating radioactive isotopes, Curie oversaw the world's first studies into the treatment of neoplasms, using radioactive isotopes. With Henri Becquerel and her husband, Pierre Curie, she was awarded the 1903 Nobel Prize for Physics. She was the sole winner of the 1911 Nobel Prize for Chemistry. She was the first woman to win a Nobel Prize, and she is the only woman to win the award in two different fields.
While working with Marie to extract pure substances from ores, an undertaking that really required industrial resources but that they achieved in relatively primitive conditions, Pierre himself concentrated on the physical study (including luminous and chemical effects) of the new radiations. Through the action of magnetic fields on the rays given out by the radium, he proved the existence of particles electrically positive, negative, and neutral; these Ernest Rutherford was afterward to call alpha, beta, and gamma rays. Pierre then studied these radiations by calorimetry and also observed the physiological effects of radium, thus opening the way to radium therapy. Among Pierre Curie's discoveries were that ferromagnetic substances exhibited a critical temperature transition, above which the substances lost their ferromagnetic behavior - this is known as the "Curie point." He was elected to the Academy of Sciences (1905), having in 1903 jointly with Marie received the Royal Society's prestigious Davy Medal and jointly with her and Becquerel the Nobel Prize for Physics. He was run over by a carriage in the rue Dauphine in Paris in 1906 and died instantly. His complete works were published in 1908.
Ernest Rutherford
Ernest Rutherford, discoverer of the nucleus and considered the father of nuclear physics
New Zealand-born chemist and physicist Ernest Rutherford is considered to be "the father of nuclear physics." Rutherford is best known for devising the names alpha, beta, and gamma to classify various forms of radioactive "rays" which were poorly understood at his time (alpha and beta rays are particle beams, while gamma rays are a form of high-energy electromagnetic radiation). Rutherford deflected alpha rays with both electric and magnetic fields in 1903. Working with Frederick Soddy, Rutherford explained that radioactivity is due to the transmutation of elements, now known to involve nuclear reactions.
Top: Predicted results based on the then-accepted plum pudding model of the atom. Bottom: Observed results. Rutherford disproved the plum pudding model and concluded that the positive charge of the atom must be concentrated in a small, central nucleus.
He also observed that the intensity of radioactivity of a radioactive element decreases over a unique and regular amount of time until a point of stability, and he named the halving time the "half-life." In 1901 and 1902 he worked with Frederick Soddy to prove that atoms of one radioactive element would spontaneously turn into another, by expelling a piece of the atom at high velocity. In 1906 at the University of Manchester, Rutherford oversaw an experiment conducted by his students Hans Geiger (known for the Geiger counter) and Ernest Marsden. In the Geiger–Marsden experiment, a beam of alpha particles, generated by the radioactive decay of radon, was directed normally onto a sheet of very thin gold foil in an evacuated chamber. Under the prevailing plum pudding model, the alpha particles should all have passed through the foil and hit the detector screen, or have been deflected by, at most, a few degrees.
However, the actual results surprised Rutherford. Although many of the alpha particles did pass through as expected, many others were deflected at small angles while others were reflected back to the alpha source. They observed that a very small percentage of particles were deflected through angles much larger than 90 degrees. The gold foil experiment showed large deflections for a small fraction of incident particles. Rutherford realized that, because some of the alpha particles were deflected or reflected, the atom had a concentrated centre of positive charge and of relatively large mass - Rutherford later termed this positive center the "atomic nucleus". The alpha particles had either hit the positive centre directly or passed by it close enough to be affected by its positive charge. Since many other particles passed through the gold foil, the positive centre would have to be a relatively small size compared to the rest of the atom - meaning that the atom is mostly open space. From his results, Rutherford developed a model of the atom that was similar to the solar system, known as Rutherford model. Like planets, electrons orbited a central, sun-like nucleus. For his work with radiation and the atomic nucleus, Rutherford received the 1908 Nobel Prize in Chemistry.
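The "half-life" Rutherford named above also has a simple quantitative form: after each half-life, half of the remaining radioactive atoms have decayed, so the surviving fraction after time t is (1/2) raised to the power t divided by the half-life. The sketch below assumes a half-life of 3.8 days, roughly that of a radon-like gas, purely for illustration:

```python
# Radioactive decay in terms of the half-life: the fraction remaining after
# time t is (1/2) ** (t / half_life). The half-life below is an assumed value.

def remaining_fraction(elapsed_time, half_life):
    """Fraction of the original radioactive atoms left after elapsed_time."""
    return 0.5 ** (elapsed_time / half_life)

half_life_days = 3.8  # assumed half-life, roughly that of radon-222

for days in (0.0, 3.8, 7.6, 15.2):
    print(f"after {days:>5.1f} days: {remaining_fraction(days, half_life_days):.4f} remaining")
```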
20th century
The first Solvay Conference was held in Brussels in 1911 and was considered a turning point in the world of physics and chemistry.
In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, where electrons orbit a dense massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences in agriculture. The Haber process, or Haber-Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for production of fertilizer and munitions. Food production for half the world's current population depends on this method for producing fertilizer. Haber, along with Max Born, proposed the Born–Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I.
Robert A. Millikan, who is best known for measuring the charge on the electron, won the Nobel Prize in Physics in 1923.
In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. Leo Baekeland invented bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan - who had studied in Europe under Walther Nernst and Max Planck - measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value — the electron's charge — and thus a confirmation that all electrons have the same charge and mass. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, and providing the first direct photoelectric support for Planck's constant. In 1923 Millikan was awarded the Nobel Prize for Physics.
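The logic of Millikan's result can be sketched numerically: each measured droplet charge should be very nearly an integer multiple of one elementary charge. The droplet charges below are synthetic values invented for illustration, not Millikan's data:

```python
# Charge quantization as in Millikan's oil drop experiment: droplet charges
# cluster near integer multiples of the elementary charge. The "measurements"
# below are synthetic illustrative numbers, not historical data.

ELEMENTARY_CHARGE = 1.602e-19  # coulombs, rounded modern value

droplet_charges = [3.21e-19, 4.79e-19, 8.02e-19, 11.2e-19]

for q in droplet_charges:
    multiple = q / ELEMENTARY_CHARGE
    print(f"q = {q:.2e} C  ~  {round(multiple)} x e  (ratio {multiple:.2f})")
```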
In 1909, S. P. L. Sørensen invented the pH concept and developed methods for measuring acidity. In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge rather than atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together many of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye developed the concept of the molecular dipole to describe asymmetric charge distribution in some molecules.
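Sørensen's pH scale is simply the negative base-10 logarithm of the hydrogen ion concentration. A short sketch, with approximate textbook concentrations used only as examples, follows:

```python
import math

# Sorensen's pH: pH = -log10([H+]), with [H+] in mol/L. The concentrations
# below are rough textbook-style examples, not measured values.

def ph(hydrogen_ion_concentration):
    return -math.log10(hydrogen_ion_concentration)

examples = [
    ("gastric acid (approx.)", 1e-1),
    ("pure water", 1e-7),
    ("household ammonia (approx.)", 1e-11),
]

for label, conc in examples:
    print(f"{label}: pH {ph(conc):.1f}")
```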
Niels Bohr
Niels Bohr, the developer of the Bohr model of the atom, and a leading founder of quantum mechanics
Main articles: Niels Bohr and Bohr model
In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, where electrons exist only in strictly defined circular orbits around the nucleus similar to rungs on a ladder. The Bohr Model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus similar to the planets orbiting the Sun (except that the orbits are not planar) - the gravitational force of the solar system is mathematically akin to the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons.
In the Bohr model, however, electrons orbit the nucleus in orbits that have a set size and energy - the energy levels are said to be quantized, which means that only certain orbits with certain radii are allowed; orbits in between simply don't exist. The energy of the orbit is related to its size - that is, the lowest energy is found in the smallest orbit. Bohr also postulated that electromagnetic radiation is absorbed or emitted when an electron moves from one orbit to another. Because only certain electron orbits are permitted, the emission of light accompanying a jump of an electron from an excited energy state to ground state produces a unique emission spectrum for each element.
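In modern notation, the quantised orbit energies of the Bohr model for hydrogen can be written in the standard textbook form (given here for orientation, not part of the original passage)
\[
E_n = -\frac{m_e e^4}{8\,\varepsilon_0^2 h^2}\,\frac{1}{n^2} \approx -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots,
\]
so that a jump from level \(n_i\) to \(n_f\) emits a photon of energy \(h\nu = E_{n_i} - E_{n_f}\), which is what produces the element-specific line spectra described above.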
Niels Bohr also worked on the principle of complementarity, which states that an electron can be interpreted in two mutually exclusive yet equally valid ways: as a wave or as a particle. In nuclear physics, he later hypothesized that an incoming particle striking the nucleus would create an excited compound nucleus. This idea formed the basis of his liquid drop model and later provided a theoretical basis for the explanation of nuclear fission.
In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduced the concept of atomic number to fix the inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry came in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms that have different atomic weights but are indistinguishable chemically. He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry.
Gilbert N. Lewis
Main article: Gilbert N. Lewis
American physical chemist Gilbert N. Lewis laid the foundation of valence bond theory; he was instrumental in developing a bonding theory based on the number of electrons in the outermost "valence" shell of the atom. In 1902, while Lewis was trying to explain valence to his students, he depicted atoms as constructed of a concentric series of cubes with electrons at each corner. This "cubic atom" explained the eight groups in the periodic table and represented his idea that chemical bonds are formed by electron transference to give each atom a complete set of eight outer electrons (an "octet").
Lewis's theory of chemical bonding continued to evolve and, in 1916, he published his seminal article "The Atom of the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. Lewis introduced the "electron dot diagrams" in this paper to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book.
Shortly after publication of his 1916 paper, Lewis became involved with military research. He did not return to the subject of chemical bonding until 1923, when he masterfully summarized his model in a short monograph entitled Valence and the Structure of Atoms and Molecules. His renewed interest in this subject was largely stimulated by the activities of the American chemist and General Electric researcher Irving Langmuir, who between 1919 and 1921 popularized and elaborated Lewis's model. Langmuir subsequently introduced the term covalent bond. In 1921, Otto Stern and Walther Gerlach established the concept of quantum-mechanical spin in subatomic particles.
For cases where no sharing was involved, Lewis in 1923 developed the electron-pair theory of acids and bases: Lewis redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom; bases were, conversely, electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, the first modern treatise on chemical thermodynamics.
The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold; while in coordination chemistry, Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick.
Quantum mechanics
Quantum mechanics in the 1920s
From left to right, top row: Louis de Broglie (1892–1987) and Wolfgang Pauli (1900–58); second row: Erwin Schrödinger (1887–1961) and Werner Heisenberg (1901–76)
In 1924, the French quantum physicist Louis de Broglie published his thesis, in which he introduced a revolutionary theory of electron waves based on wave–particle duality. In his time, the wave and particle interpretations of light and matter were seen as being at odds with one another, but de Broglie suggested that these seemingly different characteristics were instead the same behavior observed from different perspectives: particles can behave like waves, and waves (radiation) can behave like particles. De Broglie's proposal offered an explanation of the restricted motion of electrons within the atom. The first publications of de Broglie's idea of "matter waves" had drawn little attention from other physicists, but a copy of his doctoral thesis chanced to reach Einstein, whose response was enthusiastic. Einstein stressed the importance of de Broglie's work, both explicitly and by building further on it.
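De Broglie's central relation, in its standard form, assigns to a particle of momentum \(p\) a wavelength
\[
\lambda = \frac{h}{p},
\]
and requiring a whole number of such wavelengths to fit around a circular orbit, \(n\lambda = 2\pi r\), reproduces Bohr's quantisation of angular momentum, \(L = n\hbar\); this is the sense in which matter waves explain the restricted motion of electrons within the atom.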
In 1925, Austrian-born physicist Wolfgang Pauli developed the Pauli exclusion principle, which states that no two electrons around a single nucleus in an atom can occupy the same quantum state simultaneously, as described by four quantum numbers. Pauli made major contributions to quantum mechanics and quantum field theory - he was awarded the 1945 Nobel Prize for Physics for his discovery of the Pauli exclusion principle - as well as solid-state physics, and he successfully hypothesized the existence of the neutrino. In addition to his original work, he wrote masterful syntheses of several areas of physical theory that are considered classics of scientific literature.
In 1926 at the age of 39, Austrian theoretical physicist Erwin Schrödinger produced the papers that gave the foundations of quantum wave mechanics. In those papers he described his partial differential equation that is the basic equation of quantum mechanics and bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger's equation, unlike the solutions to Newton's equations, are wave functions that can only be related to the probable occurrence of physical events. The readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. (This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create.)
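In its time-dependent form the Schrödinger equation reads (standard notation, given here for orientation)
\[
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t} = \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})\right]\psi(\mathbf{r},t),
\]
where the wave function \(\psi\) yields only probabilities for the outcomes of measurements, in the sense described above.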
German theoretical physicist Werner Heisenberg was one of the key creators of quantum mechanics. In 1925, Heisenberg discovered a way to formulate quantum mechanics in terms of matrices. For that discovery, he was awarded the Nobel Prize for Physics for 1932. In 1927 he published his uncertainty principle, upon which he built his philosophy and for which he is best known. Heisenberg was able to demonstrate that if you were studying an electron in an atom, you could specify where it was (the electron's location) or where it was going (the electron's velocity), but not both precisely at the same time. He also made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, cosmic rays, and subatomic particles, and he was instrumental in planning the first West German nuclear reactor at Karlsruhe, together with a research reactor in Munich, in 1957. Considerable controversy surrounds his work on atomic research during World War II.
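Heisenberg's statement is usually expressed quantitatively as the inequality (modern form, with \(\sigma_x\) and \(\sigma_p\) the standard deviations of position and momentum)
\[
\sigma_x\,\sigma_p \ge \frac{\hbar}{2},
\]
so that the more sharply an electron's position is specified, the less sharply its momentum, and hence its velocity, can be specified.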
Quantum chemistry
Main article: Quantum chemistry
Some date the birth of quantum chemistry to the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926.[citation needed] However, the 1927 article of Walter Heitler and Fritz London[92] is often recognised as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was made by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Aleksandrovich Fock, to name a few.[citation needed]
Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems.[citation needed] The situation around 1930 is described by Paul Dirac:[93]
The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.
Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics, to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions. A milestone in quantum chemistry was the seminal 1951 paper of Clemens C. J. Roothaan on the Roothaan equations.[94] It opened the avenue to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals which were computed on the most advanced computers of the time.[citation needed]
In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer or Edward Teller). Glenn T. Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him the only person, along with Albert Einstein, for whom a chemical element was named during his lifetime.
Molecular biology and biochemistry
By the mid 20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book on The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features for biologically relevant molecules, they were, till the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods.[citation needed]
Diagrammatic representation of some key structural features of DNA
This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double-helical structure of DNA by constructing models constrained by and informed by the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin.[95] This discovery led to an explosion of research into the biochemistry of life.
In the same year, the Miller–Urey experiment demonstrated that basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes in the laboratory under controlled conditions.[citation needed]
In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used in the laboratory to manipulate it. PCR could be used to synthesize specific pieces of DNA and made possible the sequencing of DNA of organisms, which culminated in the huge human genome project.
An important piece of the double-helix puzzle was solved by one of Pauling's students, Matthew Meselson, together with Frank Stahl; the result of their collaboration (the Meselson–Stahl experiment) has been called "the most beautiful experiment in biology".
They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen is a component of DNA, isotopically labelled nitrogen atoms could be tracked through DNA replication in bacteria.
Late 20th century
Buckminsterfullerene, C60
In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations.[96] In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions.[97] In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions, including the Sharpless epoxidation,[98][99] Sharpless asymmetric dihydroxylation,[100][101][102] and Sharpless oxyamination.[103][104][105] In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R. Buckminster Fuller.[106] In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology.[107] In 1994, Robert A. Holton and his group achieved the first total synthesis of Taxol.[108][109][110] In 1995, Eric Cornell and Carl Wieman produced the first Bose–Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale.[111]
Mathematics and chemistry
Classically, before the 20th century, chemistry was defined as the science of the nature of matter and its transformations. It was therefore clearly distinct from physics, which was not concerned with such dramatic transformations of matter. Moreover, in contrast to physics, chemistry did not make much use of mathematics, and some chemists were even outright reluctant to apply mathematics within chemistry; Auguste Comte, writing in 1830, dismissed the idea that mathematics could ever play a meaningful role in chemistry.
However, in the second part of the 19th century, the situation changed and August Kekulé wrote in 1867:
I rather expect that we shall someday find a mathematico-mechanical explanation for what we now call atoms which will render an account of their properties.
Scope of chemistry
After the discovery by Rutherford and Bohr of the atomic structure in 1912, and by Marie and Pierre Curie of radioactivity, scientists had to change their viewpoint on the nature of matter. The experience acquired by chemists was no longer pertinent to the study of the whole nature of matter but only to aspects related to the electron cloud surrounding the atomic nuclei and the movement of the latter in the electric field induced by the former (see Born–Oppenheimer approximation). The range of chemistry was thus restricted to the nature of matter around us in conditions which are not too far (or exceptionally far) from standard conditions for temperature and pressure and in cases where the exposure to radiation is not too different from the natural microwave, visible or UV radiations on Earth. Chemistry was therefore re-defined as the science of matter that deals with the composition, structure, and properties of substances and with the transformations that they undergo.[citation needed]
However, the meaning of matter used here relates explicitly to substances made of atoms and molecules, disregarding the matter within atomic nuclei and its nuclear reactions, as well as matter within highly ionized plasmas. This does not mean that chemistry is never involved with plasma or nuclear sciences, or even bosonic fields, since areas such as quantum chemistry and nuclear chemistry are well-developed and formally recognized sub-fields of study under the chemical sciences. What is formally recognized as a subject of study under chemistry, however, is always based on concepts that describe or explain phenomena of matter at the atomic or molecular scale, from the behaviour of many molecules as an aggregate down to the effect of a single proton on a single atom. Excluded are phenomena that deal with more "exotic" types of matter (e.g. Bose–Einstein condensates, the Higgs boson, dark matter, naked singularities) and principles that refer to intrinsic abstract laws of nature whose concepts can be formulated entirely without a precise molecular or atomic paradigm (e.g. quantum chromodynamics, quantum electrodynamics, string theory, parts of cosmology (see cosmochemistry), and certain areas of nuclear physics (see nuclear chemistry)). Nevertheless, the field of chemistry is still, on our human scale, very broad, and the claim that chemistry is everywhere is accurate.
Chemical industry
Main article: Chemical industry
The later part of the nineteenth century saw a huge increase in the exploitation of petroleum extracted from the earth for the production of a host of chemicals, which largely replaced the use of whale oil, coal tar and naval stores used previously. Large-scale production and refinement of petroleum provided feedstocks for liquid fuels such as gasoline and diesel, solvents, lubricants, asphalt, waxes, and for the production of many of the common materials of the modern world, such as synthetic fibers, plastics, paints, detergents, pharmaceuticals, adhesives and ammonia as fertilizer and for other uses. Many of these required new catalysts and the utilization of chemical engineering for their cost-effective production.
In the mid-twentieth century, control of the electronic structure of semiconductor materials was made precise by the creation of large ingots of extremely pure single crystals of silicon and germanium. Accurate control of their chemical composition by doping with other elements made possible the production of the solid-state transistor in 1951 and, subsequently, of the tiny integrated circuits used in electronic devices, especially computers.
1. ^ Selected Classic Papers from the History of Chemistry
2. ^ "History of Gold". Gold Digest. Retrieved 2007-02-04.
3. ^ Photos, E., 'The Question of Meteorictic versus Smelted Nickel-Rich Iron: Archaeological Evidence and Experimental Results' World Archaeology Vol. 20, No. 3, Archaeometallurgy (February 1989), pp. 403–421. Online version accessed on 2010-02-08.
4. ^ a b W. Keller (1963) The Bible as History, p. 156 ISBN 0-340-00312-X
6. ^ Neolithic Vinca was a metallurgical culture Stonepages from news sources November 2007
7. ^ Will Durant wrote in The Story of Civilization I: Our Oriental Heritage:
11. ^ a b Will Durant (1935), Our Oriental Heritage:
13. ^ Lucretius (50 BCE). "de Rerum Natura (On the Nature of Things)". The Internet Classics Archive. Massachusetts Institute of Technology. Retrieved 2007-01-09.
14. ^ Norris, John A. (2006). "The Mineral Exhalation Theory of Metallogenesis in Pre-Modern Mineral Science". Ambix 53: 43. doi:10.1179/174582306X93183.
16. ^ Strathern, 2000. Page 79.
17. ^ Holmyard, E.J. (1957). Alchemy. New York: Dover, 1990. pp. 15, 16.
18. ^ William Royall Newman. Atoms and Alchemy: Chymistry and the experimental origins of the scientific revolution. University of Chicago Press, 2006. p.xi
19. ^ Holmyard, E.J. (1957). Alchemy. New York: Dover, 1990. pp. 48, 49.
21. ^ Brock, William H. (1992). The Fontana History of Chemistry. London, England: Fontana Press. pp. 32–33. ISBN 0-00-686173-3.
22. ^ Brock, William H. (1992). The Fontana History of Chemistry. London, England: Fontana Press. ISBN 0-00-686173-3.
23. ^ The History of Ancient Chemistry
26. ^ Dr. A. Zahoor (1997), JABIR IBN HAIYAN (Jabir), University of Indonesia
27. ^ Paul Vallely, How Islamic inventors changed the world, The Independent
(cf. Ahmad Y Hassan. "A Critical Reassessment of the Geber Problem: Part Three". Retrieved 2008-08-09. )
33. ^ Alakbarov, Farid (2001). "A 13th-Century Darwin? Tusi's Views on Evolution". Azerbaijan International 9: 2.
35. ^ Asarnow, Herman (2005-08-08). "Sir Francis Bacon: Empiricism". An Image-Oriented Introduction to Backgrounds for English Renaissance Literature. University of Portland. Retrieved 2007-02-22.
36. ^ Crosland, M.P. (1959). "The use of diagrams as chemical 'equations' in the lectures of William Cullen and Joseph Black." Annals of Science, Vol 15, No. 2, June
37. ^ Robert Boyle
41. ^ Ursula Klein (July 2007). "Styles of Experimentation and Alchemical Matter Theory in the Scientific Revolution". Metascience (Springer) 16 (2): 247–256 [247]. ISSN 1467-9981. doi:10.1007/s11016-007-9095-8.
42. ^ Nordisk familjebok – Cronstedt: "den moderna mineralogiens och geognosiens grundläggare" = "the modern mineralogy's and geognosie's founder"
43. ^ Cooper, Alan (1999). "Joseph Black". History of Glasgow University Chemistry Department. University of Glasgow Department of Chemistry. Archived from the original on 2006-04-10. Retrieved 2006-02-23.
45. ^ Partington, J.R. (1989). A Short History of Chemistry. Dover Publications, Inc. ISBN 0-486-65977-1.
47. ^ "Joseph Priestley". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
48. ^ "Carl Wilhelm Scheele". History of Gas Chemistry. Center for Microscale Gas Chemistry, Creighton University. 2005-09-11. Retrieved 2007-02-23.
49. ^ Saunders, Nigel (2004). Tungsten and the Elements of Groups 3 to 7 (The Periodic Table). Chicago: Heinemann Library. ISBN 1-4034-3518-9.
52. ^ Mottelay, Paul Fleury (2008). Bibliographical History of Electricity and Magnetism (Reprint of 1892 ed.). Read Books. p. 247. ISBN 1-4437-2844-6.
53. ^ "Inventor Alessandro Volta Biography". The Great Idea Finder. The Great Idea Finder. 2005. Retrieved 2007-02-23.
54. ^ Lavoisier, Antoine (1743-1794) -- from Eric Weisstein's World of Scientific Biography, ScienceWorld
59. ^ a b Pullman, Bernard (2004). The Atom in the History of Human Thought. Reisinger, Axel. USA: Oxford University Press Inc. ISBN 0-19-511447-7.
60. ^ "John Dalton". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
61. ^ "Proust, Joseph Louis (1754-1826)". 100 Distinguished Chemists. European Association for Chemical and Molecular Science. 2005. Retrieved 2007-02-23.
63. ^ Davy, Humphry (1808). "On some new Phenomena of Chemical Changes produced by Electricity, particularly the Decomposition of the fixed Alkalies, and the Exhibition of the new Substances, which constitute their Bases". Philosophical Transactions of the Royal Society of London (Royal Society of London.) 98 (0): 1–45. doi:10.1098/rstl.1808.0001.
64. ^ Weeks, Mary Elvira (1933). "XII. Other Elements Isolated with the Aid of Potassium and Sodium: Beryllium, Boron, Silicon and Aluminum". The Discovery of the Elements. Easton, Pennsylvania: Journal of Chemical Education. ISBN 0-7661-3872-0.
66. ^ Sir Humphry Davy (1811). "On a Combination of Oxymuriatic Gas and Oxygene Gas". Philosophical Transactions of the Royal Society 101 (0): 155–162. doi:10.1098/rstl.1811.0008.
67. ^ Gay-Lussac, J. L. (L'An X – 1802), "Recherches sur la dilatation des gaz et des vapeurs" [Researches on the expansion of gases and vapors], Annales de chimie 43: 137–175. English translation (extract).
On page 157, Gay-Lussac mentions the unpublished findings of Charles: "Avant d'aller plus loin, je dois prévenir que quoique j'eusse reconnu un grand nombre de fois que les gaz oxigène, azote, hydrogène et acide carbonique, et l'air atmosphérique se dilatent également depuis 0° jusqu'a 80°, le cit. Charles avait remarqué depuis 15 ans la même propriété dans ces gaz ; mais n'avant jamais publié ses résultats, c'est par le plus grand hasard que je les ai connus." (Before going further, I should inform [you] that although I had recognized many times that the gases oxygen, nitrogen, hydrogen, and carbonic acid [i.e., carbon dioxide], and atmospheric air also expand from 0° to 80°, citizen Charles had noticed 15 years ago the same property in these gases; but having never published his results, it is by the merest chance that I knew of them.)
68. ^ J. Dalton (1802) "Essay IV. On the expansion of elastic fluids by heat," Memoirs of the Literary and Philosophical Society of Manchester, vol. 5, pt. 2, pages 595-602.
73. ^ Gay-Lussac, J. (1813). "Sur la combination de l'iode avec d'oxigène". Annales de chimie 88: 319.
74. ^ Gay-Lussac, J. (1814). "Mémoire sur l'iode". Annales de chimie 91: 5.
75. ^ Davy, H. (1813). "Sur la nouvelle substance découverte par M. Courtois, dans le sel de Vareck". Annales de chimie 88: 322.
76. ^ Davy, Humphry (January 1, 1814). "Some Experiments and Observations on a New Substance Which Becomes a Violet Coloured Gas by Heat". Phil. Trans. R. Soc. Lond. 104: 74. doi:10.1098/rstl.1814.0007.
77. ^ David Knight, ‘Davy, Sir Humphry, baronet (1778–1829)’, Oxford Dictionary of National Biography, Oxford University Press, 2004 accessed 6 April 2008
78. ^ "History of Chirality". Stheno Corporation. 2006. Archived from the original on 2007-03-07. Retrieved 2007-03-12.
79. ^ "Lambert-Beer Law". Sigrist-Photometer AG. 2007-03-07. Retrieved 2007-03-12.
80. ^ "Benjamin Silliman, Jr. (1816–1885)". Picture History. Picture History LLC. 2003. Retrieved 2007-03-24.
81. ^ Moore, F. J. (1931). A History of Chemistry. McGraw-Hill. pp. 182–1184. ISBN 0-07-148855-3. (2nd edition)
82. ^ "Jacobus Henricus van't Hoff". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
83. ^ O'Connor, J. J.; Robertson, E.F. (1997). "Josiah Willard Gibbs". MacTutor. School of Mathematics and Statistics University of St Andrews, Scotland. Retrieved 2007-03-24.
84. ^ Weisstein, Eric W. (1996). "Boltzmann, Ludwig (1844–1906)". Eric Weisstein's World of Scientific Biography. Wolfram Research Products. Retrieved 2007-03-24.
85. ^ "Svante August Arrhenius". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
86. ^ "Jacobus H. van 't Hoff: The Nobel Prize in Chemistry 1901". Nobel Lectures, Chemistry 1901–1921. Elsevier Publishing Company. 1966. Retrieved 2007-02-28.
87. ^ "Henry Louis Le Châtelier". World of Scientific Discovery. Thomson Gale. 2005. Retrieved 2007-03-24.
88. ^ "Emil Fischer: The Nobel Prize in Chemistry 1902". Nobel Lectures, Chemistry 1901–1921. Elsevier Publishing Company. 1966. Retrieved 2007-02-28.
89. ^ "History of Chemistry". Intensive General Chemistry. Columbia University Department of Chemistry Undergraduate Program. Retrieved 2007-03-24.
90. ^ "Alfred Werner: The Nobel Prize in Chemistry 1913". Nobel Lectures, Chemistry 1901–1921. Elsevier Publishing Company. 1966. Retrieved 2007-03-24.
91. ^ "Alfred Werner: The Nobel Prize in Physics 1911". Nobel Lectures, Physics 1901–1921. Elsevier Publishing Company. 1967. Retrieved 2007-03-24.
92. ^ W. Heitler and F. London, Wechselwirkung neutraler Atome und Homöopolare Bindung nach der Quantenmechanik, Z. Physik, 44, 455 (1927).
93. ^ P.A.M. Dirac, Quantum Mechanics of Many-Electron Systems, Proc. R. Soc. London, A 123, 714 (1929).
94. ^ C.C.J. Roothaan, A Study of Two-Center Integrals Useful in Calculations on Molecular Structure, J. Chem. Phys., 19, 1445 (1951).
95. ^ Watson, J. and Crick, F., "Molecular Structure of Nucleic Acids" Nature, April 25, 1953, p 737–8
96. ^ W. J. Hehre, W. A. Lathan, R. Ditchfield, M. D. Newton, and J. A. Pople, Gaussian 70 (Quantum Chemistry Program Exchange, Program No. 237, 1970).
97. ^ Catalyse de transformation des oléfines par les complexes du tungstène. II. Télomérisation des oléfines cycliques en présence d'oléfines acycliques Die Makromolekulare Chemie Volume 141, Issue 1, Date: 9 February 1971, Pages: 161–176 Par Jean-Louis Hérisson, Yves Chauvin doi:10.1002/macp.1971.021410112
98. ^ Katsuki, T.; Sharpless, K. B. J. Am. Chem. Soc. 1980, 102, 5974. (doi:10.1021/ja00538a077)
100. ^ Jacobsen, E. N.; Marko, I.; Mungall, W. S.; Schroeder, G.; Sharpless, K. B. J. Am. Chem. Soc. 1988, 110, 1968. (doi:10.1021/ja00214a053)
101. ^ Kolb, H. C.; Van Nieuwenhze, M. S.; Sharpless, K. B. Chem. Rev. 1994, 94, 2483–2547. (Review) (doi:10.1021/cr00032a009)
102. ^ Gonzalez, J.; Aurigemma, C.; Truesdale, L. Org. Syn., Coll. Vol. 10, p.603 (2004); Vol. 79, p.93 (2002). (Article)
103. ^ Sharpless, K. B.; Patrick, D. W.; Truesdale, L. K.; Biller, S. A. J. Am. Chem. Soc. 1975, 97, 2305. (doi:10.1021/ja00841a071)
104. ^ Herranz, E.; Biller, S. A.; Sharpless, K. B. J. Am. Chem. Soc. 1978, 100, 3596–3598. (doi:10.1021/ja00479a051)
105. ^ Herranz, E.; Sharpless, K. B. Org. Syn., Coll. Vol. 7, p.375 (1990); Vol. 61, p.85 (1983). (Article)
106. ^ "The Nobel Prize in Chemistry 1996". The Nobel Foundation. Retrieved 2007-02-28.
107. ^ "Benjamin Franklin Medal awarded to Dr. Sumio Iijima, Director of the Research Center for Advanced Carbon Materials, AIST". National Institute of Advanced Industrial Science and Technology. 2002. Retrieved 2007-03-27.
108. ^ First total synthesis of taxol 1. Functionalization of the B ring Robert A. Holton, Carmen Somoza, Hyeong Baik Kim, Feng Liang, Ronald J. Biediger, P. Douglas Boatman, Mitsuru Shindo, Chase C. Smith, Soekchan Kim, et al.; J. Am. Chem. Soc.; 1994; 116(4); 1597–1598. DOI Abstract
109. ^ First total synthesis of taxol. 2. Completion of the C and D rings Robert A. Holton, Hyeong Baik Kim, Carmen Somoza, Feng Liang, Ronald J. Biediger, P. Douglas Boatman, Mitsuru Shindo, Chase C. Smith, Soekchan Kim, and et al. J. Am. Chem. Soc.; 1994; 116(4) pp 1599–1600 DOI Abstract
110. ^ A synthesis of taxusin Robert A. Holton, R. R. Juo, Hyeong B. Kim, Andrew D. Williams, Shinya Harusawa, Richard E. Lowenthal, Sadamu Yogai J. Am. Chem. Soc.; 1988; 110(19); 6558–6560. Abstract
111. ^ "Cornell and Wieman Share 2001 Nobel Prize in Physics". NIST News Release. National Institute of Standards and Technology. 2001. Retrieved 2007-03-27.
Shot noise in non-adiabatically driven nanoscale conductors
Institut für Physik, Universität Augsburg, Universitätsstraße 1, 86135 Augsburg, Germany
We investigate the noise properties of pump currents through molecular wires and coupled quantum dots. As a model we employ a two-level system that is connected to electron reservoirs and is non-adiabatically driven. Concerning the electron-electron interaction, we focus on two limits: non-interacting electrons and strong Coulomb repulsion. While the former case is treated within a Floquet scattering formalism, we derive for the latter case a master equation formalism for the computation of the current and the zero-frequency noise. For a pump operated close to internal resonances, the differences between the non-interacting and the strongly interacting limit turn out to be surprisingly small.
Keywords: driven transport, shot noise, Coulomb repulsion
PACS: 05.60.Gg, 73.63.-b, 72.40.+w, 05.40.-a
Franz J. Kaiser and Sigmund Kohler (corresponding author; phone: +49 821 598 3316, fax: +49 821 598 3222)
1 Introduction
Recent experiments with coherently coupled quantum dots [1, 2, 3, 4] and molecular wires [5, 6] probe the transport properties of small systems with a discrete level structure. These experimental achievements have generated new theoretical interest in the transport properties of nanoscale systems [7, 8]. One particular field of interest is the interplay of transport and electronic excitations by an oscillating gate voltage, a microwave field, or an infrared laser, respectively. Such excitations give rise to intriguing phenomena like photon-assisted tunnelling [9, 10, 11, 12, 13, 14, 3, 15, 16] and the suppression of both the dc current [17, 18] and the zero-frequency noise [19, 20].
A further intriguing phenomenon in this context is electron pumping induced by a cyclic change of the parameters in the absence of any external bias voltage [21, 22, 23]. For adiabatically slow driving, the transferred charge per cycle is determined by the area enclosed in parameter space during the cyclic evolution [24, 25]. This implies that the resulting current is proportional to the driving frequency and, thus, suggests that non-adiabatic electron pumping is more effective. For practical applications, it is also desirable that the pump current flows at a sufficiently low noise level. It has been found that adiabatic pumps can be practically noiseless [26]. This, however, comes at the expense of having only a small or even vanishing current [27]. Outside the adiabatic regime, when the driving frequency is close to the internal resonances of the conductor, the current assumes much larger values while its noise nevertheless remains clearly sub-Poissonian [28]. Since this prediction of an optimal working point has been made for non-interacting electrons, the question of the influence of Coulomb repulsion arises.
An intuitive description of the electron transport through mesoscopic systems is provided by the Landauer scattering formula [29, 30] and its various generalisations. In this formalism, both the average current [31] and the transport noise characteristics [30, 32] can be expressed in terms of the quantum transmission probabilities of the respective scattering channels. If one heuristically postulates that the current obeys a scattering formula, one should worry whether this complies with the Pauli principle or whether it has to be ensured by introducing "blocking factors" [31]. For static conductors the current, being the experimentally relevant quantity, is independent of these blocking factors, which renders this question rather academic. This is no longer the case when the scattering potential is time-dependent. Then a scattered electron can absorb or emit energy quanta from the driving field, which opens inelastic transport channels [33, 34, 35]. Blocking factors can then indeed have a net effect on the current, and it has been suggested to test the demand for them experimentally with driven conductors [36, 37]. In order to avoid such conflicts, one should start from a many-particle description. In this spirit, within a Green function approach, a formal solution for the current through a time-dependent conductor has been presented, e.g., in Refs. [36] and [38] without taking advantage of the full Floquet theory for the wire. Nevertheless, in some special cases like, e.g., conductors consisting of only a single level [39, 40] or scattering by a piecewise constant potential [41], an explicit solution becomes feasible. A complete Floquet theory provides, in addition to a current formula, a prescription for the computation of the Green function [42, 16].
The spectral density of the current fluctuations has been derived for the low-frequency ac conductance [43, 44] and for the scattering by a slowly time-dependent potential [45]. For arbitrary driving frequencies, the noise has been characterised by its zero-frequency component [19]. A remarkable feature of the current noise in the presence of time-dependent fields is its dependence on the phase of the transmission amplitudes [45, 19]. By contrast, both the current in the driven case [19] and the noise in the static case [30] depend solely on transmission probabilities.
When electron-electron interactions beyond the mean-field level become relevant, the direct application of a Landauer-like theory is no longer possible and one has to resort to other methods like, e.g., a master equation description for the reduced density operator of the wire [46, 47, 48, 49]. For time-dependent conductors, this enables a rather efficient treatment of transport problems after decomposing the wire density operator into a Floquet basis. Then it is possible to study relatively large driven conductors [16] and to include also electron-electron interactions [50, 51] and electron-phonon interactions [52]. For the computation of the current fluctuations, one can employ a generalised master equation that resolves the number of the transported electrons. This degree of freedom is traced out after introducing a counting variable [53]. For various static transport problems, this approach has been followed by several groups [54, 55, 56, 57, 58, 59, 60].
After introducing our model, we review in Sec. 2 the Floquet scattering theory for the computation of the current and the zero-frequency noise. In Sec. 3, we derive a master equation approach which is applicable also in the presence of electron-electron interactions. In Sec. 4, these formalisms are employed to investigate the influence of Coulomb repulsion on the noise in non-adiabatic electron pumps.
1.1 Wire-lead model
A frequently used model for nanoscale conductors such as molecular wires or coupled quantum dots is sketched in Fig. 1. It is described by the time-dependent Hamiltonian
where the different terms correspond to the central conductor (“wire”), electron reservoirs (“leads”), and the wire-lead couplings, respectively. We focus on the regime of coherent quantum transport where the main physics at work occurs on the wire itself. In doing so, we neglect other possible influences originating from driving-induced hot electrons in the leads and dissipation on the wire. Then, the wire Hamiltonian reads in a tight-binding approximation with orbitals
For a molecular wire, this constitutes the so-called Hückel description where each site corresponds to one atom. The fermion operators , annihilate and create, respectively, an electron with spin in the orbital . The influence of an applied ac field or an oscillating gate voltage with frequency results in a periodic time-dependence of the wire Hamiltonian: . For the interaction Hamiltonian, we assume a capacitor model, so that
where describes the number of electrons on the wire. Below we shall focus on two limits, namely the interaction-free case and strong interaction, , which finally means that the Coulomb repulsion is so strong that only states with zero or one excess electron play a role.
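Since the explicit formulas are not reproduced here, the following is only a sketch of the typical form such a wire-lead Hamiltonian takes; the symbols \(H_{nn'}(t)\), \(c_{n s}\), \(U\), and \(N\) are assumed notation rather than quotations from the paper:
\[
H(t) = H_{\text{wire}}(t) + H_{\text{leads}} + H_{\text{contacts}}, \qquad
H_{\text{wire}}(t) = \sum_{n,n',s} H_{nn'}(t)\, c^{\dagger}_{n s} c_{n' s} + H_{\text{interaction}},
\]
with a capacitor-type interaction of the common form
\[
H_{\text{interaction}} = \frac{U}{2}\, N(N-1), \qquad N = \sum_{n,s} c^{\dagger}_{n s} c_{n s},
\]
which vanishes for zero or one electron and penalises multiple occupation, consistent with the two limits \(U = 0\) and \(U \to \infty\) considered in the text.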
The leads are modelled by ideal electron gases,
where () creates an electron in the state () in the left (right) lead. The wire-lead tunnelling Hamiltonian
establishes the contact between the sites , and the respective lead. This tunnelling coupling is described by the spectral density
of lead . In the following, we restrict ourselves to the so-called wide-band limit in which the spectral density is assumed to be energy-independent, .
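A standard definition of such a spectral density, with lead dispersion \(\varepsilon_{\ell q}\) and tunnel matrix elements \(V_{\ell q}\) as assumed notation, is
\[
\Gamma_{\ell}(\varepsilon) = 2\pi \sum_{q} |V_{\ell q}|^2\, \delta(\varepsilon - \varepsilon_{\ell q}),
\]
and the wide-band limit consists of replacing this function by an energy-independent constant, \(\Gamma_{\ell}(\varepsilon) \to \Gamma_{\ell}\).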
Figure 1: Level structure of a double quantum dot with orbitals. The terminating sites are coupled to leads with chemical potential and , respectively.
To fully specify the dynamics, we choose as an initial condition for the left/right lead a grand-canonical electron ensemble at temperature and electro-chemical potential . Thus, the initial density matrix reads
where is the number of electrons in lead and denotes the Boltzmann constant times temperature. An applied voltage maps to a chemical potential difference with being the electron charge. Then, at initial time , the only nontrivial expectation values of the wire operators read where denotes the Fermi function.
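For reference, the grand-canonical initial state implies the usual Fermi occupation of the lead states (standard form; the sign convention for the voltage is assumed),
\[
f_{\ell}(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu_{\ell})/k_{B}T} + 1}, \qquad \ell = L, R, \qquad \mu_L - \mu_R = eV,
\]
so that the relevant initial expectation values are given by the Fermi function of the respective lead.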
1.2 Charge, current, and current fluctuations
To avoid the explicit appearance of commutators in the definition of correlation functions, we perform the derivation of the central transport quantities in the Heisenberg picture. As a starting point we choose the operator
which describes the charge accumulated in lead with respect to the initial state. Due to total charge conservation, equals the net charge transmitted across the contact ; its time derivative defines the corresponding current
The current noise is described by the symmetrised correlation function
of the current fluctuation operator , where the anticommutator ensures hermiticity. At long times, shares the time-periodicity of the driving [42]. Therefore, it is possible to characterise the noise level by the zero-frequency component of averaged over the driving period,
Moreover for two-terminal devices, is independent of the contact , i.e., .
The evaluation of the zero-frequency noise directly from its definition (11) can be tedious due to the explicit appearance of both times, and . This inconvenience can be circumvented by employing the relation
which follows from the integral representation of Eqs. (8) and (9), , in the limit . By averaging Eq. (12) over the driving period and using , we obtain
where denotes the charge fluctuation operator and the time average. The fact that the time average can be evaluated from the limit allows one to interpret the zero-frequency noise as the "charge diffusion coefficient". As a dimensionless measure for the relative noise strength, we employ the so-called Fano factor [61]
which can provide information about the nature of the transport mechanism [30, 62]. Here, denotes the time-average of the current expectation value . Historically, the zero-frequency noise (11) contains a factor , i.e. , resulting from a different definition of the Fourier transform. Then, the Fano factor is defined as .
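Written out, a common convention consistent with the remark above is (with \(\bar S\) the period-averaged zero-frequency noise and \(\bar I\) the period-averaged current)
\[
F = \frac{\bar S}{e\,|\bar I|},
\]
while with the alternative definition of the noise carrying an extra factor of 2, the Fano factor reads \(F = \bar S / 2e|\bar I|\); in either convention \(F = 1\) corresponds to Poissonian and \(F < 1\) to sub-Poissonian transport.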
1.3 Full counting statistics
A more complete picture of the current fluctuations beyond second order correlations is provided by the full counting statistics. It is determined by the moment generating function
and allows the direct computation of the th moment of the charge in the left lead via the relation
Subtracting from the moments the trivial contributions that depend on a shift of the initial values, one obtains the cumulants. They are defined and generated via the so-called cumulant generating function which replaces in Eq. (16) [63], so that the th cumulant reads
In a continuum limit for the leads, both the moments and the cumulants diverge as a function of time, and one focusses on the rates at which these quantities change in the long-time limit. This establishes between the first two cumulants and and the relations
For driven systems, these quantities are time-dependent even in the asymptotic limit and, thus, we characterise the transport by the corresponding averages over one driving period. Then expressions (18) and (19) become identical to the previously defined time averages and , respectively. Herein we restrict ourselves to the computation of the first and the second cumulant, despite the fact that also higher-order cumulants can be measured [64, 65].
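In compact notation (assumed here; \(N_L\) denotes the electron number in the left lead and \(\chi\) the counting variable), the moment and cumulant generating functions discussed above read
\[
M(\chi,t) = \big\langle e^{i\chi N_L(t)} \big\rangle, \qquad
K(\chi,t) = \ln M(\chi,t), \qquad
\langle N_L^k \rangle = \frac{\partial^k M}{\partial(i\chi)^k}\bigg|_{\chi=0},
\]
with the rates of the first two cumulants, averaged over one driving period, giving the dc current and the zero-frequency noise, \(\bar I \propto \overline{\dot C_1}\) and \(\bar S \propto \overline{\dot C_2}\), up to factors of the electron charge.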
2 Floquet scattering theory
We now derive, for the model described in Section 1.1 in the absence of electron-electron interactions, expressions for both the current through the wire and the associated noise by solving the corresponding Heisenberg equations of motion. Since for , both spin directions contribute independently to the current, we ignore the spin index, which means that we consider the current per spin projection. We start from the equations of motion for the annihilation operators in lead ,
which are straightforwardly integrated to read
where denotes the molecular wire site attached to lead , i.e., and . Inserting (21) into the Heisenberg equations for the wire operators yields in the asymptotic limit
For an energy-dependent spectral density , the dissipative part of the Heisenberg equation (22) acquires a memory kernel which complicates not only its solution but also the derivation of a current formula. For details, we refer the reader to Ref. [16].
The influence of the operator-valued Gaussian noise
is fully specified by the expectation values and
which follow directly from the definition (24) and the initial conditions (7). It is convenient to define the Fourier representation of the noise operator, whose correlation function
follows directly from Eq. (25).
2.1 Retarded Green function
The equations of motion (22) and (23) represent a set of linear inhomogeneous equations and, thus, can be solved with the help of a retarded Green function , which obeys
where and is the one-particle Hamiltonian corresponding to Eq. (2). At this stage, it is important to note that the propagator of the homogeneous equations obeys . Accordingly, the Fourier representation of the retarded Green function
is also -periodic in the time argument, so that it can be represented as a Fourier series. Physically, the Fourier coefficients describe the propagation of an electron with initial energy under the absorption (emission) of photons for (). In the limiting case of a time-independent situation, all sideband contributions with vanish and becomes time-independent and identical to . From the definition (27) of the Green function and its Fourier representation (28), it can be shown that the solution of the Heisenberg equations (22), (23) reads
where we have defined .
A proof starts from the definition of the Green function, Eq. (27). By Fourier transformation with respect to , we obtain the relation
which we multiply by from the left. The difference between the resulting expression and its hermitian adjoint with and interchanged is relation (30).
2.2 Current through the driven molecular wire
Owing to charge conservation, the (net) current flowing from lead into the molecular wire is determined by the negative time derivative of the charge in lead . Thus, the current operator reads , where denotes the corresponding electron number and the electron charge. From Eqs. (21) and (24) then follows
This operator-valued expression for the time-dependent current is a convenient starting point for the evaluation of expectation values like the dc and ac current and the current noise.
2.2.1 Time-average current
To obtain the current , we insert the solution (29) of the Heisenberg equation into the current operator (32) and use the expectation values (26). The resulting expression
still contains back-scattering terms and, thus, is not of a “scattering form”. Indeed, bringing (33) into a form that resembles the static current formula requires some tedious algebra. Such a derivation has been presented for the linear conductance of time-independent systems [66], for tunnelling barriers [67] and mesoscopic conductors [68] in the static case for finite voltage, and for a wire consisting of levels that couple equally strong to both leads [38]. For the periodically time-dependent case in the absence of electron-electron interactions, such an expression has been derived only recently [19, 42].
Inserting the matrix element of equation (30) eliminates the back-scattering terms and yields for the time-dependent current the expression
where denotes the charge oscillating between the left lead and the wire. Obviously, since is time-periodic and bounded, its time derivative cannot contribute to the average current. The time-dependent current is determined by the time-dependent transmission
The corresponding expression for follows from the replacement . We emphasise that expression (34) obeys the form of the current formula for a static conductor within a scattering formalism. In particular, consistent with Refs. [36, 31], no “Pauli blocking factors” appear in our derivation.
The dc current obtained from (34) by time-averaging can be written in an even more compact form if we insert for the Green function the Fourier representation (28). This results in
denote the transmission probabilities for electrons from the right to the left lead and vice versa, respectively, with initial energy and final energy , i.e., the probability for a scattering event under the absorption (emission) of photons if ().
For a static situation, all contributions with vanish and . Therefore, it is possible to write the current (36) as a product of a single transmission , which is independent of the direction, and the difference of the Fermi functions, . We emphasise that in the driven case this no longer holds true.
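In that static limit the expressions above reduce to the familiar Landauer form (a standard result; the prefactor assumes the per-spin convention used in this section),
\[
\bar I = \frac{e}{h}\int d\varepsilon\; T(\varepsilon)\,\big[f_L(\varepsilon) - f_R(\varepsilon)\big],
\]
whereas in the driven case the transmissions for the two directions differ and no such factorisation into a single transmission times a difference of Fermi functions is possible.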
2.2.2 Noise power
In order to derive a related expression for the time-averaged current-current correlation function (11), we insert the current operator (32) and the solution (29) of the Heisenberg equations of motion. Then, we again employ relation (30) and the shorthand notation , so that we finally obtain
2.2.3 Floquet decomposition in the wide-band limit
Solving the equations of motion (27) for the Green function is equivalent to computing a complete set of solutions for the equation
which is linear and possesses time-dependent, -periodic coefficients. Thus, it is possible to construct a complete set of solutions with the Floquet ansatz
The so-called Floquet states obey the time-periodicity of and have been decomposed into a Fourier series. In a Hilbert space that is extended by a periodic time coordinate, the so-called Sambe space [69], they obey the Floquet eigenvalue equation [70, 71]
Due to the Brillouin zone structure of the Floquet spectrum [72, 69, 70], it is sufficient to compute all eigenvalues of the first Brillouin zone, . Since the operator on the l.h.s. of Eq. (43) is non-Hermitian, the eigenvalues are generally complex-valued and the (right) eigenvectors are not mutually orthogonal. Thus, to determine the propagator, we need to solve also the adjoint Floquet equation yielding again the same eigenvalues but providing the adjoint eigenvectors . It can be shown that the Floquet states together with the adjoint states form at equal times a complete bi-orthogonal basis: and . A proof requires to account for the time-periodicity of the Floquet states since the eigenvalue equation (43) holds in a Hilbert space extended by a periodic time coordinate [73, 70].
Using the Floquet equation (43), it is straightforward to show that the propagator can be written as
In general, the Floquet equation (43) has to be solved numerically. In the zero temperature limit considered here, the Fermi functions in the expressions for the average current (36) and the zero-frequency noise (39) become step functions. Therefore, the integrands are rational functions and the remaining energy integrals can be performed analytically.
3 Master equation approach
In the presence of electron-electron interactions, an exact treatment of the electron transport within a scattering theory is no longer possible and a master equation formalism can be an appropriate tool for the computation of currents [46, 51, 13, 52, 16]. Recently, master equations have been established for the computation of current noise of various static conductors as well [53, 54, 55, 56, 57, 58, 59, 60]. In the following, we develop such an approach for the case of periodically time-dependent conductors.
3.1 Perturbation theory and reduced density operator
We start our derivation of a master equation formalism from the Liouville-von Neumann equation for the total density operator . By standard techniques we obtain the exact equation of motion
where the tilde denotes the interaction picture with respect to the lead and the wire Hamiltonian, , and is the propagator without the coupling. Below we will employ Floquet theory in order to obtain explicit expressions for these operators.
As already discussed above, the moment generating function contains the full information about the counting statistics. For its explicit computation, we define in the Hilbert space of the wire the operator
whose limit obviously is the reduced density operator of the wire, . After tracing out the wire degrees of freedom, becomes the moment generating function . It will prove convenient to decompose into a Taylor series,
where the coefficients provide direct access to the moments .
Our strategy is now to derive from the master equation (47) for the full density operator an equation of motion for the . For that purpose, we transform the master equation for back to the Schrödinger picture and multiply it from the left by the operator . By tracing out the lead degrees of freedom and using the commutation relations and , we obtain
In order to achieve this compact notation, we have defined the superoperators and the time-dependent Liouville operator
which also determines the time-evolution of the reduced density operator, . The tilde denotes the interaction picture operator and the Fermi function of lead , while . The current operators
describe the tunnelling of an electron from the left lead to the wire and the opposite process, respectively. Note that these superoperators still contain a non-trivial time-dependence stemming from the interaction-picture representation of the creation and annihilation operators of wire electrons.
3.2 Computation of moments and cumulants
For computation of the current (18) and the zero-frequency noise (19), we generalise the approach of Ref. [56] to the time-dependent case. Since we restrict the noise characterisation to the Fano factor, it is sufficient to compute the long-time behaviour of the first and the second moment of the electron number in the left lead. This information is fully contained in the time-derivative of the operator up to second order in , for which we obtain by Taylor expansion of the equation of motion (50) the hierarchy
The first equation determines the time-evolution of the reduced density operator, which in the long-time limit becomes the stationary solution . Note that for a driven system, it generally is time-dependent. Replacing in Eq. (55) by and using the fact that for any operator , we obtain the stationary current
The dc current follows simply by averaging over one driving period and one ends up with the current formula of Ref. [74].
The computation of is hindered by the fact that the inverse of a Liouvillian generally does not exist. For static systems this is obvious from the fact that the stationary solution fulfils , which implies that is singular. This unfortunately also complicates the computation of the second cumulant, and we proceed in the following way: We start from Eq. (12), which relates the zero-frequency noise to the charge fluctuation in the leads, and express the time derivatives of the first and the second moment of the electron number in the left lead in terms of the operators . From the equations of motion (55) and (56), we then find , where we again used the relation . An important observation is now that the first part of this expression vanishes for , which can easily be demonstrated by inserting the current expectation value (57). Since acts as a projector onto the stationary solution , we can define the "perpendicular" part
which fulfils the relation and obeys the equation of motion
We will see below that in contrast to , the long-time limit of the traceless can be computed directly from the equation of motion (59). Upon inserting Eq. (58) into the equation of motion (56), we finally obtain for the still time-dependent “charge diffusion coefficient” the expression
whose time-average finally provides the Fano factor .
3.3 Floquet decomposition
The remaining task is now to compute the stationary solutions and from the time-dependent equations of motion (54) and (55). Like for the computation of the dc current in our previous work [74], we solve this problem within a Floquet treatment of the isolated wire, which provides a convenient representation of the electron creation and annihilation operators.
3.3.1 Fermionic Floquet operators
In the driven wire Hamiltonian (2), the single-particle contribution commutes with the interaction term and, thus, these two Hamiltonians possess a complete set of common many-particle eigenstates. Here we start by diagonalising the first part of the Hamiltonian which describes the single-particle dynamics determined by the time-periodic matrix elements . According to the Floquet theorem, the corresponding (single particle) Schrödinger equation possesses a complete solution of the form
with the so-called quasienergies and the -periodic Floquet states
The Floquet states and the quasienergies are obtained by solving the eigenvalue problem
whose solution allows one to construct via Slater determinants many-particle Floquet states. In analogy to the quasimomenta in Bloch theory for spatially periodic potentials, the quasienergies come in classes
of which all members represent the same physical solution of the Schrödinger equation. Thus we can restrict ourselves to states within one Brillouin zone like for example .
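For readers who want to experiment numerically, the sketch below (my own illustration with an arbitrary driven two-level Hamiltonian, not the wire model of this paper) shows the standard way to obtain quasienergies: build the one-period propagator, diagonalise it, and read off its eigenphases, which are defined only modulo the driving frequency, exactly as in the Brillouin-zone picture above.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative driven two-level system H(t) = -Delta*sigma_x + A*cos(Omega*t)*sigma_z (hbar = 1).
Delta, A, Omega = 1.0, 3.0, 5.0
T = 2 * np.pi / Omega
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return -Delta * sx + A * np.cos(Omega * t) * sz

# Time-ordered product of short-step propagators over one driving period.
steps = 2000
dt = T / steps
U = np.eye(2, dtype=complex)
for k in range(steps):
    U = expm(-1j * H((k + 0.5) * dt) * dt) @ U

# Floquet theorem: U(T) has eigenvalues exp(-i * eps_alpha * T), so the quasienergies
# follow from the eigenphases of U(T) and are defined only modulo Omega.
eigvals = np.linalg.eigvals(U)
quasienergies = np.angle(eigvals) / (-T)
print(np.sort(quasienergies))   # folded into one Brillouin zone [-Omega/2, Omega/2)
```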
For the numerical computation of the operators and , it is essential to have an explicit expression for the interaction picture representation of the wire operators. It can be obtained from the (fermionic) Floquet creation and annihilation operators [16] defined via the transformation
The inverse transformation
follows from the mutual orthogonality and the completeness of the Floquet states at equal times [70]. Note that the right-hand side of Eq. (66) becomes time independent after the summation. The Floquet annihilation operator (65) has the interaction picture representation
with the important feature that the time difference enters only via the exponential prefactor. This allows us to evaluate the -integration of the master equation (54) after a Floquet decomposition. Relation (68) can easily be shown by computing the time derivative with respect to which by use of the Floquet equation (63) becomes
Together with the initial condition, relation (68) follows. Note that the time evolution induced by conserves the number of electrons on the wire.
3.3.2 Master equation and current formula
In order to make use of the Floquet ansatz, we decompose the master equation (54) and the current formula (57) into the Floquet basis derived in the last subsection. For that purpose we use the fact that we are finally interested in the current at asymptotically large times in the limit of large interaction . The latter has the consequence that only wire states with at most one excess electron play a role, so that the stationary density operator can be decomposed into the dimensional basis , where denotes the wire state in the absence of excess electrons and . Moreover, it can be shown that at large times, the density operator becomes diagonal in the electron number , so that a proper ansatz reads
Note that we keep terms with , which means that we work beyond a rotating-wave approximation. Indeed in a non-equilibrium situation, the off-diagonal density matrix elements will not vanish and neglecting them might lead to artefacts [75, 16].
By inserting the decomposition (70) into the master equation (54), we obtain an equation of motion for the matrix elements . We evaluate the trace over the lead states and compute the matrix element . Thereby we neglect the two-particle terms which are of the structure . Formally, these terms drop out in the limit of strong Coulomb repulsion because they are accompanied by a rapidly oscillating phase factor . Then the -integration results in a factor which vanishes in the limit of large . Since the total Hamiltonian (1) is diagonal in the spin index , we find that the density matrix elements are spin-independent as well so that after a transient stage
and . Moreover, the stationary density operator (70) obeys the time periodicity of the driving field [16] and, thus, can be decomposed into the Fourier series
and accordingly.
After some algebra, we arrive at a set of coupled equations of motion for which in Fourier representation read |
a13c0fd9685c8ab4 | Title: Use basic examples to calibrate exponents
Prerequisites: Undergraduate analysis and combinatorics.
Example 1. (Elementary identities) There is a familiar identity for the sum of the first n squares:
\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = ??? (1)
But imagine that one has forgotten exactly what the RHS of (1) was supposed to be… one remembers that it was some polynomial in n, but can’t remember what the degree or coefficients of the polynomial were. Now one can of course try to rederive the identity, but there are faster (albeit looser) ways to reconstruct the right-hand side. Firstly, we can look at the asymptotic test case n \to \infty. On the LHS, we are summing n terms of size at most n^2, so the LHS is at most n^3; thus, if we believe the RHS to be a polynomial in n, it should be at most cubic in n. We can do a bit better by approximating the sum in the LHS by the integral \int_0^n x^2\ dx = n^3/3, which strongly suggests that the cubic term on the RHS should be n^3/3. So now we have
\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = \frac{1}{3} n^3 + a n^2 + b n + c
for some coefficients a,b,c that we still have to work out.
We can plug in some other basic cases. A simple one is n=0. The LHS is now zero, and so the constant coefficient c on the RHS should also vanish. A slightly less obvious base case is n=-1. The LHS is still zero (note that the LHS for n-1 should be the LHS for n, minus n^2), and so the RHS still vanishes here; thus by the factor theorem, the RHS should have both n and n+1 as factors. We are now looking at
\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = n(n+1) ( \frac{1}{3} n + d )
for some unspecified constant d. But now we just need to try one more test case, e.g. n=1, and we learn that d = 1/6, thus recovering the correct formula
\displaystyle 1^2 + 2^2 + 3^2 + \ldots + n^2 = \frac{n(n+1) (2n+1)}{6}. (1′)
Once one has the formula (1′) in hand, of course, it is not difficult to verify by a textbook use of mathematical induction that the formula is in fact valid. (Alternatively, one can prove a more abstract theorem that the sum of the first n k^{th} powers is necessarily a polynomial in n for any given k, at which point the above analysis actually becomes a rigorous derivation of (1′).)
Note that the optimal strategy here is to start with the most basic test cases (n \to \infty, n = 0, n = -1) first before moving on to less trivial cases. If instead one used, e.g. n=1, n=2, n=3, n=4 as the test cases, one would eventually have obtained the right answer, but it would have been more work.
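The test-case strategy is easy to mechanise. The following sketch (my own Python/SymPy illustration; it is not part of the original text) recovers (1′) by imposing four basic test cases on an unknown cubic:

```python
import sympy as sp

n = sp.symbols('n')
a3, a2, a1, a0 = sp.symbols('a3 a2 a1 a0')
poly = a3 * n**3 + a2 * n**2 + a1 * n + a0

def S(m):                       # the LHS of (1), computed directly
    return sum(k * k for k in range(1, m + 1))

# Four test cases pin down the four unknown coefficients.
eqs = [sp.Eq(poly.subs(n, m), S(m)) for m in (0, 1, 2, 3)]
coeffs = sp.solve(eqs, [a3, a2, a1, a0])
print(sp.factor(poly.subs(coeffs)))   # -> n*(n + 1)*(2*n + 1)/6
```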
Exercise 1. (Partial fractions) If w_1,\ldots,w_k are distinct complex numbers, and P(z) is a polynomial of degree less than k, establish the existence of a partial fraction decomposition
\displaystyle \frac{P(z)}{(z-w_1) \ldots (z-w_k)} = \frac{c_1}{z-w_1} + \ldots + \frac{c_k}{z-w_k},
(Hint: use the remainder theorem and induction) and use the test cases z \to w_j for j=1,\ldots,k to compute the coefficients c_1,\ldots,c_k. Use this to deduce the Lagrange interpolation formula. \diamond
Example 2. (Counting cycles in a graph) Suppose one has a graph G on N vertices with an edge density of \delta (thus, the number of edges is \delta \binom{N}{2}, or roughly \delta N^2 up to constants). There is a standard Cauchy-Schwarz argument that gives a lower bound on the number of four-cycles C_4 (i.e. a circuit of four vertices connected by four edges) present in G, as a function of \delta and N. It only takes a few minutes to reconstruct this argument to obtain the precise bound, but suppose one was in a hurry and wanted to guess the bound rapidly. Given the “polynomial” nature of the Cauchy-Schwarz inequality, the bound is likely to be some polynomial combination of \delta and N, such as \delta^p N^q (omitting constants and lower order terms). But what should p and q be?
Well, one can test things with some basic examples. A really trivial example is the empty graph (where \delta = 0), but this is too trivial to tell us anything much (other than that p should probably be positive). At the other extreme, consider the complete graph on N vertices, where \delta = 1; this renders p irrelevant, but still makes q non-trivial (and thus, hopefully, computable). In the complete graph, every set of four points yields a four-cycle C_4, so the number of four-cycles here should be about N^4 (give or take some constant factors, such as 4! – remember that we are in a hurry here, and are ignoring these sorts of constant factors). This tells us that q should be at most 4, and if we expect the Cauchy-Schwarz bound to be saturated for the complete graph (which is a good bet – arguments based on the Cauchy-Schwarz inequality tend to work well in very “uniformly distributed” situations) – then we would expect q to be exactly 4.
To calibrate p, we need to test with graphs of density \delta less than 1. Given the previous intuition that Cauchy-Schwarz arguments work well in uniformly distributed situations, we would want to use a test graph of density \delta that is more or less uniformly distributed. A good example of such a graph is a random graph G on N vertices, in which each edge has an independent probability of \delta of lying in G. By the law of large numbers, we expect the edge density of such a random graph to be close to \delta on the average. On the other hand, each one of the roughly N^4 four-cycles C_4 connecting the N vertices has a probability about \delta^4 of lying in the graph, since the C_4 has four edges, each with an independent probability of \delta of lying in the edge. The events that each of the four-cycles lies in the graph G aren’t completely independent of each other, but they are still close enough to being so that one can guess using the law of large numbers that the total number of 4-cycles should be about \delta^4 N^4 on the average (up to constants). [Actually, linearity of expectation will give us this claim even without any independence whatsoever.] So this leads one to predict p=4, thus the number of 4-cycles in any graph on N vertices of density \delta should be \geq c \delta^4 N^4 for some absolute constant c>0, and this is indeed the case (provided that one also counts degenerate cycles, in which some vertices are repeated).
If one is nervous about using the random graph as the test graph, one could try a graph at the other end of the spectrum – e.g. the complete graph on about \sqrt{\delta} N vertices, which also has edge density about \delta. Here one quickly calculates that the number of 4-cycles is about \delta^2 N^4, which is a larger quantity than in the random case (and this fits with the intuition that this graph is packing the same number of edges into a tighter space, and should thus increase the number of cycles). So the random graph is still the best candidate for a near-extremiser for this bound. (Actually, if the number of 4-cycles is close to the Cauchy-Schwarz lower bound, then the graph becomes pseudorandom, which roughly speaking means any randomly selected small subgraph of that graph is indistinguishable from a random graph.)
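As a quick empirical sanity check of the \delta^4 N^4 prediction (my own illustration with arbitrarily chosen N and \delta, not part of the original argument), one can count closed 4-walks in a sampled random graph; as noted above, these include the degenerate cycles:

```python
import numpy as np

# Erdos-Renyi random graph with edge density delta; trace(A^4) counts closed walks of
# length 4, i.e. 4-cycles including the degenerate ones.
N, delta = 400, 0.3
rng = np.random.default_rng(1)
upper = np.triu(rng.random((N, N)) < delta, k=1)
A = (upper | upper.T).astype(float)          # symmetric adjacency matrix, no self-loops

closed_4_walks = np.trace(np.linalg.matrix_power(A, 4))
print(closed_4_walks, delta**4 * N**4)        # same order of magnitude
```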
One should caution that sometimes the random object is not the extremiser, and so does not always calibrate an estimate correctly. For instance, consider Szemerédi’s theorem, that asserts that given any 0 < \delta < 1 and k > 1, that any subset of \{1,\ldots,N\} of density \delta should contain at least one arithmetic progression of length k, if N is large enough. One can then ask what is the minimum number of k-term arithmetic progressions such a set would contain. Using the random subset of \{1,\ldots,N\} of density \delta as a test case, we would guess that there should be about \delta^k N^2 (up to constants depending on k). However, it turns out that the number of progressions can be significantly less than this (basically thanks to the old counterexample of Behrend): given any constant C, one can get significantly fewer than \delta^C N^2 k-term progressions. But, thanks to an averaging argument of Varnavides, it is known that there are at least c(k,\delta) N^2 k-term progressions (for N large enough), where c(k,\delta) > 0 is a positive quantity. (Determining the exact order of magnitude of c(k,\delta) is still an important open problem in this subject.) So one can at least calibrate the correct dependence on N, even if the dependence on \delta is still unknown.
Example 3. (Sobolev embedding) Given a reasonable function f: {\Bbb R}^n \to {\Bbb R} (e.g. a Schwartz class function will do), the Sobolev embedding theorem gives estimates such as
\displaystyle \| f \|_{L^q({\Bbb R}^n)} \leq C_{n,p,q} \|\nabla f\|_{L^p({\Bbb R}^n)} (2)
for various exponents p, q. Suppose one has forgotten the exact relationship between p, q, and n and wants to quickly reconstruct it, without rederiving the proof of the theorem or looking it up. One could use dimensional analysis to work out the relationship (and we will come to that shortly), but an equivalent way to achieve the same result is to test the inequality (2) against a suitably basic example, preferably one that one expects to saturate (2).
To come as close to saturating (2) as possible, one wants to keep the gradient of f small, while making f large; among other things, this suggests that unnecessary oscillations in f should be kept to a minimum. A natural candidate for an extremiser, then, would be a rescaled bump function f(x) = A\phi(x/L), where \phi \in C^\infty_0({\Bbb R}^n) is some fixed bump function, A > 0 is an amplitude parameter, and L > 0 is a length-scale parameter, thus f is a rescaled bump function of bounded amplitude O(A) that is supported on a ball of radius O(L) centred at the origin. [As the estimate (2) is linear, the amplitude A turns out to ultimately be irrelevant here, but the amplitude plays a more crucial role in nonlinear estimates; for instance, it explains why nonlinear estimates typically have the same number of appearances of a given unknown function f in each term. Also, it is sometimes convenient to carefully choose the amplitude in order to attain a convenient normalisation, e.g. to set one of the norms in (2) equal to 1.]
The ball that f is supported on has volume about O(L^n) (allowing implied constants to depend on n), and so the L^q norm of f should be about O(A L^{n/q}) (allowing implied constants to depend on q as well). As for the gradient of f, since f oscillates by O(A) over a length scale of O(L), one expects \nabla f to have size about O(A/L) on this ball (remember, derivatives measure “rise over run“!), and so the L^p norm of \nabla f should be about O( \frac{A}{L} L^{n/p} ). Inserting this numerology into (2), and equating powers of L (note A cancels itself into irrelevance, and could in any case be set to equal 1), we are led to the relation
\displaystyle \frac{n}{p} - 1 = \frac{n}{q} (3)
which is indeed one of the necessary conditions for (2). (The other necessary conditions are that p and q lie strictly between 1 and infinity, but these require a more complicated test example to establish.)
One can efficiently perform the above argument using the language of dimensional analysis. Giving f the units of amplitude A, and giving space the units of length L, we see that the n-dimensional integral \int_{{\Bbb R}^n}\ dx has units of L^n, and thus L^p({\Bbb R}^n) norms have units of L^{n/p}. Meanwhile, from the rise-over-run interpretation of the derivative, \nabla f has units of A/L, thus the LHS and RHS of (2) have units of A L^{n/q} and \frac{A}{L} L^{n/p} respectively. Equating these dimensions gives (3). Observe how this argument is basically a shorthand form of the argument based on using the rescaled bump function as a test case; with enough practice one can use this shorthand to calibrate exponents rapidly for a wide variety of estimates.
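The scaling argument can also be checked numerically. The sketch below (my own illustration, using a Gaussian in place of a compactly supported bump, and the particular choice n = 2, p = 3/2, q = 6 satisfying (3)) verifies that the ratio \| f\|_{L^q} / \|\nabla f\|_{L^p} is essentially independent of the dilation parameter L:

```python
import numpy as np

# n = 2, p = 3/2, q = 6 satisfy n/p - 1 = n/q, so the ratio below should not depend on L.
grid = np.linspace(-6.0, 6.0, 1024)
dx = grid[1] - grid[0]
X, Y = np.meshgrid(grid, grid, indexing="ij")
p, q = 1.5, 6.0

def ratio(L):
    f = np.exp(-(X**2 + Y**2) / (2.0 * L**2))        # smooth bump of width ~ L
    fx, fy = np.gradient(f, dx)
    grad = np.sqrt(fx**2 + fy**2)
    norm_q = (np.sum(np.abs(f) ** q) * dx**2) ** (1.0 / q)
    norm_p = (np.sum(grad ** p) * dx**2) ** (1.0 / p)
    return norm_q / norm_p

print([round(ratio(L), 4) for L in (0.5, 1.0, 2.0)])  # approximately equal values
```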
Exercise 2. Convert the above discussion into a rigorous proof that (3) is a necessary condition for (2). (Hint: exploit the freedom to send L to zero or to infinity.) What happens to the necessary conditions if {\Bbb R}^n is replaced with a bounded domain (such as the unit cube {}[0,1]^n, assuming Dirichlet boundary conditions) or a discrete domain (such as the lattice {\Bbb Z}^n, replacing the gradient with a discrete gradient of course)? \diamond
Exercise 3. If one replaces (2) by the variant estimate
\displaystyle \| f \|_{L^q({\Bbb R}^n)} \leq C_{n,p,q} (\|f\|_{L^p({\Bbb R}^n)} + \|\nabla f\|_{L^p({\Bbb R}^n)}) (2′)
establish the necessary condition
\displaystyle \frac{n}{p} - 1 \leq \frac{n}{q} \leq \frac{n}{p}. (3′)
What happens to the dimensional analysis argument in this case? \diamond
Remark 1. There are many other estimates in harmonic analysis which are saturated by some modification of a bump function; in addition to the isotropically rescaled bump functions used above, one could also rescale bump functions by some non-isotropic linear transformation (thus creating various “squashed” or “stretched” bumps adapted to disks, tubes, rectangles, or other sets), or modulate bumps by various frequencies, or translate them around in space. One can also try to superimpose several such transformed bump functions together to amplify the counterexample. The art of selecting good counterexamples can be somewhat difficult, although with enough trial and error one gets a sense of what kind of arrangement of bump functions are needed to make the right-hand side small and the left-hand side large in the estimate under study. \diamond
Example 4. (Scale-invariance in nonlinear PDE) The model equations and systems studied in nonlinear PDE often enjoy various symmetries, notably scale-invariance symmetry, that can then be used to calibrate various identities and estimates regarding solutions to those equations. For sake of discussion, let us work with the nonlinear Schrödinger equation (NLS)
\displaystyle i u_t + \Delta u = |u|^{p-1} u (4)
where u: {\Bbb R} \times {\Bbb R}^n \to {\Bbb C} is the unknown field, \Delta is the spatial Laplacian, and p > 1 is a fixed exponent. (One can also place some other constants in (4), such as Planck’s constant \hbar, but we have normalised this constant to equal 1 here, although it is sometimes useful to reinstate this constant for calibration purposes.) If u is one solution to (4), then we can form a rescaled family u^{(\lambda)} of such solutions by the formula
\displaystyle u^{(\lambda)}(t,x) := \frac{1}{\lambda^a} u( \frac{t}{\lambda^b}, \frac{x}{\lambda} ) (5)
for some specific exponents a, b; these play the role of the rescaled bump functions in Example 3. The exponents a,b can be worked out by testing (4) using (5), and we leave this as an exercise to the reader, but let us instead use the shorthand of dimensional analysis to work these exponents out. Let’s give u the units of amplitude A, space the units of length L, and time the units of duration T. Then the three terms in (4) have units A/T, A/L^2, and A^p respectively; equating these dimensions gives T=L^2 and A=L^{-2/(p-1)}. (In particular, time has “twice the dimension” of space; this is a feature of many non-relativistic equations such as Schrödinger, heat, or viscosity equations. For relativistic equations, of course, time and space have the same dimension with respect to scaling.) On the other hand, the scaling (5) multiplies A, T, and L by \lambda^{-a}, \lambda^b, and \lambda respectively; to maintain consistency with the relations T=L^2 and A=L^{-2/(p-1)} we must thus have a=2/(p-1) and b=2.
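Equivalently, one can let a computer algebra system do the bookkeeping; the sketch below (my own illustration) solves for a and b by demanding that the three terms of (4) pick up the same power of \lambda under (5):

```python
import sympy as sp

# Under u^(lambda)(t,x) = lam**(-a) * u(t/lam**b, x/lam), the three terms of NLS scale as:
a, b, p = sp.symbols('a b p', positive=True)
scale_ut   = -a - b      # i u_t        ~ lam**(-a - b)
scale_lap  = -a - 2      # Delta u      ~ lam**(-a - 2)
scale_nonl = -a * p      # |u|**(p-1) u ~ lam**(-a p)

sol = sp.solve([sp.Eq(scale_ut, scale_lap), sp.Eq(scale_lap, scale_nonl)], [a, b])
print(sol)   # {a: 2/(p - 1), b: 2}
```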
Exercise 4. Solutions to (4) (with suitable smoothness and decay properties) enjoy a conserved Hamiltonian H(u), of the form
\displaystyle H(u) = \int_{{\Bbb R}^n} \frac{1}{2} |\nabla u|^2 + \alpha |u|^q\ dx
for some constants \alpha, q. Use dimensional analysis (or the rescaled solutions (5) as test cases) to compute q. (The constant \alpha, unfortunately, cannot be recovered from dimensional analysis, and other model test cases, such as solitons or other solutions obtained via separation of variables, also turn out unfortunately to not be sensitive enough to \alpha to calibrate this parameter.) \diamond
Remark 2. The scaling symmetry (5) is not the only symmetry that can be deployed to calibrate identities and estimates for solutions to NLS. For instance, we have a simple phase rotation symmetry u \mapsto e^{i\theta} u for such solutions, where \theta \in {\Bbb R} is an arbitrary phase. This symmetry suggests that in any identity involving u and its complex conjugate \bar{u}, the net number of factors of u, minus the factors of \bar{u}, in each term of the identity should be the same. (Factors without phase, such as |u|, should be ignored for this analysis.) Other important symmetries of NLS, which can also be used for calibration, include space translation symmetry, time translation symmetry, and Galilean invariance. (While these symmetries can of course be joined together, to create a large-dimensional family of transformed solutions arising from a single base solution u, for the purposes of calibration it is usually best to just use each of the generating symmetries separately.) For gauge field equations, gauge invariance is of course a crucial symmetry, though one can make the calibration procedure with respect to this symmetry automatic by working exclusively with gauge-invariant notation (see also my earlier post on gauge theory). Another important test case for Schrödinger equations is the high-frequency limit |\xi| \to \infty, closely related to the semi-classical limit \hbar \to 0, that allows one to use classical mechanics to calibrate various identities and estimates in quantum mechanics. \diamond
Exercise 5. Solutions to (4) (again assuming suitable smoothness and decay) also enjoy a virial identity of the form
\displaystyle \partial_{tt} \int_{{\Bbb R}^n} x^2 |u(t,x)|^2\ dx = \int_{{\Bbb R}^n} ???\ dx
where the right-hand side only involves u and its spatial derivatives \nabla u, and does not explicitly involve the spatial variable x. Using the various symmetries, predict the type of terms that should go on the right-hand side. (Again, the coefficients of these terms are unable to be calibrated using these methods, but the exponents should be accessible.) \diamond
Remark 3. Einstein used this sort of calibration technique (using the symmetry of spacetime diffeomorphisms, better known as the general principle of relativity, as well as the non-relativistic limit of Newtonian gravity as another test case) to derive the Einstein equations of gravity, although the one constant that he was unable to calibrate in this fashion was the cosmological constant. \diamond
Example 5 (Fourier-analytic identities in additive combinatorics). Fourier analysis is a useful tool in additive combinatorics for counting various configurations in sets, such as arithmetic progressions n, n+r, n+2r of length three. (It turns out that classical Fourier analysis is not able to handle progressions of any longer length, but that is a story for another time – see e.g. this paper of Gowers for some discussion.) A typical situation arises when working in a finite group such as {\Bbb Z}/N{\Bbb Z}, and one has to compute an expression such as
\displaystyle \sum_{n, r \in {\Bbb Z}/N{\Bbb Z}} f(n) g(n+r) h(n+2r) (6)
for some functions f,g,h: {\Bbb Z}/N{\Bbb Z} \to {\Bbb C} (for instance, these functions could all be the indicator function of a single set A \subset {\Bbb Z}/N{\Bbb Z}). The quantity (6) can be expressed neatly in terms of the Fourier transforms \hat f, \hat g, \hat h: {\Bbb Z}/N{\Bbb Z} \to {\Bbb C}, which we normalise as \hat f(\xi) := \frac{1}{N} \sum_{x \in {\Bbb Z}/N{\Bbb Z}} f(x) e^{-2\pi i x \xi/N}. It is not too difficult to compute this expression by means of the Fourier inversion formula and some routine calculation, but suppose one was in a hurry and only had a vague recollection of what the Fourier-analytic expression of (6) was – something like
\displaystyle N^p \sum_{\xi \in {\Bbb Z}/N{\Bbb Z}}\hat f( a \xi ) \hat g( b \xi ) \hat h( c \xi ) (7)
for some coefficients p, a, b, c, the precise values of which have been forgotten. (In view of some other Fourier-analytic formulae, one might think that some of the Fourier transforms \hat f, \hat g, \hat h might need to be complex conjugated for (7), but this should not happen here, because (6) is linear in f,g,h rather than anti-linear; cf. the discussion in Example 4 about factors of u and \bar{u}.) How can one quickly calibrate the values of p,a,b,c without doing the full calculation?
To isolate the exponent p, we can consider the basic case f \equiv g \equiv h \equiv 1, in which case the Fourier transforms are just the Kronecker delta function (e.g. \hat f(\xi) equals 1 for \xi=0 and vanishes otherwise). The expression (6) is just N^2, while the expression (7) is N^p (because only one of the summands is non-trivial); thus p must equal 2. (Exercise: reinterpret the above analysis as a dimensional analysis.)
Next, to calibrate a,b,c, we modify the above basic test case slightly, modulating the f,g,h so that a different element of the sum in (7) is non-zero. Let us take f(x) := e^{2\pi i a x \xi/N}, g(x) := e^{2\pi i b x \xi/N}, h(x) := e^{2\pi i c x \xi/N} for some fixed frequency \xi; then (7) is again equal to N^p=N^2, while (6) is equal to
\displaystyle \sum_{n,r \in {\Bbb Z}/N{\Bbb Z}} e^{2\pi i [ a n + b (n+r) + c(n+2r)] \xi / N}.
In order for this to equal N^2 for any \xi, we need the linear form an+b(n+r)+c(n+2r) to vanish identically, which forces a=c and b=-2a. We can normalise a=1 (by using the change of variables \xi \mapsto a \xi), thus leading us to the correct expression for (7), namely
\displaystyle N^2 \sum_{\xi \in {\Bbb Z}/N{\Bbb Z}}\hat f( \xi ) \hat g( -2 \xi ) \hat h( \xi ).
Once one actually has this formula, of course, it is a routine matter to check that it actually is the right answer.
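For instance, here is one such routine check (a numerical illustration of my own; the DFT normalisation matches the convention \hat f(\xi) = \frac{1}{N}\sum_x f(x) e^{-2\pi i x \xi/N} used above):

```python
import numpy as np

N = 97
rng = np.random.default_rng(0)
f, g, h = (rng.standard_normal(N) for _ in range(3))
n = np.arange(N)

# Left-hand side (6): direct double sum over n and r.
lhs = sum(np.sum(f * g[(n + r) % N] * h[(n + 2 * r) % N]) for r in range(N))

# Right-hand side: N^2 * sum_xi fhat(xi) * ghat(-2 xi) * hhat(xi).
fhat, ghat, hhat = (np.fft.fft(v) / N for v in (f, g, h))
xi = np.arange(N)
rhs = N**2 * np.sum(fhat[xi] * ghat[(-2 * xi) % N] * hhat[xi])

print(lhs, rhs.real)   # agree up to floating-point error
```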
Remark 4. One can also calibrate a,b,c in (7) by observing the identity n - 2(n+r) + (n+2r)=0 (which reflects the fact that the second derivative of a linear function is necessarily zero), which gives a modulation symmetry f(x) \mapsto f(x) e^{2\pi i \alpha x}, g(x) \mapsto g(x) e^{-4\pi i \alpha x}, h(x) \mapsto h(x) e^{2\pi i \alpha x} to (6). Inserting this symmetry into (7) reveals that a=c and b=-2a as before. \diamond
Remark 5. By choosing appropriately normalised conventions, one can avoid some calibration duties altogether. For instance, when using Fourier analysis on a finite group such as {\Bbb Z}/N{\Bbb Z}, if one expects to be analysing functions that are close to constant (or subsets of the group of positive density), then it is natural to endow physical space with normalised counting measure (and thus, by Pontryagin duality, frequency space should be given non-normalised counting measure). [Conversely, if one is analysing functions concentrated on only a bounded number of points, then it may be more convenient to give physical space counting measure and frequency space normalised counting measure.] In practical terms, this means that any physical space sum, such as \sum_{x \in {\Bbb Z}/N{\Bbb Z}} f(x), should instead be replaced with a physical space average {\Bbb E}_{x \in {\Bbb Z}/N{\Bbb Z}} f(x) = \frac{1}{N} \sum_{x \in {\Bbb Z}/N{\Bbb Z}} f(x), while keeping sums over frequency space variables unchanged; when one does so, all powers of N “miraculously” disappear, and there is no longer any need to calibrate using the constant function 1 as was done above. Of course, this does not eliminate the need to perform other calibrations, such as that of the coefficients a,b,c above. \diamond |
4787c1f1dbd50c64 | crestless wave
A wave is a disturbance that propagates through space and time, usually with transference of energy. While a mechanical wave exists in a medium (which on deformation is capable of producing elastic restoring forces), waves of electromagnetic radiation (and probably gravitational radiation) can travel through vacuum, that is, without a medium. Waves travel and transfer energy from one point to another, often with little or no permanent displacement of the particles of the medium (that is, with little or no associated mass transport); instead there are oscillations around almost fixed locations.
Agreeing on a single, all-encompassing definition for the term wave is non-trivial. A vibration can be defined as a back-and-forth motion around a reference value. However, defining the necessary and sufficient characteristics that qualify a phenomenon to be called a wave is, at least, flexible. The term is often understood intuitively as the transport of disturbances in space, not associated with motion of the medium occupying this space as a whole. In a wave, the energy of a vibration is moving away from the source in the form of a disturbance within the surrounding medium (Hall, 1980: 8). However, this notion is problematic for a standing wave (for example, a wave on a string), where energy is moving in both directions equally, or for electromagnetic / light waves in a vacuum, where the concept of medium does not apply.
For such reasons, wave theory represents a peculiar branch of physics that is concerned with the properties of wave processes independently from their physical origin (Ostrovsky and Potapov, 1999). The peculiarity lies in the fact that this independence from physical origin is accompanied by a heavy reliance on origin when describing any specific instance of a wave process. For example, acoustics is distinguished from optics in that sound waves are related to a mechanical rather than an electromagnetic wave-like transfer / transformation of vibratory energy. Concepts such as mass, momentum, inertia, or elasticity, become therefore crucial in describing acoustic (as opposed to optic) wave processes. This difference in origin introduces certain wave characteristics particular to the properties of the medium involved (for example, in the case of air: vortices, radiation pressure, shock waves, etc., in the case of solids: Rayleigh waves, dispersion, etc., and so on).
Other properties, however, although they are usually described in an origin-specific manner, may be generalized to all waves. For example, based on the mechanical origin of acoustic waves there can be a moving disturbance in space-time if and only if the medium involved is neither infinitely stiff nor infinitely pliable. If all the parts making up a medium were rigidly bound, then they would all vibrate as one, with no delay in the transmission of the vibration and therefore no wave motion (or rather infinitely fast wave motion). On the other hand, if all the parts were independent, then there would not be any transmission of the vibration and again, no wave motion (or rather infinitely slow wave motion). Although the above statements are meaningless in the case of waves that do not require a medium, they reveal a characteristic that is relevant to all waves regardless of origin: within a wave, the phase of a vibration (that is, its position within the vibration cycle) is different for adjacent points in space because the vibration reaches these points at different times.
Similarly, wave processes revealed from the study of wave phenomena with origins different from that of sound waves can be equally significant to the understanding of sound phenomena. A relevant example is Young's principle of interference (Young, 1802, in Hunt, 1978: 132). This principle was first introduced in Young's study of light and, within some specific contexts (for example, scattering of sound by sound), is still a researched area in the study of sound.
Periodic waves are characterized by crests (highs) and troughs (lows), and may usually be categorized as either longitudinal or transverse. Transverse waves are those with vibrations perpendicular to the direction of the propagation of the wave; examples include waves on a string and electromagnetic waves. Longitudinal waves are those with vibrations parallel to the direction of the propagation of the wave; examples include most sound waves.
When an object bobs up and down on a ripple in a pond, it experiences an orbital trajectory because ripples are not simple transverse sinusoidal waves.
All waves have common behavior under a number of standard situations. All waves can experience the following:
A wave is polarized if it can only oscillate in one direction. The polarization of a transverse wave describes the direction of oscillation, in the plane perpendicular to the direction of travel. Longitudinal waves such as sound waves do not exhibit polarization, because for these waves the direction of oscillation is along the direction of travel. A wave can be polarized by using a polarizing filter.
Examples of waves include:
Mathematical description
From a mathematical point of view, the most primitive or fundamental wave is the harmonic (sinusoidal) wave, which is described by the equation \(f(x,t) = A \sin(\omega t - kx)\), where A is the amplitude of the wave - a measure of the maximum disturbance in the medium during one wave cycle (the maximum distance from the highest point of the crest to the equilibrium). In the illustration to the right, this is the maximum vertical distance between the baseline and the wave. The units of the amplitude depend on the type of wave — waves on a string have an amplitude expressed as a distance (meters), sound waves as pressure (pascals) and electromagnetic waves as the amplitude of the electric field (volts/meter). The amplitude may be constant (in which case the wave is a c.w. or continuous wave), or may vary with time and/or position. The form of the variation of amplitude is called the envelope of the wave.
The wavelength (denoted as \(\lambda\)) is the distance between two sequential crests (or troughs). This generally is measured in meters; it is also commonly measured in nanometers for the optical part of the electromagnetic spectrum.
A wavenumber k can be associated with the wavelength by the relation
\(k = \frac{2\pi}{\lambda}.\)
The period T is the time for one complete cycle of an oscillation of a wave. The frequency f (also frequently denoted as \(\nu\)) is how many periods per unit time (for example one second) and is measured in hertz. These are related by:
\(f = \frac{1}{T}.\)
In other words, the frequency and period of a wave are reciprocals of each other.
The angular frequency \(\omega\) represents the frequency in terms of radians per second. It is related to the frequency by
\(\omega = 2\pi f = \frac{2\pi}{T}.\)
There are two velocities that are associated with waves. The first is the phase velocity, which gives the rate at which the wave propagates; it is given by
\(v_p = \frac{\omega}{k} = \lambda f.\)
The second is the group velocity, which gives the velocity at which variations in the shape of the wave's amplitude propagate through space. This is the rate at which information can be transmitted by the wave. It is given by
\(v_g = \frac{\partial \omega}{\partial k}.\)
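As a concrete illustration (not part of the original article), take the deep-water gravity-wave dispersion relation \(\omega = \sqrt{gk}\); the group velocity then comes out to half the phase velocity:

```python
import sympy as sp

g, k = sp.symbols('g k', positive=True)
omega = sp.sqrt(g * k)          # deep-water gravity-wave dispersion relation (illustrative)
v_p = omega / k                 # phase velocity
v_g = sp.diff(omega, k)         # group velocity
print(sp.simplify(v_g / v_p))   # -> 1/2: the wave group travels at half the crest speed
```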
The wave equation
The wave equation is a differential equation that describes the evolution of a harmonic wave over time. The equation has slightly different forms depending on how the wave is transmitted, and the medium it is traveling through. Considering a one-dimensional wave that is traveling down a rope along the x-axis with velocity v and amplitude u (which generally depends on both x and t), the wave equation is
\(\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}.\)
In three dimensions, this becomes
\(\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \nabla^2 u,\)
where \(\nabla^2\) is the Laplacian.
The velocity v will depend on both the type of wave and the medium through which it is being transmitted.
A general solution for the wave equation in one dimension was given by d'Alembert; it is
\(u(x,t) = F(x - vt) + G(x + vt).\)
This can be viewed as two pulses traveling down the rope in opposite directions; F in the +x direction, and G in the −x direction. If we substitute for x above, replacing it with directions x, y, z, we then can describe a wave propagating in three dimensions.
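A quick numerical check of this statement (an illustration with two arbitrary Gaussian pulses, not part of the original entry) is to plug u(x,t) = F(x − vt) + G(x + vt) into the one-dimensional wave equation via finite differences and confirm that the residual is negligible:

```python
import numpy as np

# Two counter-propagating Gaussian pulses, u = F(x - v t) + G(x + v t).
v = 2.0
F = lambda s: np.exp(-s**2)
G = lambda s: 0.5 * np.exp(-(s - 1.0)**2)
u = lambda xx, tt: F(xx - v * tt) + G(xx + v * tt)

x = np.linspace(-10, 10, 2001)
t = 0.7
dx = x[1] - x[0]
dt = 1e-3

# Finite-difference residual of (1/v^2) u_tt - u_xx; it should be ~0 everywhere.
u_tt = (u(x, t + dt) - 2 * u(x, t) + u(x, t - dt)) / dt**2
u_xx = (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2
residual = u_tt / v**2 - u_xx
print(np.max(np.abs(residual)))   # small, limited only by the finite-difference error
```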
The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle. Quantum mechanics also describes particle properties that other waves, such as light and sound, have on the atomic scale and below.
Traveling waves
Simple wave or a traveling wave, also sometimes called a progressive wave is a disturbance that varies both with time t and distance z in the following way:
\(y(z,t) = A(z,t) \sin(kz - \omega t + \phi),\)
where A(z,t) is the amplitude envelope of the wave, k is the wave number and \(\phi\) is the phase. The phase velocity \(v_p\) of this wave is given by
\(v_p = \frac{\omega}{k} = \lambda f,\)
where \(\lambda\) is the wavelength of the wave.
Standing wave
The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the "nut", whereupon the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is on average no net propagation of energy.
Also see: Acoustic resonance, Helmholtz resonator, and organ pipe
Propagation through strings
The speed v of a wave traveling along a vibrating string is set by the string's tension T and linear mass density \(\mu\):
\(v = \sqrt{\frac{T}{\mu}}.\)
Transmission medium
The medium that carries a wave is called a transmission medium. It can be classified into one or more of the following categories:
• A bounded medium if it is finite in extent, otherwise an unbounded medium.
• A linear medium if the amplitudes of different waves at any particular point in the medium can be added.
• A uniform medium if its physical properties are unchanged at different locations in space.
• An isotropic medium if its physical properties are the same in different directions.
References
• Campbell, M. and Greated, C. (1987). The Musician’s Guide to Acoustics. New York: Schirmer Books.
• Hunt, F. V. (1978). Origins in Acoustics. New York: Acoustical Society of America Press, (1992).
• Ostrovsky, L. A. and Potapov, A. S. (1999). Modulated Waves, Theory and Applications. Baltimore: The Johns Hopkins University Press.
• Vassilakis, P.N. (2001) Perceptual and Physical Properties of Amplitude Fluctuation and their Musical Significance. Doctoral Dissertation. University of California, Los Angeles.
|
cd8c1bf1312ec4f2 | Chameleon-Like Behavior of Neutrino Confirmed
Anonymous Apcoheur writes "Scientists from CERN and INFN of the OPERA Collaboration have announced the first direct observation of a muon neutrino turning into a tau neutrino. 'The OPERA result follows seven years of preparation and over three years of beam provided by CERN. During that time, billions of billions of muon-neutrinos have been sent from CERN to Gran Sasso, taking just 2.4 milliseconds to make the trip. The rarity of neutrino oscillation, coupled with the fact that neutrinos interact very weakly with matter, makes this kind of experiment extremely subtle to conduct. ... While closing a chapter on understanding the nature of neutrinos, the observation of neutrino oscillations is strong evidence for new physics. The Standard Model of fundamental particles posits no mass for the neutrino. For them to be able to oscillate, however, they must have mass.'"
• Re:What if... (Score:5, Informative)
by Steve Max (1235710) on Monday May 31, 2010 @05:03PM (#32411522) Journal
You'd need a pretty complex theory to get non-mass oscillations to match all the data we got over the past 12 years, which is very compatible with a three-state, mass-driven oscillation scenario. Besides, you'd have to explain more than what the current "new standard model" (the SM with added neutrino masses) does if you want your theory to be accepted. If two theories explain the same data equally well, the simplest is more likely.
• by dumuzi (1497471) on Monday May 31, 2010 @05:17PM (#32411680) Journal
I agree. In QCD quarks and gluons can undergo colour changes; this would be "chameleon-like behavior". Neutrinos on the other hand change flavour; this would be "Willy Wonka like behavior".
• Re: What if... (Score:3, Informative)
by Black Parrot (19622) on Monday May 31, 2010 @05:22PM (#32411716)
"If two theories explain the same data equally well, the simplest is more likely."
Make that "more preferred". In general we don't know anything about likelihood.
The thing about Occam's Razor is that it filters out "special pleading" type arguments. If you want your pet in the show, you've got to provide motivation for including it.
• by pz (113803) on Monday May 31, 2010 @05:23PM (#32411738) Journal
How could something have mass and so weakly interact with normal matter?
Neutrinos are thought to have a very small mass. So exceedingly small that they barely interact with anything (they also have no charge, so they are even less likely to interact). But zero mass and really, really, really small but not zero mass, are two different things.
• by BitterOak (537666) on Monday May 31, 2010 @05:49PM (#32411982)
The fact that they barely interact with anything has nothing to do with the fact that they are nearly massless. Photons are massless and they interact with anything that carries an electric charge. Electrons are much lighter than muons, but they are just as likely to interact with something. The only force that gets weaker as the mass goes down is gravity, which is by far the weakest of the fundamental forces.
by BitterOak (537666) on Monday May 31, 2010 @07:31PM (#32412774)
That's the way I've always understood the mass/oscillation connection too. But then I thought... wait... don't photons oscillate too? They're just coherent oscillations of the EM field; oscillating back and forth between electric and transverse magnetic in free space. If there's something different about neutrino oscillation which makes it necessary for the neutrino to travel at sublight, what is it specifically?
The situation you describe with the EM field is an example of wave-particle duality. Light can behave like both a wave and a particle, but it doesn't make sense to analyze it both ways at the same time. As a wave, it does manifest itself as oscillating electric and magnetic fields and as a particle, it manifests itself as a photon, which doesn't change into a different type of particle. (There's no such thing as an "electric photon" and a "magnetic photon".)
Neutrinos, too, are described quantum mechanically by wavefunctions, and these wavefunctions have frequencies associated with them, related to the energy of the particle. But these have nothing to do with the oscillation frequencies described here, in which a neutrino of one flavor (eg. mu) can change into a different flavor (eg. tau). Quantum mechanically speaking, we say the mass eigenstates of the neutrino (states of definite mass) don't coincide with the weak eigenstates (states of definite flavor: i.e. e, mu, or tau). Without mass, there would be no distinct mass eigenstates at all, and so mixing of the weak eigenstates would not occur as the neutrino propagates through free space.
• by Steve Max (1235710) on Monday May 31, 2010 @08:15PM (#32413138) Journal
Light doesn't oscillate in this way. A photon is a photon, and remains a photon. Electric and magnetic fields oscillate, but the particle "photon" doesn't. Neutrinos start as one particle (say, as muon-neutrinos) and are detected as a completely different particle (say, as a tau-neutrino).
The explanation for that is that what we call "electron-neutrino", "muon-neutrino" and "tau-neutrino" aren't states with a definite mass; they're a mixture of three neutrino states with definite, different mass (one of those masses can be zero, but at most one). Then, from pure quantum mechanics (and nothing more esoteric than that: pure Schrödinger equation) you see that, if those three defined-mass states have slightly different mass, you will have a probability of creating an electron neutrino and detecting it as a tau neutrino, and every other combination. Those probabilities follow a simple expansion, based on only five parameters (two mass differences and three angles), and depend on the energy of the neutrino and the distance in a very specific way. We can test that dependency, and use very different experiments to measure the five parameters; and everything fits very well. Right now (specially after MINOS saw the energy dependency of the oscillation probability), nobody questions neutrino oscillations. This OPERA result only confirms what we already knew.
• Re:What if... (Score:3, Informative)
by khayman80 (824400) on Monday May 31, 2010 @08:18PM (#32413160) Homepage Journal
Thanks. I just found some equations that appear to reinforce what you said.
Since the oscillation frequency is proportional to the difference of the squared masses of the mass eigenstates, perhaps it's more accurate to say that neutrino flavor oscillation implies the existence of several mass eigenstates which aren't identical to flavor eigenstates. Since two mass eigenstates would need different eigenvalues in order to be distinguishable, this means at least one mass eigenvalue has to be nonzero. There's probably some sort of "superselection rule" which prevents particles from oscillating between massless and massive eigenstates, so both mass eigenstates have to be non-zero. Cool.
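A minimal two-flavour sketch of the resulting oscillation probability, with illustrative numbers roughly matching the CERN-to-Gran-Sasso setup (this is not the OPERA analysis, just the textbook vacuum formula):

```python
import numpy as np

# Two-flavour vacuum oscillation: P(nu_mu -> nu_tau) = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E),
# with dm2 in eV^2, L in km, E in GeV. Parameter values below are illustrative.
def p_mu_to_tau(L_km, E_GeV, dm2_eV2=2.4e-3, sin2_2theta=1.0):
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

print(p_mu_to_tau(L_km=730, E_GeV=17))   # ~0.02: percent-level tau appearance on this baseline
```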
• by Anonymous Coward on Monday May 31, 2010 @11:03PM (#32414422)
Photons are masless and chargeless, right?
• by Young Master Ploppy (729877) on Tuesday June 01, 2010 @07:30AM (#32417022) Homepage Journal
I'm not a "real" physicist - but I did study this at undergrad level, so here goes:
Heisenberg's Uncertainty Principle states that there must always be a minimum uncertainty in certain pairs of related variables - e.g. position and momentum, i.e. the more accurately you know the position of something, the less accurately you know how it's moving. Another related pair is energy and time - the more accurately you know the energy of something, the less accurately you know when the measurement was taken.
(disclaimer - this makes perfect sense when expressed mathematically; it only sounds like handwavery when you translate it into English, as words are ambiguous and mean different things to different people)
Anyway, this uncertainty means that there is a small but non-zero probability of a higher-energy event occurring in the history of a lower-energy particle (often mis-stated as "particles can borrow energy for a short time", but check the wiki page for a more accurate statement). It sounds nuts, I know, but it has many real-world implications that have no explanation in non-quantum physics. Particles can "tunnel" through barriers that they shouldn't be able to cross, for instance - this is how semi-conductors work.
By implication, there is a small probability of the neutrino acting as if it had a higher energy, and *this* is how neutrino-flipping occurs without violating conservation of energy.
• Re:What if... (Score:4, Informative)
by Steve Max (1235710) on Tuesday June 01, 2010 @09:41AM (#32418340) Journal
No. All flavour eigenstates MUST be massive: they are superpositions of the three mass eigenstates, one of which can have zero mass. Calling the three mass eigenstates n1, n2 and n3; and the three flavour eigenstates ne, nm and nt, we'd have:
So, if any of n1, n2 or n3 has a non-zero mass (and at least two of them MUST have non-zero masses, since we know two different and non-zero mass differences), all three flavour eigenstates have non-zero masses.
Also, remember that the limit for the neutrino mass is at about 1eV, while it's hard to have neutrinos travelling with energies under 10^6 eV. In other words, the gamma factor is huge, and they're always ultrarelativistic, travelling practically at "c".
Another point is that the mass differences are really, really small; of the order of 0.01 eV. This is ridiculously small; so small that the uncertainty principle makes it possible for one state to "tunnel" to the other.
I really can't go any deeper than that without resorting to quantum field theory. I can only say that standard QM is not compatible with relativity: Schrödinger's equation comes from the classical Hamiltonian, for example. To take special relativity into account, you need a different set of equations (Dirac's), which use the relativistic Hamiltonian. In this particular case, the result is the same using Dirac, Schrödinger or the full QFT, but the three-line Schrödinger solution becomes a full-page Dirac calculation, or ten pages of QFT. In this particular case, unfortunately, the best I can do is say "trust me, it works; you'll see it when you get more background".
by Steve Max (1235710) on Tuesday June 01, 2010 @09:52AM (#32418462) Journal
The time-dependent Schrödinger's equation doesn't apply for massless particles. It was never intended to. It isn't relativistic. Try to apply a simple boost and you'll see it's not Poincaré invariant. The main point is that you get the same probabilities if you use a relativistic theory, but you need A LOT of work to get there.
Oscillations work and happen in QFT, which is Poincaré-invariant and assumes special relativity. I can't find any references in a quick search, but I've done all the (quite painful) calculations a long time ago to make sure it works. It's one of those cases where the added complexity of relativistic quantum field theory doesn't change the results from a simple Schrödinger solution.
|
a44e03dd07b3fe26 | Coupled cluster
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Coupled cluster (CC) is a numerical technique used for describing many-body systems. Its most common use is as one of several post-Hartree–Fock ab initio quantum chemistry methods in the field of computational chemistry. It essentially takes the basic Hartree–Fock molecular orbital method and constructs multi-electron wavefunctions using the exponential cluster operator to account for electron correlation. Some of the most accurate calculations for small to medium-sized molecules use this method.[1][2][3]
The method was initially developed by Fritz Coester and Hermann Kümmel in the 1950s for studying nuclear physics phenomena, but became more frequently used when in 1966 Jiři Čížek (and later together with Josef Paldus) reformulated the method for electron correlation in atoms and molecules. It is now one of the most prevalent methods in quantum chemistry that includes electronic correlation. CC theory is simply the perturbative variant of the Many Electron Theory (MET) of Oktay Sinanoğlu, which is the exact (and variational) solution of the many electron problem, so it was also called "Coupled Pair MET (CPMET)". J. Čížek used the correlation function of MET and used Goldstone type perturbation theory to get the energy expression while original MET was completely variational. Čížek first developed the Linear-CPMET and then generalized it to full CPMET in the same paper in 1966. He then also performed an application of it on benzene molecule with O. Sinanoğlu in the same year. Because MET is somewhat difficult to perform computationally, CC is simpler and thus, in today's computational chemistry, CC is the best variant of MET and gives highly accurate results in comparison to experiments.[4][5][6]
Wavefunction ansatz[edit]
Coupled-cluster theory provides the exact solution to the time-independent Schrödinger equation
\(\hat{H} |\Psi\rangle = E |\Psi\rangle,\)
where \(\hat{H}\) is the Hamiltonian of the system, \(|\Psi\rangle\) the exact wavefunction, and E the exact energy of the ground state. Coupled-cluster theory can also be used to obtain solutions for excited states using, for example, linear-response,[7] equation-of-motion,[8] state-universal multi-reference coupled cluster,[9] or valence-universal multi-reference coupled cluster[10] approaches.
The wavefunction of the coupled-cluster theory is written as an exponential ansatz:
\(|\Psi\rangle = e^{\hat{T}} |\Phi_0\rangle,\)
where \(|\Phi_0\rangle\) is the reference wave function, which is typically a Slater determinant constructed from Hartree–Fock molecular orbitals, though other wave functions such as configuration interaction, multi-configurational self-consistent field, or Brueckner orbitals can also be used. \(\hat{T}\) is the cluster operator which, when acting on \(|\Phi_0\rangle\), produces a linear combination of excited determinants from the reference wave function (see the section below for greater detail).
The choice of the exponential ansatz is opportune because (unlike other ansatzes, for example, configuration interaction) it guarantees the size extensivity of the solution. Size consistency in CC theory, also unlike other theories, does not depend on the size consistency of the reference wave function. This is easily seen, for example, in the single bond breaking of F2 when using a restricted Hartree-Fock (RHF) reference, which is not size consistent, at the CCSDT level of theory which provides an almost exact, full CI-quality, potential energy surface and does not dissociate the molecule into F+ and F− ions, like the RHF wave function, but rather into two neutral F atoms.[11] If one were to use, for example, the CCSD, CCSD[T], or CCSD(T) levels of theory, they would not provide reasonable results for the bond breaking of F2, with the latter two approaches providing unphysical potential energy surfaces,[12] though this is for reasons other than just size consistency.
A criticism of the method is that the conventional implementation employing the similarity-transformed Hamiltonian (see below) is not variational, though there are bi-variational and quasi-variational approaches that have been developed since the first implementations of the theory. While the above ansatz for the wave function itself has no natural truncation, for other properties, such as the energy, there is a natural truncation when examining expectation values, which has its basis in the linked- and connected-cluster theorems, and thus does not suffer from issues such as lack of size extensivity, unlike the variational configuration interaction approach.
Cluster operator[edit]
The cluster operator is written in the form
\(\hat{T} = \hat{T}_1 + \hat{T}_2 + \hat{T}_3 + \cdots,\)
where \(\hat{T}_1\) is the operator of all single excitations, \(\hat{T}_2\) is the operator of all double excitations and so forth. In the formalism of second quantization these excitation operators are expressed as
\(\hat{T}_1 = \sum_{i} \sum_{a} t_{i}^{a} \, \hat{a}^{\dagger}_{a} \hat{a}_{i},\)
\(\hat{T}_2 = \frac{1}{4} \sum_{i,j} \sum_{a,b} t_{ij}^{ab} \, \hat{a}^{\dagger}_{a} \hat{a}^{\dagger}_{b} \hat{a}_{j} \hat{a}_{i},\)
and for the general n-fold cluster operator
\(\hat{T}_n = \frac{1}{(n!)^2} \sum_{i_1,\ldots,i_n} \sum_{a_1,\ldots,a_n} t_{i_1 \ldots i_n}^{a_1 \ldots a_n} \, \hat{a}^{\dagger}_{a_1} \cdots \hat{a}^{\dagger}_{a_n} \hat{a}_{i_n} \cdots \hat{a}_{i_1}.\)
In the above formulae \(\hat{a}^{\dagger}\) and \(\hat{a}\) denote the creation and annihilation operators respectively, and i, j stand for occupied (hole) and a, b for unoccupied (particle) orbitals (states). The creation and annihilation operators in the coupled cluster terms above are written in canonical form, where each term is in normal order with respect to the Fermi vacuum \(|\Phi_0\rangle\). Being the one-particle cluster operator and the two-particle cluster operator, \(\hat{T}_1\) and \(\hat{T}_2\) convert the reference function \(|\Phi_0\rangle\) into a linear combination of the singly and doubly excited Slater determinants, respectively, if applied without the exponential (such as in CI, where a linear excitation operator is applied to the wave function). Applying the exponential cluster operator to the wave function, one can then generate more than doubly excited determinants due to the various powers of \(\hat{T}_1\) and \(\hat{T}_2\) that appear in the resulting expressions (see below). Solving for the unknown coefficients \(t_{i}^{a}\) and \(t_{ij}^{ab}\) is necessary for finding the approximate solution \(|\Psi\rangle\).
The exponential operator \(e^{\hat{T}}\) may be expanded as a Taylor series, and if we consider only the \(\hat{T}_1\) and \(\hat{T}_2\) cluster operators of \(\hat{T}\), we can write:
\(e^{\hat{T}} = 1 + \hat{T} + \frac{1}{2!}\hat{T}^2 + \cdots = 1 + \hat{T}_1 + \hat{T}_2 + \frac{1}{2}\hat{T}_1^2 + \hat{T}_1\hat{T}_2 + \frac{1}{2}\hat{T}_2^2 + \cdots\)
Though this series is finite in practice, because the number of occupied molecular orbitals is finite, as is the number of excitations, it is still very large, to the extent that even modern massively parallel computers are inadequate, except for problems of a dozen or so electrons and very small basis sets, when all contributions to the cluster operator are considered and not just \(\hat T_1\) and \(\hat T_2\). Often, as was done above, the cluster operator includes only singles and doubles (see CCSD below), as this offers a computationally affordable method that performs better than MP2 and CISD, but is usually not very accurate. For accurate results some form of triples (approximate or full) is needed, even near the equilibrium geometry (in the Franck-Condon region), and especially when breaking single bonds or describing diradical species (these latter examples are often what is referred to as multi-reference problems, since more than one determinant has a significant contribution to the resulting wave function). For double bond breaking, and more complicated problems in chemistry, quadruple excitations often become important as well, though usually they are small for most problems, and as such, the contribution of \(\hat T_5\), \(\hat T_6\), etc. to the operator \(\hat T\) is typically small. Furthermore, if the highest excitation level in the \(\hat T\) operator is n,
then Slater determinants for an N-electron system excited more than n (< N) times may still contribute to the coupled-cluster wave function because of the non-linear nature of the exponential ansatz, and therefore coupled cluster terminated at \(\hat T_n\) usually recovers more correlation energy than CI with maximum n excitations.
Coupled-cluster equations
The Schrödinger equation can be written, using the coupled-cluster wave function, as
\[ \hat H\, e^{\hat T}|\Phi_0\rangle = E\, e^{\hat T}|\Phi_0\rangle, \]
where there are a total of q coefficients (t-amplitudes) to solve for. To obtain the q equations, first, we multiply the above Schrödinger equation on the left by \(e^{-\hat T}\) and then project onto the entire set of up to m-tuply excited determinants, where m is the highest-order excitation included in \(\hat T\), that can be constructed from the reference wave function \(|\Phi_0\rangle\); collectively these are denoted \(|\Phi^{*}\rangle\). Individually, \(|\Phi_i^a\rangle\) are singly excited determinants where the electron in orbital i has been excited to orbital a; \(|\Phi_{ij}^{ab}\rangle\) are doubly excited determinants where the electron in orbital i has been excited to orbital a and the electron in orbital j has been excited to orbital b, etc. In this way we generate a set of coupled energy-independent non-linear algebraic equations needed to determine the t-amplitudes:
\[ \langle \Phi_0 |\, e^{-\hat T}\hat H e^{\hat T} \,|\Phi_0\rangle = E, \]
\[ \langle \Phi^{*} |\, e^{-\hat T}\hat H e^{\hat T} \,|\Phi_0\rangle = 0 \]
(note, we have made use of \(e^{-\hat T} e^{\hat T} = \hat 1\), the identity operator, and we are also assuming that we are using orthogonal orbitals, though this does not necessarily have to be true, e.g., valence bond orbitals, and in such cases the last set of equations are not necessarily equal to zero), the latter being the equations to be solved and the former the equation for the evaluation of the energy.
Considering the basic CCSD method:
\[ |\Psi_{\mathrm{CCSD}}\rangle = e^{\hat T_1 + \hat T_2}|\Phi_0\rangle, \]
in which the similarity-transformed Hamiltonian, \(\bar H = e^{-\hat T}\hat H e^{\hat T}\), can be explicitly written down using Hadamard's formula in Lie algebra, also called Hadamard's lemma (see also the Baker–Campbell–Hausdorff (BCH) formula, though note they are different, in that Hadamard's formula is a lemma of the BCH formula):
\[ \bar H = \hat H + [\hat H,\hat T] + \tfrac{1}{2!}[[\hat H,\hat T],\hat T] + \tfrac{1}{3!}[[[\hat H,\hat T],\hat T],\hat T] + \cdots = \big(\hat H\, e^{\hat T}\big)_C. \]
The subscript C designates the connected part of the corresponding operator expression.
The resulting similarity transformed Hamiltonian is non-Hermitian, resulting in different left- and right-handed vectors (wave functions) for the same state of interest (this is what is often referred to in coupled cluster theory as the biorthogonality of the solution, or wave function, though it also applies to other non-Hermitian theories as well). The resulting equations are a set of non-linear equations which are solved in an iterative manner. Standard quantum chemistry packages (GAMESS (US), NWChem, ACES II, etc.) solve the coupled cluster equations using the Jacobi method and direct inversion of the iterative subspace (DIIS) extrapolation of the t-amplitudes to accelerate convergence.
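As a rough illustration of the iterative scheme just mentioned, the sketch below shows a generic Jacobi-style fixed-point update accelerated with DIIS extrapolation. It is a minimal sketch only: the update function g, the flattened amplitude vector, and all settings are hypothetical placeholders, not the internals or API of any of the packages named above.

```python
import numpy as np

def diis_solve(g, t0, max_hist=6, tol=1e-10, max_iter=200):
    """Solve the fixed-point problem t = g(t) by Jacobi-style iteration with DIIS.

    g  : callable returning the updated (flattened) amplitude vector
    t0 : initial amplitude guess as a 1-D numpy array
    """
    t = t0.copy()
    trials, errors = [], []
    for _ in range(max_iter):
        t_new = g(t)
        err = t_new - t                     # fixed-point residual ("error vector")
        trials.append(t_new)
        errors.append(err)
        if np.linalg.norm(err) < tol:
            return t_new
        trials, errors = trials[-max_hist:], errors[-max_hist:]
        n = len(errors)
        # DIIS B matrix with the Lagrange-multiplier row/column enforcing sum(c) = 1
        B = -np.ones((n + 1, n + 1))
        B[-1, -1] = 0.0
        for i in range(n):
            for j in range(n):
                B[i, j] = np.dot(errors[i], errors[j])
        rhs = np.zeros(n + 1)
        rhs[-1] = -1.0
        coeffs = np.linalg.solve(B, rhs)[:n]
        # extrapolated amplitudes: linear combination of the stored trial vectors
        t = sum(c * v for c, v in zip(coeffs, trials))
    return t
```

In practice g would be the Jacobi update of the amplitude equations (new amplitudes from the current ones divided by orbital-energy denominators); here it can be any contractive map supplied by the caller.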
Types of coupled-cluster methods
The classification of traditional coupled-cluster methods rests on the highest number of excitations allowed in the definition of \(\hat T\). The abbreviations for coupled-cluster methods usually begin with the letters "CC" (for coupled cluster) followed by
1. S – for single excitations (shortened to singles in coupled-cluster terminology)
2. D – for double excitations (doubles)
3. T – for triple excitations (triples)
4. Q – for quadruple excitations (quadruples)
Thus, the \(\hat T\) operator in CCSDT has the form \(\hat T = \hat T_1 + \hat T_2 + \hat T_3\).
Terms in round brackets indicate that these terms are calculated based on perturbation theory. For example, the CCSD(T) method means:
1. Coupled cluster with a full treatment of singles and doubles.
2. An estimate to the connected triples contribution is calculated non-iteratively using Many-Body Perturbation Theory arguments.
General description of the theory
The complexity of equations and the corresponding computer codes, as well as the cost of the computation increases sharply with the highest level of excitation. For many applications CCSD, while relatively inexpensive, does not provide sufficient accuracy except for the smallest systems (approximately 2 to 4 electrons), and often an approximate treatment of triples is needed. The most well known coupled cluster method that provides an estimate of connected triples is CCSD(T), which provides a good description of closed-shell molecules near the equilibrium geometry, but breaks down in more complicated situations such as bond breaking and diradicals. Another popular method that makes up for the failings of the standard CCSD(T) approach is CR-CC(2,3), where the triples contribution to the energy is computed from the difference between the exact solution and the CCSD energy, and is not based on perturbation theory arguments. More complicated coupled-cluster methods such as CCSDT and CCSDTQ are used only for high-accuracy calculations of small molecules. The inclusion of all n levels of excitation for the n-electron system gives the exact solution of the Schrödinger equation within the given basis set, within the Born–Oppenheimer approximation (although schemes have also been drawn up to work without the BO approximation[13][14]).
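For orientation only, here is a minimal sketch of how such a calculation might be run with the PySCF package (not one of the codes named above); the F\(_2\) geometry, the cc-pVDZ basis set, and the printout are illustrative assumptions, not values taken from the text.

```python
# Minimal usage sketch with PySCF; geometry and basis are illustrative only.
from pyscf import gto, scf, cc

mol = gto.M(atom="F 0 0 0; F 0 0 1.41", basis="cc-pvdz")  # F2 near equilibrium
mf = scf.RHF(mol).run()           # restricted Hartree-Fock reference
mycc = cc.CCSD(mf).run()          # iterative CCSD amplitude equations
e_t = mycc.ccsd_t()               # non-iterative, perturbative (T) correction
print("E(CCSD)    =", mycc.e_tot)
print("E(CCSD(T)) =", mycc.e_tot + e_t)
```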
One possible improvement to the standard coupled-cluster approach is to add terms linear in the interelectronic distances through methods such as CCSD-R12. This improves the treatment of dynamical electron correlation by satisfying the Kato cusp condition and accelerates convergence with respect to the orbital basis set. Unfortunately, R12 methods invoke the resolution of the identity which requires a relatively large basis set in order to be a good approximation.
The coupled-cluster method described above is also known as the single-reference (SR) coupled-cluster method because the exponential ansatz involves only one reference function \(|\Phi_0\rangle\). The standard generalizations of the SR-CC method are the multi-reference (MR) approaches: state-universal coupled cluster (also known as Hilbert-space coupled cluster), valence-universal coupled cluster (or Fock-space coupled cluster) and state-selective coupled cluster (or state-specific coupled cluster).
Historical accounts
In the first reference below, Kümmel comments:
Considering the fact that the CC method was well understood around the late fifties it looks strange that nothing happened with it until 1966, when Jiří Čížek published his first paper on a quantum chemistry problem. He had looked into the 1957 and 1960 papers published in Nuclear Physics by Fritz and myself. I always found it quite remarkable that a quantum chemist would open an issue of a nuclear physics journal. I myself at the time had almost given up the CC method as not tractable and, of course, I never looked into the quantum chemistry journals. The result was that I learnt about Jiří's work as late as in the early seventies, when he sent me a big parcel with reprints of the many papers he and Joe Paldus had written until then.
Josef Paldus also wrote a first-hand account of the origins of coupled-cluster theory, its implementation, and its exploitation in electronic wave function determination; his account is primarily about the making of coupled-cluster theory rather than about the theory itself.[15]
Relation to other theories
Configuration Interaction
The \(\hat C_j\) excitation operators defining the CI expansion of an N-electron system for the wave function \(|\Psi_0\rangle\),
\[ |\Psi_0\rangle = \left(1 + \hat C_1 + \hat C_2 + \cdots + \hat C_N\right)|\Phi_0\rangle, \]
are related to the cluster operators \(\hat T\); since in the limit of including up to \(\hat T_N\) in the cluster operator the CC theory must be equal to full CI, we obtain the following relationships[16][17]
\[ \hat C_1 = \hat T_1,\qquad \hat C_2 = \hat T_2 + \tfrac{1}{2}\hat T_1^2,\qquad \hat C_3 = \hat T_3 + \hat T_1\hat T_2 + \tfrac{1}{6}\hat T_1^3,\qquad \hat C_4 = \hat T_4 + \tfrac{1}{2}\hat T_2^2 + \hat T_1\hat T_3 + \tfrac{1}{2}\hat T_1^2\hat T_2 + \tfrac{1}{24}\hat T_1^4, \]
etc. For general relationships see J. Paldus, in Methods in Computational Molecular Physics, Vol. 293 of Nato Advanced Study Institute Series B: Physics, edited by S. Wilson and G.H.F. Diercksen (Plenum, New York, 1992), pp. 99–194.
Symmetry Adapted Cluster
The symmetry-adapted cluster (SAC)[18][19] approach determines the (spin- and) symmetry-adapted cluster operator \(\hat S\),
\[ |\Psi_{\mathrm{SAC}}\rangle = e^{\hat S}|\Phi_0\rangle, \]
by solving the following system of energy-dependent equations,
\[ \langle \Phi_0 |\, (\hat H - E)\, e^{\hat S} \,|\Phi_0\rangle = 0, \]
\[ \langle \Phi_{i_1\cdots i_n}^{a_1\cdots a_n} |\, (\hat H - E)\, e^{\hat S} \,|\Phi_0\rangle = 0, \qquad n = 1,\ldots, M_s, \]
where \(\Phi_{i_1\cdots i_n}^{a_1\cdots a_n}\) are the n-tuply excited determinants relative to \(|\Phi_0\rangle\) (in practical implementations they are usually the spin- and symmetry-adapted configuration state functions), and \(M_s\) is the highest order of excitation included in the SAC operator. If all of the nonlinear terms in \(e^{\hat S}\) are included, then the SAC equations become equivalent to the standard coupled-cluster equations of Jiří Čížek. This is due to the cancellation of the energy-dependent terms with the disconnected terms contributing to the product of \(\hat H\) and \(e^{\hat S}\), resulting in the same set of nonlinear energy-independent equations. Typically, all nonlinear terms except \(\tfrac{1}{2}\hat S_2^2\) are dropped, as higher-order nonlinear terms are usually small.[20]
References
1. ^ Kümmel, H. G. (2002). "A biography of the coupled cluster method". In Bishop, R. F.; Brandes, T.; Gernoth, K. A.; Walet, N. R.; Xian, Y. Recent progress in many-body theories Proceedings of the 11th international conference. Singapore: World Scientific Publishing. pp. 334–348. ISBN 978-981-02-4888-8.
2. ^ Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley & Sons, Ltd. pp. 191–232. ISBN 0-471-48552-7.
3. ^ Shavitt, Isaiah; Bartlett, Rodney J. (2009). Many-Body Methods in Chemistry and Physics: MBPT and Coupled-Cluster Theory. Cambridge University Press. ISBN 978-0-521-81832-2.
4. ^ Čížek, Jiří (1966). "On the Correlation Problem in Atomic and Molecular Systems. Calculation of Wavefunction Components in Ursell-Type Expansion Using Quantum-Field Theoretical Methods". The Journal of Chemical Physics. 45 (11): 4256. Bibcode:1966JChPh..45.4256C. doi:10.1063/1.1727484.
5. ^ Sinanoğlu, O.; Brueckner, K. (1971). Three approaches to electron correlation in atoms. Yale Univ. Press. ISBN 0-300-01147-4. and references therein
6. ^ Si̇nanoğlu, Oktay (1962). "Many-Electron Theory of Atoms and Molecules. I. Shells, Electron Pairs vs Many-Electron Correlations". The Journal of Chemical Physics. 36 (3): 706. Bibcode:1962JChPh..36..706S. doi:10.1063/1.1732596.
7. ^ Monkhorst, H.J. (1977). "Calculation of properties with the coupled-cluster method". International Journal of Quantum Chemistry. 12, S11: 421. doi:10.1002/qua.560120850.
8. ^ Stanton, John F.; Bartlett, Rodney J. (1993). "The equation of motion coupled-cluster method. A systematic biorthogonal approach to molecular excitation energies, transition probabilities, and excited state properties". The Journal of Chemical Physics. 98 (9): 7029. Bibcode:1993JChPh..98.7029S. doi:10.1063/1.464746.
9. ^ Jeziorski, B.; Monkhorst, H. (1981). "Coupled-cluster method for multideterminantal reference states". Physical Review A. 24 (4): 1668. Bibcode:1981PhRvA..24.1668J. doi:10.1103/PhysRevA.24.1668.
10. ^ Lindgren, I.; Mukherjee, Debashis (1987). "On the connectivity criteria in the open-shell coupled-cluster theory for general model spaces". Physics Reports. 151 (2): 93. Bibcode:1987PhR...151...93L. doi:10.1016/0370-1573(87)90073-1.
11. ^ Kowalski, K.; Piecuch, P. (2001). "A comparison of the renormalized and active-space coupled-cluster methods: Potential energy curves of BH and F2". Chemical Physics Letters. 344: 165. Bibcode:2001CPL...344..165K. doi:10.1016/s0009-2614(01)00730-8.
12. ^ Ghose, K.B.; Piecuch, P.; Adamowicz, L. (1995). "Improved computational strategy for the state‐selective coupled‐cluster theory with semi‐internal triexcited clusters: Potential energy surface of the HF molecule". Journal of Chemical Physics. 103 (21): 9331. Bibcode:1995JChPh.103.9331G. doi:10.1063/1.469993.
13. ^ Monkhorst, Hendrik J (1987). "Chemical physics without the Born-Oppenheimer approximation: The molecular coupled-cluster method". Physical Review A. 36 (4): 1544–1561. Bibcode:1987PhRvA..36.1544M. doi:10.1103/PhysRevA.36.1544. PMID 9899035.
14. ^ Nakai, Hiromi; Sodeyama, Keitaro (2003). "Many-body effects in nonadiabatic molecular theory for simultaneous determination of nuclear and electronic wave functions: Ab initio NOMO/MBPT and CC methods". The Journal of Chemical Physics. 118 (3): 1119. Bibcode:2003JChPh.118.1119N. doi:10.1063/1.1528951.
15. ^ Paldus, J. (2005). "The beginnings of coupled-cluster theory: an eyewitness account". In Dykstra, C. Theory and Applications of Computational Chemistry: The First Forty Years. Elsevier B.V. p. 115.
16. ^ Paldus, J. (1981). Diagrammatic Methods for Many-Fermion Systems (Lecture Notes ed.). University of Nijmegen, Nijmegen, The Netherlands.
17. ^ Bartlett, R.J.; Dykstra, C.E.; Paldus, J. (1984). Dykstra, C.E., ed. Advanced Theories and Computational Approaches to the Electronic Structure of Molecules. p. 127.
18. ^ Nakatsuji, H.; Hirao, K. (1977). "Cluster expansion of the wavefunction. Pseudo-orbital theory applied to spin correlation". Chemical Physics Letters. 47 (3): 569. Bibcode:1977CPL....47..569N. doi:10.1016/0009-2614(77)85042-2.
19. ^ Nakatsuji, H.; Hirao, K. (1978). "Cluster expansion of the wavefunction. Symmetry‐adapted‐cluster expansion, its variational determination, and extension of open‐shell orbital theory". Journal of Chemical Physics. 68 (5): 2053. Bibcode:1978JChPh..68.2053N. doi:10.1063/1.436028.
20. ^ Ohtsuka, Y.; Piecuch, P.; Gour, J.R.; Ehara, M.; Nakatsuji, H. (2007). "Active-space symmetry-adapted-cluster configuration-interaction and equation-of-motion coupled-cluster methods for high accuracy calculations of potential energy surfaces of radicals". Journal of Chemical Physics. 126 (16): 164111. Bibcode:2007JChPh.126p4111O. doi:10.1063/1.2723121. PMID 17477593.
Prof. Heikki Hyötyniemi
AS-74.4192 Elementary Cybernetics
Lecture 3: Towards Modeling of Emergence
Helsinki University of Technology, 6.2.2009
(v.2009.04.10, only a rough machine translation, not cleaned yet!)
[0:00 / 1]
Welcome once again -- the subject this time is emergence.
We need the courage to take the bull by the horns, because unless we somehow pin down the idea of emergence, we cannot do complex systems modeling.
[0:22 / 2]
Here we have a bit of the same problem as Alice in Wonderland: if you ask the Cheshire Cat "Which direction should I go?", the Cat asks "Where do you want to get to?"
And if the answer is "I don't really know", the Cat says "Then it doesn't matter which way you go" -- any direction will do.
This Escher picture fits the situation quite well: if you climb some set of stairs upward, step by step, you suddenly find yourself at the bottom of the stairs again, and it goes on -- typically you end up back at square one, going around in a cycle.
Well, during this lecture we try to find a coherent way forward, so that we start piling understanding on top of the earlier consensus.
The goal is that even if we have a very holistic problem, the tools for dealing with it should be reductionistic -- because the only tools we have are reductionistic.
To begin with, we could look from the top down at this modeling process in general, at the way these complex processes have traditionally been approached.
This is partly a repetition of what we covered last time.
[2:03 / 3]
A typical approach is this: pushing upward from the basic level.
There are some simple components for which we have known, simple models.
We then gather them together -- like building a house out of bricks -- and hope that when enough bricks are put together, something like a house emerges.
A typical example is growth models of this sort.
We start, say, from exponential growth, and when we notice that it is not sufficient, we perhaps move on to a logistic model, or a Monod model.
That is how such models come about.
This publication is an example of a bacterial model in which the behavior of the substrate is modeled, together with the formation of acid and the consumption of alcohol.
But the problem in these cases is that even when the basic structure has been built, you have countless free parameters that must be fixed.
You need a large amount of data in order to estimate the parameters.
And if you have run so many experiments on a complex system that you can pin down all of its parameters, you are already very familiar with the system's behavior.
In that case the model no longer gives you much benefit.
This field is a tinkerer's paradise: it is easy to invent new nonlinear terms, and each one can always be turned into a new publication.
The problem, of course, is that such models are very difficult to analyze because they are typically highly nonlinear.
One could almost say that they are hardly even usable.
[4:35 / 4]
Here is a concrete example of how dangerous the underlying assumptions can be.
Suppose we want to model the interaction between grass and hares.
We build a model for the grass that is just a pure integrator: the grass keeps growing all the time.
There is a constant growth factor, which means that more grass is produced all the time.
And then here is the model for the hares.
This is connected back in a positive loop: the more hares there are, the faster their number grows.
And the more hares there are, the more grass gets eaten.
If we simulate this, we find that the behavior of this model is absolutely absurd.
At first the grass grows quite correctly, but the exponential growth of the hares is so strong -- on this scale we get tens of thousands of them -- that they eat all the grass, and the linear model even drives the grass biomass negative.
Well, this clearly has to be corrected in some way.
[6:02 / 5]
We add a Monod term to the model.
This means that growth slows down when there is not enough food.
Now we see that the behavior already starts to resemble, in some way, meaningful population dynamics.
A cycle of this sort appears.
The problem is that the grass still reaches the zero level.
So the grass biomass is sometimes zero.
That certainly has to be corrected for the model to be credible.
[6:50 / 6]
We add a constraint of this sort: the grass is limited from below at zero.
What happens then is that the grass decreases to zero and the number of hares settles to a constant value.
But this is still physically absurd, because the Monod model does not take into account that if the food really goes to zero, the number of hares should not remain constant thereafter, as this model suggests.
[7:25 / 7]
This will not do, so we extend the model further by bringing in a logistic model.
We note that the system begins to behave more sensibly: the amount of grass settles to a reasonable level.
And the number of hares also settles to some level.
Still, the values are all the time somewhat off the mark, so we should now start tuning the parameters to match the observations.
In short, tuning the model in this way is an endless task if we proceed bottom-up, in the way it is traditionally done, in the engineering manner.
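As a minimal sketch of the kind of model being described -- logistic grass growth, Monod-limited hare growth, and grazing -- one could write something like the following; all parameter values, and the explicit hare-mortality term, are invented for illustration and are not given in the lecture.

```python
from scipy.integrate import solve_ivp

# All numbers below are invented for illustration; the lecture gives no values.
r_grass, K_grass = 1.0, 100.0      # logistic growth of the grass
r_hare, K_monod = 0.5, 20.0        # hare growth rate and Monod half-saturation
graze, mortality = 0.02, 0.1       # grazing rate and hare mortality

def rhs(t, y):
    grass, hares = y
    dgrass = r_grass * grass * (1.0 - grass / K_grass) - graze * hares * grass
    dhares = r_hare * hares * grass / (K_monod + grass) - mortality * hares
    return [dgrass, dhares]

sol = solve_ivp(rhs, (0.0, 200.0), [50.0, 5.0])   # grass and hare trajectories
```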
[8:18 / 8]
The example above was very simple, but when you take a sufficiently large model, it starts to look credible.
And models like this are indeed used quite widely.
Take the Forrester model, on which, for example, the report written for the Club of Rome was based.
When this Forrester block model was simulated, it was found that by the year 2000 the world would be in a mess: all the raw materials would be exhausted, the oil would be gone, and the world would face a kind of catastrophe.
That world model was built in exactly the same way as the one I just constructed.
In other words, it states that natural resources are continuously consumed, the stock of natural resources decreases, the number of people grows, and so on.
But the problem with such hand-coded models is typically that they model pretty much exactly what they were built to model.
As long as the results are not what we expected, the model is tweaked further.
The end result is typically that the model cannot tell us much more than what the model builder's starting assumptions already were.
Since the models are so complicated and there are so many free parameters, they can, quite frankly, be made to agree with any prior assumption.
The worst thing, of course, is that the qualitative behavior depends on the parameters.
If some feedback factor is too large, the model becomes unstable, for example, while a smaller parameter value, through negative feedback, stabilizes the system.
So, are there other kinds of approaches?
[10:47 / 9]
The ambitious goal would be to find methods that approach the problems head on -- rather than starting from the bottom, collecting physical models and building upward, we try to reach the behavior directly.
So we start from the chaos and aim to arrive at order, that is, at a model.
Earlier we went in the opposite direction: we started from a simple model and ended up with chaos.
[11:30 / 10]
Here is a simple example of how this kind of holistic, systemic way of thinking can in principle be surprisingly effective.
The example is a refrigerator case.
It is just an invented example: imagine a 1000-watt refrigerator with an efficiency of 30%, plugged into the mains, but with the door left open.
The refrigerator is in a room that is insulated from its environment.
What happens to the temperature of the room, given that the refrigerator is presumably pumping out cold all the time?
Any suggestions? Does the room warm up or cool down?
[I would suggest that it warms up.]
Yes -- this is already the first level of systemic thinking: the refrigerator is, after all, a kind of heat engine, and as a whole it produces more heat than cold.
But to take full advantage of the systemic approach, we should look at the entire insulated room as a system into which 1000 watts of power flows as heat.
That, in fact, is the whole solution to the problem.
It is quite irrelevant what the 1000 watts is used for inside the room -- in the end there are always 1000 watts coming in at every moment.
So the room warms up at a 1000-watt power level, regardless of the efficiency of the refrigerator or anything else.
Such powerful results can in principle be achieved if we have a simplified, closed-off system.
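The argument can be summarized in one energy-balance line (a sketch of the reasoning, not a formula shown on the slides):

```latex
% Energy balance for the insulated room: with the door open, everything the
% refrigerator pumps around stays inside, so only the electrical input matters.
P_{\text{room heating}} \;=\; P_{\text{electrical input}} \;=\; 1000\ \mathrm{W},
\qquad \text{independent of the 30\,\% efficiency.}
```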
The sad thing, of course, is -- as we found last time -- that these cybernetic systems are in no way simple systems.
For example, the assumption of cutting the system out of its environment and insulating it is never true.
Typically it is assumed that these must be open systems, so that the accumulation of entropy can be accounted for.
We have to find some kind of compromise between these approaches -- the one that starts from the very bottom and the one that looks at the whole from the top -- some way to find a link between them.
[14:44 / 11]
One approach is coarse-graining, or one could speak of granularization.
You take a complex system and, more or less by hand, pick out its key quantities.
Suppose the behavior of the complex system can be described by a small number of key variables.
Here is perhaps an intuitive enough example: a person has certain key features, and those features can be captured in a painting.
Those who lived through the 1970s will know immediately that this is Kekkonen.
Even though the picture is very heavily coarse-grained.
[15:45 / 12]
In coarse-graining there is always the risk that if you go too far, to descriptions that are too simple, the description no longer corresponds to anything -- it is no longer capable of describing the real world at any level.
This statue is the Urho Kekkonen statue over at the Kajaani cemetery, on church grounds, and it is the artist's vision that this form captures Kekkonen best.
Maybe this is a political vision.
But in general terms, the idea of holism -- the idea in holistic modeling -- is to get out more than what is in the parts alone.
But if this approach is done incorrectly, what remains is more hole than whole.
So we are left with hardly anything.
[17:02 / 13]
Well, to connect the lower-level models -- the only kind of models our tools allow us to build -- with this higher-level holistic thinking, this holistic intuition, we must at some level answer the question: what is emergence?
Because complex systems are always described using this concept of emergence.
A complex system is typically one in which behavior emerges that is not present in the basic components.
And if we really want an ambitious approach to the problem of modeling complex systems, we must at some level take a position on what emergence is.
So you could, in your lecture diaries, consider what you think emergence is and how you would approach the problem.
The remaining lectures deal with this root problem: how we approach it, and how these starting points lead to a specific approach whose name is neocybernetics.
But you have to remember that this is only one approach to these matters, and if you develop a different kind of approach, you can give it another name.
These questions are still very much open.
[18:52 / 14]
In complex systems research, hardly anything is fixed in advance.
You can choose your approach, concepts, methods, and application areas quite freely, and at least for the moment it is an interesting field to study, because, as we noted last time, this research field attracts very emergence-minded people.
You have quite a lot of freedom there.
But in order to achieve something concrete, this course will now focus specifically on neocybernetics.
We go into detail, see what follows from the premises, and then, when a system is built on these starting points, we can consider what kind of visible, emergent properties appear on top.
We now define the approaches and concepts, and it is worth noting that originally these things were not so straightforward -- it has all been very iterative, and only afterwards can it be interpreted so that it is summarized into this kind of set of basic ideas.
But intuition was originally the driving force.
[20:45 / 15]
There really are a lot of conflicting intuitions about these complex systems, as we noted last time.
And now we have indeed chosen a line, and we try to justify why that particular line has been selected.
Already last time we noted that a very large number of complexity researchers focus on the surface form -- fractal researchers, for example, are largely satisfied if some formula produces fractals and complex behavior, and ignore to a great extent how that formula could arise in an environment, how a gene could implement such a function, for example.
Wolfram's cellular automata are examples of this approach, too.
And if you read a book named Emergence, there is a comparison of this sort: the complex system of the brain looks surprisingly similar in shape to the map of the city of Hamburg in the Middle Ages.
Is there not some link between these two complex systems?
In a sense there is a link, because in both cases local actors -- neurons, or people -- start to form these kinds of structures, and this growth does, in some sense, have a common underlying explanation.
But instead of examining the result, the outcome of this kind of growth, it is more fruitful to examine what functions these people, and these brain cells, actually carry out -- and why they have, after all, proven to be evolutionarily stable structures.
We prefer to explore these deep structures rather than the surface structures.
[23:19 / 16]
On this course we now stipulate that these deep structures are precisely the emergent patterns.
In itself this is not a very revolutionary statement, but it ties these concepts together -- we would otherwise have to take a position on what the deep structures are and what the emergent features are; here they are simply defined to be the same, and we will use both terms, but they point to the same thing.
Let us now get concrete -- try to model, or at least consider, concrete examples of emergent behavior, to get some kind of intuition about what they all have in common.
We certainly have examples of very specific emergent phenomena,
[24:32 / 17]
and this next slide illustrates one domain with emergence at various levels.
Here, in short, is the modeling of a gas at various levels of abstraction, at a range of coarse-graining levels.
At the very lowest level, everything -- including the gas particles -- can be modeled with elementary-particle, quantum mechanical tools.
That is the lowest level here.
There the laws are stochastic, random -- if we can even speak of laws at that level.
And if you want to model a gas with a large number of particles, this quantum level, the elementary-particle level, is not very useful, because we would have a tremendous number of Schrödinger equations that we would somehow have to simplify.
This simplification has, in fact, been done -- we can rely on those results.
It turns out that at a sufficient level of coarse-graining, the atoms and their behavior -- their interactions, ultimately due to the quarks inside them -- can, in the ideal gas model, be regarded simply as billiard balls.
So we get a deterministic model of this sort, in which the atoms behave like balls colliding with each other.
This is handled with Newton's mechanics, which governs the world of such gases.
This is significantly better: if you have a large number of atoms, or ideal-gas particles, this model is much more intuitive, user-friendly, and useful, all in all, than the quantum-level model, the elementary-particle model.
But then, if you have millions upon millions of particles, it becomes quite impossible to track all the collisions, and we are forced to accept that the matter must be examined in some statistical way.
But it turns out that in terms of macroscopic phenomena, these emergent quantities -- such as temperature and pressure -- describe the state of the gas extremely well.
We do not need to know the behavior of the individual molecules or particles; it is enough to know that everything we can actually measure -- temperature and pressure -- are statistical functions of those particles.
We know, for example, that temperature is directly proportional to the particles' average kinetic energy, which in turn is proportional to the average of the squared speed.
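For reference, the standard kinetic-theory relation behind this statement (assuming an ideal monatomic gas; it is not written out in the lecture) is:

```latex
\tfrac{3}{2}\, k_B T \;=\; \Big\langle \tfrac{1}{2}\, m v^2 \Big\rangle
\qquad\Longrightarrow\qquad T \;\propto\; \langle v^2 \rangle .
```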
But then, with large volumes of gas, the particles no longer occupy all parts of the tank with the same probability, for example.
And if you have a macroscopically large tank, the influence of the environment begins to differ in different parts of it.
It follows that, for example, temperature differences inside the gas start to become significant between different locations.
This means that we have to start taking convection into account and, for large quantities of gas, various turbulent phenomena.
So the simple temperature-and-pressure model -- the assumption that these quantities are the same throughout the volume -- is no longer true.
We have to look at every point separately.
And then statistical models of this sort come in again, because turbulence, for example, cannot properly be handled with anything other than a statistical model.
And if we go on from here -- imagine a fully turbulent, formless gas mass -- one might think we can no longer say anything concrete, since even the statistical models give very diffuse results; but it turns out that when we again move up a level, this fully mixed, fully turbulent gas tank begins to behave as an ideal mixer -- we can assume that every point in the tank is alike, that when it is fully mixed, the gas has the same concentration and the same temperature at every point.
We arrive at a situation where we can once again model the system with lumped components, that is, with a single concentration or temperature variable.
Again we have a deterministic model of this sort.
Well, now we are at the level where we have these models for designing a gas tank, but when we want to model an industrial plant with dozens of such tanks, we notice that if we approach the problem in the traditional way -- building its own model for each tank under the ideal-mixer assumption -- then with hundreds of these ideal-mixer models, say a hundred variables in all, the overall design once again becomes something we cannot control: we can no longer say which of the variables are really important, or what kinds of behaviors this hundred-variable whole exhibits once the tanks are connected to each other in various ways.
It is precisely in modern automation systems that this is a problem: even when all the components can be modeled very precisely, nothing can really be said about the overall system's robustness properties or qualitative features without simulation.
This is a challenge for the engineers of our future.
All the basic components underneath are under control, but how the whole is managed and understood -- that would be the next big challenge.
Similarly, outside our own field there is the challenge that even if, in principle, an individual human's behavior is understood, the behavior of the whole cannot be derived from it -- it is again emergent behavior of this sort.
It can be assumed that if this line of development continues, a stochastic level again follows the deterministic one, and a deterministic level follows the stochastic one.
So yet another statistical level can be expected to be needed.
This is really an example of a system in which emergent phenomena follow one another -- and even though these levels are genuinely emergent, in the sense that the upper-level phenomena cannot simply be reduced to the lower-level variables or behaviors -- it is far more convenient to use the upper-level quantities and laws than to return to the lowest level of quantum mechanics -- this is nevertheless a concrete system in which we can in principle trace the upper-level definitions back to the lower-level phenomena, and we can now consider in a little more detail how the new features of such systems emerge.
First, starting from the bottom level and going from one level to the next, the time scales grow.
At the lowest level the phenomena are very fast, and they slow down the higher up we go.
On the other hand, at the lowest level there is a huge number of elementary particles, and the number of variables decreases as we go higher up -- in a certain sense this is exactly what abstraction does: a large number of variables is forgotten, and only some kind of cumulative variables are kept.
Keep this in mind.
[34:42 / 18]
Here is a justification for why it is in a way logical that stochastic and deterministic levels follow one another.
One might think that if there were two deterministic levels in a row, they would collapse into one -- the deterministic upper-level variables could be handled equally well among the deterministic lower-level variables.
Similarly, if there were two stochastic levels in a row, the same stochastic model ought to apply to both.
Let us make a bold generalization of this sort: since, as we just noted, the time scales always get longer, we think of emergence as entering the picture at the stage when, from the point of view of the lower level, time goes to infinity -- that is, there are an infinite number of time steps, or an infinite number of particles colliding.
So on the one hand an infinite number, and on the other infinite time -- we assume that, to sufficient accuracy, if the behavior is statistically meaningful, a stationary situation is reached -- terms like stationarity and so forth will keep coming up.
On the other hand, this is also about ergodicity, for anyone interested in the mathematics, because in this way the ensemble average and the time average are combined -- the view that all the particles are also identical to one another, so that going to infinity along the time axis and along the ensemble axis reveals the same kind of behavior.
So, what happens when the time axis is eliminated?
[36:59 / 19]
The intuition we now adopt is that, in some sense, infinity and emergence are wedded to each other.
If we write a formula of this sort -- integrating some quantity from minus infinity up to the present moment -- and it yields a finite, constant value, which it does if the signal is statistically meaningful and stationary, then we get an expected value out of this expression.
This E operator -- if you think of it strictly as the expectation operator, that usage will be somewhat abused on this course, so it might be wiser at this stage to call the E operator the emergence operator, in which case we do not have that problem; but in practice it is pretty much just the expectation that is at stake.
In practice we then have to approximate this infinity in some way -- we assume that a smaller data set already exhibits this stationarity property; if the data is stationary, a finite amount of data can serve in place of an infinite amount.
This approach -- although it is extremely simple -- has exactly this good side: it is mathematically compact, simple, and unambiguous; if we define that emergence is this -- or, loosening up a bit, that "poor man's emergence" is this -- then we have tools to move forward.
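One plausible way to write the formula being described -- reconstructed from the verbal description, so the exact form is an assumption -- is a long-time average of a stationary signal:

```latex
E\{x\} \;=\; \lim_{T \to \infty}\; \frac{1}{T}\int_{t-T}^{t} x(\tau)\, d\tau
\;\;\approx\;\; \frac{1}{N}\sum_{k=1}^{N} x_k
\quad \text{for a finite sample of stationary data.}
```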
Some intuitively correct features are already visible in this definition: if you have one tree in a forest, or one noise sample, that is quite different from all the others, and it is a single sample with no others like it -- so that it is not statistically significant -- then in the expected value this one tree disappears completely.
That is, it does not affect the forest model we obtain.
Individual particles, or individual samples in time, do not mean anything.
Only behavior that has long-term correlations with something else matters -- behavior that is repetitive, visible in the past and hopefully also in the future, because we build our models at the present moment and hope that knowledge of the past tells us something about the future.
In this sense we must indeed assume that the system is stationary and that its statistical properties are preserved into the future.
[40:36 / 20]
This is a new slide that was not in the old slide sets -- I want to problematize this a little: is emergence really just averaging?
Averaging is, nevertheless, in some way at the core of this emergence.
Take, for example, the question of how the temperature of a gas relates to the underlying particles and their properties: one can say that temperature is really defined by the average kinetic energy in the tank, and the average kinetic energy is proportional to the mean squared speed -- we do not need to know the particles' directions of motion, only their scalar speeds, squared and averaged, and that average is directly proportional to the temperature.
So in this sense temperature is a genuinely emergent feature -- a feature that the individual particles do not have.
But now, as we start to approach genuinely interesting complex systems, a somewhat newer take on this issue is that we want to link emergence also to the interactions between the particles -- not the properties of a single particle, but the mutual couplings between two of them, and the expectation value of those couplings.
In the simplest case, if we have particle i and particle j, each with a property x, we want to compute the expectation value of the product of those properties.
Those familiar with the mathematics will notice immediately that when these are combined, what we end up being interested in is the covariance of the system variables.
We will come back to this next time.
The good thing here is that, as will become clear, we try to maintain linearity in these models as far as possible.
Although the correlation is a nonlinear function, it can still be analyzed further, and models can be built from it, in linear terms.
We will see next time how this succeeds.
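A minimal numerical sketch of this idea -- estimating the "emergence operator" applied to pairwise products from finite stationary data, which for centered variables is just the sample covariance -- might look as follows; the data here is synthetic and purely illustrative:

```python
import numpy as np

# Synthetic, purely illustrative "stationary" data: N samples of m variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))

# Finite-data estimate of E{x_i x_j} for centered variables, i.e. the sample
# covariance matrix of the system variables.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)      # equivalent to np.cov(X, rowvar=False, bias=True)
print(cov)
```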
[43:35 / 21]
Here is yet another counter-intuition of this sort.
Traditionally, models are built for individual actors and individual time points; now we explicitly do not want models of individuals.
This of course has the downside that, at least in principle, we cannot then predict the behavior of a single actor, a single agent.
That is understandable in itself; after all, these actors have free will.
But for a large number of them we can find some laws.
What is good about this approach is that quite a lot of the fundamental problems of complex systems -- problems people have fought over, problems that have caused debates -- can be bypassed in the same way.
For example, in the theory of evolution there has been much debate about whether it is the selfish gene that is the real actor to be modeled, or whether it is the individual, in which these genes operate, that should be modeled.
From the present point of view, the only rational level of examination is really the population in which these genes circulate.
In other words, that is taken as the starting assumption right away.
Another thing: Darwin's starting point, for example, is interested only in the best -- the best survive and reproduce -- whereas this framework is about the entire population, since, after all, a very large fraction of the population continues its lineage, not only the best.
Or, put another way: if only the very best lineage continued, the whole population would die out very quickly.
In other words, the strength of a population -- biological strength -- comes precisely from diversity, from the fact that there are differences.
Basic Darwinian theory cannot really address this effectively at all, because it is so tightly tied to the winner.
To take an example from another area: NP-hard problems can in principle only be solved in non-polynomial time, and a great deal of effort is devoted to finding the best possible solution; but nature is in much the same situation -- it faces an optimization problem, and it tries to find the best solution but does not find it.
It settles into some suboptimal situation, one that is pretty good, reasonably good, but typically not quite the absolute best.
When we start building these cybernetic models, the model is not the single best solution; rather, it is a set of models, each of these sub-models reflecting in its own way a good solution that is able to respond to the challenges of the environment in its own way.
So these models are more a collection than an attempt at one compact model.
In other words, cybernetic models are very much families of models.
[47:47 / 22]
Again, we have an opposing intuition of this sort here.
There are two options when it comes to modeling complex systems.
Simon, in his book on the architecture of complexity, puts the issue exactly this way: there are two choices -- we either start looking at patterns, or at processes.
And we have already decided to start looking at patterns; but the processes are certainly very interesting as well, because when we start looking at these systems in terms of patterns, with control-theoretic methods, we will come back to the processes themselves -- through the back door, as it were, later on -- and in the end process philosophy is very close to what we will be doing.
But taking the processes directly as the starting point is not really our point of departure, as it has been in many other studies.
For example, an artificial intelligence textbook I read at the time states at the outset that all AI methods are interpreted in a process framework, the framework of intelligent agents -- so that whole field is viewed in exactly this process-oriented way.
[49:42 / 23]
Here is a little motivation for why the process view has gained such a strong foothold.
One justification is, of course, the computer: everything a computer does consists of algorithmic processes, so it is easy to draw that kind of analogy.
On the other hand, in chaos theory everything is also procedural in this sense.
And there are plenty of other arguments like these.
[50:23 / 24]
On this course, however, we approach natural complexity with patterns in mind.
Here we now attempt to tie together processes and patterns.
Let us look at this slide for a moment.
There are two axes here: first an axis of dimensional complexity, and then an axis of structural complexity.
Roughly speaking, structural complexity covers nonlinearity and structurally complicated things: a linear system sits here, simple and with perfectly well-known structure and behavior, and the more complicated the nonlinearities, the further out we move along the structural complexity axis.
Nature itself is structurally very complex; hand-built physical models are also structurally complex, since the nonlinearities can be identified and encoded into them, but compared to nature such a physical model is dimensionally simple, because it has only a few variables into which we try to squeeze all of nature's complexity.
In other words, we abstract away an enormous amount of the natural complexity, throw out a lot of variables, and end up close to zero on this axis -- only a few variables go into the model we build.
Well, when the model is complicated enough, we really have no option but to simulate it -- mathematical methods cannot say much about such nonlinearity at all -- and when we simulate it, we move down along the structural complexity axis, toward something simpler, but outward along the dimensional complexity axis, away from the origin, because to simulate the system we have to fix all the initial states and all the free parameters.
These are chosen so that they match the natural system as well as possible.
Once the model has been run through all sorts of situations, we get a large amount of data, which is typically always of the same form and can be treated with a uniform methodology -- that is, the data is structurally simple.
Typically it is dimensionally complex -- there is a lot of it.
At best, if we can find patterns in the data, we can simplify it, reduce the number of variables, and at the same time reduce the structure.
That is the goal: to find pattern models of this kind.
What exactly these patterns mean, we will come back to later, but the idea is that the pattern model is one that can hold within itself both the behavior of the physical model and the behavior of nature.
Since the data is what we obtain here, and what we model to get the pattern model, the data may also come directly from nature -- the measurements can be measurements of nature.
In a moment we will see that this is in some ways Kantian modeling.
In the sense that we fix something in advance: in this case we fix the form of the patterns to be extracted, but the structure inside a pattern is then more or less unambiguous -- it determines how the incoming data is interpreted, what kinds of patterns are found in it.
So those old philosophers are still relevant, yes.
It is not only the human perception mechanism: our machinery faces the same problems -- when we want it to model something automatically, to model complex data, we must have some structure onto which we start building the model -- and here these patterns will be that basic structure, our key components.
[55:44 / 25]
This is another of these counter-intuitions: traditionally, when a model is built, we want to simplify, to identify a single unambiguous model of the complex system, and to look for the "truth" about it -- to find the core of the system's behavior with all disturbances and interactions stripped away.
But now -- first of all, we know that all our systems genuinely interact with their environment all the time -- we want to model the interaction with the environment rather than the isolated system.
Because an isolated system alone is not interesting -- an isolated hare, for example, simply dies -- whereas a hare population shapes its environment and interacts with it.
So instead of seeking some truth that the data can never give us -- and here is another jab at philosophy: we must content ourselves with shadows of reality if we insist on seeking truth -- we pick out what is relevant, what actually shows up in the data; we can see the interaction effects between the variables directly from the data.
We can assume that everything interesting is visible -- provided we manage to pick the right variables for the system's behavior, everything of interest is ultimately present in the data.
So the starting point is that we do not talk about truth at all; we speak only of relevance modeling, and thereby avoid these philosophical pitfalls.
[58:09 / 26]
If you recall the earlier plane diagram, where we still had the ideal mixers, we can now approach the level of patterns -- we come back to this in practice next time -- and note that this modeling can be approached with statistical multivariate methods; in suitable environments we can then find, when appropriate, so-called sparsely coded features, which we can even name, and in this way we climb toward functionalities, toward the level of symbolic concepts.
But it is worth remembering that although in the exercises you will now work with principal component analysis and so on, next time we will find that, rather than us imposing whatever toolbox we happen to like onto natural behavior, this multivariate analysis -- specifically principal component analysis and sparse coding -- emerges from the system itself.
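As a small illustration of the multivariate toolbox referred to here, a bare-bones principal component analysis via the singular value decomposition could look like this; the data is synthetic and the variable counts are arbitrary placeholders:

```python
import numpy as np

# Synthetic, correlated data; sizes are arbitrary placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))

Xc = X - X.mean(axis=0)                       # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                               # principal directions (rows)
explained_variance = s**2 / len(Xc)           # variance captured by each direction
scores = Xc @ Vt.T                            # data expressed in the principal axes
```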
In other words, it is not the case that we have a toolbox whose requirements nature must be reconciled with, so that nature is forced to behave according to our prior assumptions.
Our claim is that what we have here is a Kantian-type compromise.
Let this be just an aside.
If someone has come across Kant's Critique of Pure Reason somewhere, one could say that this is in some sense comparable: we need a certain kind of theory-drivenness, but on the other hand the theory must then make room for the data.
[1:00:31 / 27]
Well, we already noted that for the expected-value operator to give something sensible, the data -- or the covariation of the data -- must be stationary, in the sense that the dependency relations of the past and the dependency relations of the future are assumed to stay the same.
For this stationarity to hold, the system must be stable in a broad sense.
This does not mean stability in the sense that the system must always settle to a fixed point, but it must be stable in the sense that it is able to respond when disturbances come from the environment ...
[cassette exchange]
... that is, it seeks a dynamic balance when disturbances act on it.
Note that only under sufficiently stable conditions can an emergent phenomenon of this sort actually emerge -- typically these emergent phenomena are very delicate, so in too harsh conditions they would never appear.
Well, we will return to these later.
One may of course ask whether it is wise to restrict ourselves to stable systems; indeed, this course does not deal with all mathematically possible systems, or all mathematically possible models, but only with physically meaningful ones.
It can be argued that all physically meaningful systems are stable at some level, because if they were unstable they would have exploded long ago -- they would no longer exist.
Or one can argue that some processes have indeed been explosions, but their effects have then spread throughout the universe so that we only see the outcome of the explosion here.
In that sense such a transient phenomenon is neither necessary nor useful to model, because we have no data about its behavior -- we only see the end result.
The other possible outcome of unstable behavior is that the system does not explode but goes extinct -- and we cannot build models of dead animals either; what is interesting is whatever has managed to survive until now, because that can perhaps really tell us something about how systems manage to survive.
[1:03:35 / 28]
We will return to this later, but it is worth remembering that static and dynamic balance are very different things.
On the surface they look much the same, and when people claim that equilibrium is not a strong enough framework for modeling complex systems, they are typically thinking only of static equilibria.
In a dynamic balance, by contrast, things happen all the time under the apparently stable surface.
A dynamic equilibrium is precisely a balance of tensions -- it is interesting because it is, as it were, ready to collapse at any moment unless something keeps reinforcing it.
As an extension of this concept of dynamic equilibrium, one can say that thermodynamic death is reached at the stage when the time derivatives of all quantities are zero.
So there is something of a paradox: we do not focus on static equilibria -- these neocybernetic systems focus on dynamic balances -- yet those dynamic balances turn into static equilibria, and when the first pattern's static equilibrium has, as it were, been exhausted, the focus moves on to the balance of the next pattern, and so on.
The end result is, perhaps surprisingly, something like a thermodynamic equilibrium at the far end of this cybernetic chain -- this allows a rather profound analysis, and we will briefly return to it later.
Well, so that these issues like balance the search process would be possible to the local players, so we have to interpret them jonkunlaisina diffuusioprosesseina, that is spoken in generalized diffuusioprosesseista future.
We can pretty much those we like the standard models.
But this generalization here, means that they are also -- those variables may be a way, even on issues like information variables, not just the physical variables, concentrations, or the other.
And they can be multi-dimensional diffuusioprosesseja.
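As a rough illustration of what iterating such a generalized, multi-dimensional diffusion process can look like numerically, here is a minimal Euler-Maruyama sketch; the drift matrix, noise level and dimension below are arbitrary placeholders of mine, not anything fixed by the lecture.

```python
import numpy as np

def simulate_diffusion(A, sigma, x0, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama simulation of dx = A x dt + sigma dW in several dimensions."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        dW = rng.normal(size=x.shape) * np.sqrt(dt)
        x = x + (A @ x) * dt + sigma * dW
        traj.append(x.copy())
    return np.array(traj)

# Arbitrary stable 3-dimensional drift (all eigenvalues in the left half-plane).
A = np.array([[-1.0,  0.2,  0.0],
              [ 0.0, -0.5,  0.1],
              [ 0.0,  0.0, -2.0]])
traj = simulate_diffusion(A, sigma=0.1, x0=[1.0, -1.0, 0.5])
print("late-time mean:", traj[2500:].mean(axis=0))
print("late-time std: ", traj[2500:].std(axis=0))
```

With a stable drift matrix the trajectory keeps fluctuating around zero: a dynamic rather than static balance.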
[1:06:32 / 29]
Well, once again a bit of supporting intuition.
This is the kind of familiar picture that comes up when teaching these complexity theories.
It says that there are, first of all, fixed-point systems, which are not interesting; then there are slightly more complex, periodic systems, which keep returning to the same states; and then there are chaotic systems.
About these chaotic systems we cannot say very much, so they are uninteresting; but the periodic systems we can know completely, so in that sense they are not interesting either -- complex systems form a kind of extremely narrow interface between these two uninteresting extremes.
It is a somewhat doubtful, vague notion, this edge of chaos -- that is, how do we manage it, how does a complex system stay in this interesting region.
This is precisely the basic, fundamental problem in complexity theory: this place for complex systems is a very unstable kind of place -- nearly all approaches slip either over to the side of chaos or over to the side of these simple systems.
How can natural systems remain on this interface, as if automatically, all the time?
Well, we will find that there is a certain kind of attractor here, and as cybernetic systems develop, the interface moves further towards the chaotic side.
The classics, such as Schrödinger's book "What Is Life", assume that life, or living systems, are characterized by being as far removed from equilibrium as possible -- that they are very unstable in that sense. One might almost say that this intuition is false: Schrödinger was, you see, thinking of static equilibrium, because otherwise he could not have argued that equilibrium means death; rather, it is the ability to stay in this dynamic balance, at the border between chaos and order, that is characteristic of life.
The second is Prigogine, who assumes that the more dissipative a system is -- the more energy it consumes from its environment -- the more alive it is, and the further from equilibrium it is, the more alive.
These all illuminate quite different points of view.
[1:10:12 / 30]
Well, here now is an oriental symbol -- it earns its place here because even though this oriental notion of balance is usually interpreted pretty much as a static equilibrium -- the idea that oriental medicine, for example, seeks balance in the human body -- the West does not think this through as thoroughly as it should.
It really is not a static balance; this oriental balance is something mystical, some kind of dynamic equilibrium -- and another interpretation of it is as an ordering principle.
It is a very profound idea indeed, this oriental idea of balance, which this kind of Western interpretation has not been able to formalize.
But on the other hand, not even oriental philosophy is able to formalize it: when these issues are approached, they lead to logical paradoxes, to koans.
[1:11:43 / 31]
Well, this is in a nutshell what has been established so far, or what we will encounter and discuss later on in this course.
These cybernetic structures have, in a certain way, underlying stability structures, attractors, which are stable dynamic constructs in the long term, even if momentarily they appear very sensitive and fluid.
Rather than talk of a dynamic balance alone, we talk of a balance of balances, that is, of balance of a higher order.
This is precisely in the sense that a cybernetic system is multi-level, and emergence has many levels, multiple ones.
And a cybernetic model of this sort covers a wider spectrum of relevant behavior.
[1:13:00 / 32]
And it is a model over this kind of local minima.
In other words, as I just noted, with these NP problems -- such as the travelling salesman problem -- one usually tries to find the single best solution, whereas the cybernetic framework seeks to identify some kind of pattern, a picture of what is common to all reasonably good solutions: those that are not optimal but are close to the optimum and acceptable.
Indeed, modeling these different options this way is intuitively close to Heraclitus's idea: when you model the river, you never model the same river twice; you model the idea of the river.
[1:14:09 / 33]
Here again is the same observation, that we only model physically meaningful systems, and those are stable.
That is an extremely small class of all possible mathematical systems.
We can easily justify this.
Suppose there are n state variables and think of them as dynamic variables whose corresponding poles, or modes, have been thrown into the complex plane at random -- for the overall system to be stable, every one of these randomly thrown poles must land in the left half-plane, because if even one pole, or mode, lies in the right half-plane, the overall system is unstable.
From this we get a more or less intuitive formula: the probability that all the randomly thrown poles land in the left half-plane is 1/2^n.
Did everyone get the idea here?
In any case, this justifies the claim that, in the mathematical sense, the set of models we are looking at is extremely narrow.
But all the systems of interest nevertheless fit into that extremely narrow range.
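A minimal Monte Carlo check of this 1/2^n intuition; the sampling distribution for the pole positions is an arbitrary choice of mine, and only its symmetry about the imaginary axis matters.

```python
import numpy as np

def prob_all_stable(n, trials=200_000, seed=1):
    """Estimate the probability that n randomly thrown poles all land in the
    left half of the complex plane (i.e. all real parts are negative)."""
    rng = np.random.default_rng(seed)
    real_parts = rng.normal(size=(trials, n))   # symmetric about zero
    return np.all(real_parts < 0, axis=1).mean()

for n in (1, 2, 5, 10):
    print(n, round(prob_all_stable(n), 4), 0.5 ** n)
```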
In itself this is not as restrictive as it may seem, since these complex systems are themselves control systems: when they are connected to an environment, which may originally have been unstable -- when they are coupled to it closely enough -- they may achieve a situation where the whole environment becomes stable. This is the very nature of these underlying cybernetic systems: an originally unstable system changes -- when it turns into a cybernetic system -- into a stable one.
[1:16:30 / 34]
Part of this is that when a cybernetic system has stabilized the signals it receives at a lower level, in practice pushing the signal variation towards a heat death, the system then starts to focus on what is still left over, that is, on a kind of higher-level equilibrium.
It seeks to stabilize that in turn.
It follows that in the end we end up with a balance of ever higher order, which we might call a thermodynamic heat death.
We will return at the end of the course to how this view of cybernetic systems is entirely consistent with thermodynamics.
So even though in cybernetic systems order typically grows and the regularities become sharper, if one takes the system together with its whole environment, then in this environment-plus-system the variables stabilize better, that is, move closer to heat death, that is, entropy grows.
After all, when the systems are delimited in the right way, the end result is that cybernetic systems, in the same manner as simple physical systems, tend towards the maximum of entropy.
[1:18:35 / 35]
Well, then to the intuition: since these are, after all, models built on tensions, the behavior in transient situations is described rather well by the intuition of an elastic system, a mechanical intuition -- that is, if they are deflected from the balance, there is a force returning them towards the balance.
Analogies can also be found on the electrical side: it turns out that if two systems interact with each other, then for the maximum amount of power to move between them without power being lost, the impedances between them have to be matched.
This may not say much to you yet, but we will return to these later.
[1:19:37 / 36]
Now -- fittingly, with some rumbling in the background -- we are coming to philosophy, and moving really quite far afield within science.
In other words, in order to move forward consistently in this modeling, we need some really fundamental principle that supports us.
If we can agree on such a principle -- here it is named the Pallas Athena hypothesis -- if this is acceptable, then we have a quite consistent path along which to move forward later.
But as to what this hypothesis means -- it is in itself a very controversial idea.
But keep in mind the question of whether it is acceptable to you.
You perhaps know the Gaia hypothesis; this is a bit similar to it.
Gaia is a goddess, and Lovelock and others have raised the idea that all the processes on Earth -- climatological, palaeontological, whatever processes there are, even volcanic eruptions and the rest -- could very easily wipe out all life on Earth.
But it appears that the Earth, or rather the Earth mother, the goddess Gaia, has directed all of these processes to behave in such a way that they somehow support life, and allow ever more complex forms of life to sit here on Earth.
So even though this Gaia, the Earth goddess, is somehow very unstable and mentally a little out of balance, she has nevertheless made it possible that, through all the disasters, life still exists here.
And the point of the Gaia hypothesis now is that it can in fact lead to very effective, very powerful models for climatological or terrestrial phenomena, if you assume that they are limited to those phenomena that allow for life on Earth.
So out of the whole potential range of behavior we may restrict attention to only those behaviors that are not too drastic.
Well, you can look into this Gaia hypothesis further yourselves.
This is a very questionable theory.
And just as questionable a theory is this Pallas Athena hypothesis, which relates to the fact that -- if Gaia was the Earth goddess -- Pallas Athena was the goddess of science.
Well, Wigner and Einstein, and all the other scientists in their turn, have at least said, and wondered, how it can be possible that mathematics is so powerful that it can tackle natural phenomena, that it is able to explain them.
And Einstein in particular once asked how it is possible that nature can be modeled at all.
How is it possible that the all-round complexity of this world can be compressed into something so simple that we can truly understand it and even build mathematical models of it?
This is really a complete mystery.
But this Pallas Athena hypothesis now assumes that this goddess protects us, and that science has not yet been exhausted.
That it may be that science still progresses further.
If this hypothesis can be accepted -- that science has not, on the large scale, come to a stop, that there are not merely small gaps left to fill -- then suddenly we have very powerful tools available.
We will return to them in a moment.
This is a bit like the parallel axiom in Euclidean geometry.
We could just as well assume that it is not valid and that, for example, nonlinearity is an essential part of all of nature, of all modeling of nature.
Then we get quite different results along quite different paths.
But if we take the Pallas Athena hypothesis seriously, then we end up in a very different world, one in which linearity pretty much dominates the phenomena.
Now, neocybernetics is explicitly based on the idea that, for example, linearity dominates --
[1:25:10 / 37]
Well, before we go into this linearity, a second intuition, which likewise follows from the Pallas Athena hypothesis.
A certain kind of determinism.
So if we measure data -- data being, after all, the only thing we can observe or collect -- then for science to develop, scientific progress must be based on this data collection.
For it to be possible to build models from this, it must be more or less unambiguous how the data is to be interpreted.
Because otherwise we risk exactly that post-modern ambiguity.
In other words, the data can be interpreted in different ways, and we then scatter off in different directions of interpretation.
For there to be one interpretation that alone, at least in the broad sense, is valid, the systems are required to be in some way non-random in nature.
So, in a particular way, these cybernetic systems must be a kind of natural mirror images -- we will go into the details more specifically later -- in such a way that they more or less uniquely describe the surrounding world.
[1:26:59 / 38]
Another, in itself quite intuitive, idea is that because the system and its environment are strongly married to each other, and the environment itself consists of other systems, these models have to be symmetrical in a particular way: what the model says about the environment and what it says about the system are more or less mirror images, interchangeable with each other.
[1:27:45 / 39]
Well, then the really most questionable point, the one drawing the most striking objections in this context: the linearity assumption.
Well okay, we can always imagine that if the system is in balance, it has somehow settled to a point that acts as its operating point with respect to the environment, so it can be linearized -- but this is a profound issue in itself, because we first have to get to that linearization point.
So why emphasize this linearity assumption so strongly -- and we will keep it as a guideline, staying linear as far as possible until nonlinearities have to be introduced -- why is this done?
The justification is that the category of nonlinear systems is so broad and so uncharted that one will never find any consistent unified theory, or any single class of models, that would in some way cover all possible nonlinearities.
Only on the side of linear theory is this possible.
[1:29:11 / 40]
Well, it is indeed a very fundamental starting point in these complexity studies -- almost the first sentence always states it -- that complex phenomena, or emergent phenomena, follow from nonlinearity at a lower level.
So this is a very, very fundamental difference -- although, if we look at a covariance or something similar, it is after all a product of variables, that is, a nonlinear function, and yet it can still be modeled linearly.
And one of the reasons is also this: if we are not interested in the processes themselves but only in the end results -- the outcome of the process settling into a balance, a dynamic balance -- then analyzing that balance can be much easier than analyzing the process itself.
So at the balance, this kind of linearity may in itself be sufficient.
[1:30:16 / 41]
Here are a few conclusions from this.
So from here on we will apply, on the one hand, this balance-seeking at every stage, and on the other hand the goal of linearity at every phase.
There is a wide range of heuristic, theoretical and practical arguments for these -- you can read about them -- but all in all, these starting points give a more or less unambiguous set of guidelines for which direction to favor, which direction to go.
[1:30:52 / 42]
Here is one example of what can follow if you have nonlinearity in the system.
I will run through this quickly -- it is quite a surprising result once we have two things in combination: nonlinearity and high dimensionality.
So, in what follows we will content ourselves with high dimensionality plus linearity -- because we know that even if the dimension is high, linearity will save us.
Consider, you see, a system that is very close to a linear model: a discrete-time model in which the next state s(k+1) is a function of the previous state s(k).
There is a matrix A that performs a linear mapping to the new state, and then there is just this one nonlinearity f.
If this f were not there, we would know completely, qualitatively, how the system behaves, whatever its dimension -- no problem there.
Well, now the nonlinearity is defined so that it only cuts off negative values: if s, or an element of s, is positive, it passes through as it is.
But if it is negative, the output is simply zero.
One might think that this system behaves even more simply than a linear system, because no variable, no element of the state, can now go negative.
Only the first quadrant, or hyper-quadrant, of the state space is available, and there the model is linear.
Doesn't this seem to narrow down the behavior?
However, it turns out that the behavior is much more complex than that of the linear model.
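A minimal sketch of the kind of system meant here, with an arbitrary matrix A of my own choosing; the only point is the structure s(k+1) = f(A s(k)), where f clips negative elements to zero.

```python
import numpy as np

def f(s):
    """The nonlinearity: positive elements pass through, negative ones become zero."""
    return np.maximum(s, 0.0)

def iterate(A, s0, steps=50):
    """Iterate the discrete-time model s(k+1) = f(A s(k))."""
    s = np.array(s0, dtype=float)
    states = [s.copy()]
    for _ in range(steps):
        s = f(A @ s)
        states.append(s.copy())
    return np.array(states)

# Arbitrary example matrix; the state stays in the positive orthant,
# yet the trajectory need not behave like that of a simple linear system.
A = np.array([[ 0.5,  1.2, -0.7],
              [-1.1,  0.3,  0.9],
              [ 0.8, -0.6,  0.4]])
print(iterate(A, s0=[1.0, 0.5, 0.2])[-5:])
```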
[1:32:53 / 43]
It can be shown that with an appropriate choice of the A matrix, this discrete-time model is capable of simulating any algorithm.
The idea is that the state s is a snapshot of the program: it holds the values of the variables and then the program counter.
Let us take an example of this.
[1:33:20 / 44]
Here is a program of this sort, written in a very simple language, but for that language there is a direct translator that can turn it into Matlab code, and into this kind of matrix.
So this matrix here is the translation of this program.
You can see here that first X is given some value and Y gets the value zero, and then the program counter enters a loop: as long as X is still greater than zero, X is decremented by one, Y is toggled, and execution jumps back to the start of the loop -- so in practice X keeps going down until it reaches zero, at which point we pop out of the loop and the program stops.
But the point is that whenever X is decremented, Y is also changed, so that if it was one it becomes zero, and vice versa.
It follows that, depending on the parity of X, the end value of Y is either one or zero.
So one could say that this is a generalized parity function.
If you remember neural network theory, you know that XOR, that is, a kind of parity with just two arguments, was already quite a test problem; here the value of X can be any integer, and the program always returns in Y either a one or a zero, depending on whether X is even or odd.
Here the A matrix is fixed; the state s just carries inside it the value of X in the initial state, and then the program counter.
And when the process stops -- settles into a balance, as it always does -- then Y holds something: either zero or one.
This has now been tested by varying the value of X and also the value of the program counter -- since s is just an arbitrary vector, one can iterate it through this system and see what it converges to.
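The program itself is easy to restate in ordinary code; this is only the high-level behavior, not the translation into an A matrix described above.

```python
def generalized_parity(x):
    """Decrement X and toggle Y until X reaches zero; the final Y is the parity of x."""
    X, Y = x, 0
    while X > 0:
        X -= 1
        Y = 1 - Y   # toggle between 0 and 1
    return Y

print([generalized_parity(x) for x in range(8)])   # [0, 1, 0, 1, 0, 1, 0, 1]
```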
[1:35:48 / 45]
Well this is the end result.
You see that the value of the program counter is on this axis, and the initial state, that is, the value of X, is on this axis.
You can see that the classic parity function is defined only at the integer points, and if the program counter is initially one and X has an integer value, then the system converges so that zero goes to zero, one to one, two to zero, three to one, and so forth -- that is, Y, the end result, is one if the initial value of X is odd.
But we can also plot the final values of Y for other initial values, between these well-defined points -- in other words, what we get here is a kind of generalized parity function in this high-dimensional space.
Well, this in itself was only one experiment of this sort,
[1:37:03 / 46]
but the point is that when this model framework has this kind of computational power, a Pandora's box is suddenly opened.
In other words, if an arbitrary algorithm can be implemented as a matrix in this way, then we can also put the so-called universal machine into this form.
A universal machine is one that takes as a parameter the code of another program, simulates it, and returns the value that the function or algorithm inside it would return.
So such a universal machine has now been realized here in practice.
And this universal machine is used in such a way that it has some algorithm inside it, and one interprets the results of that algorithm.
This gets a little complicated now, but the question is the following --
[1:38:17 / 47]
the same construction as in Gödel's incompleteness theorem. That is, suppose one could build an algorithm that could say something about the final state of this system, given its A matrix -- an algorithm able to decide, for an arbitrary input, whether the system will ever stop or not. Then we could feed that deciding algorithm into this very system, which is wired so that if the algorithm says about the system itself that it is going to stop, the system goes into an eternal loop and never stops, and if the algorithm says that it does not stop, the system stops immediately.
These are the halting theorems, and so forth.
However, the end result is that --
[1:39:32 / 48]
you can look at the report for the details -- but the essential point is that this system is such that even if future systems theorists spent most of their time on this analysis, there will never be a method that can say, for every input, whether this system is stable or not; that remains an open question for each case.
So, with a nonlinearity as simple as the one we examined, once it has a sufficient number of dimensions -- here it is something over a 300-dimensional system, meaning some three hundred dimensions suffice to accommodate this universal machine -- its behavior is qualitatively out of reach: nothing much can really be said about this system any more, since within this framework all the algorithms run into that halting problem.
[1:40:33 / 49]
Well, here is the modeling strategy that will be carried out over the course.
[1:40:41 / 50]
In short, the idea is that we started, as it were, from a situation where one does not really know where the stairs will lead; if we now move according to these guidelines, we step into the dark -- we do not know in advance where it will take us, but we still proceed forward consistently.
It is perhaps worthwhile, at the end of the course, to look at these slide sets again, simply in the sense that they may then bring to mind more than was promised.
And all learning, at least cybernetic learning, is at root, you see, iterative learning.
In other words, new information piles up on top of a growing consensus.
Well, thank you.
[1:41:37 / -]
|
cc4a1e0959517b19 | Schrödinger equation
Alternative Title: Schrödinger wave equation
Schrödinger equation, the fundamental equation of the science of submicroscopic phenomena known as quantum mechanics. The equation, developed (1926) by the Austrian physicist Erwin Schrödinger, has the same central importance to quantum mechanics as Newton’s laws of motion have for the large-scale phenomena of classical mechanics.
Essentially a wave equation, the Schrödinger equation describes the form of the probability waves (or wave functions [see de Broglie wave]) that govern the motion of small particles, and it specifies how these waves are altered by external influences. Schrödinger established the correctness of the equation by applying it to the hydrogen atom, predicting ...
|
71ba0e91ddd8beea |
Consider this penny on my desk. It is a particular piece of metal, well described by statistical mechanics, which assigns to it a state, namely the density matrix $\rho_0=\frac{1}{Z}e^{-\beta H}$ (in the simplest model). This is an operator in a space of functions depending on the coordinates of a huge number $N$ of particles.
The ignorance interpretation of statistical mechanics, the orthodoxy to which all introductions to statistical mechanics pay lip service, claims that the density matrix is a description of ignorance, and that the true description should be one in terms of a wave function; any pure state consistent with the density matrix should produce the same macroscopic result.
However, it would be very surprising if Nature were to change its behavior depending on how much we ignore. Thus the talk about ignorance must have an objective, formalizable basis independent of anyone's particular ignorant behavior.
On the other hand, statistical mechanics always works exclusively with the density matrix (except in the very beginning, where it is motivated). Nowhere (except there) does one make any use of the assumption that the density matrix expresses ignorance. Thus it seems to me that the whole concept of ignorance is spurious, a relic of the early days of statistical mechanics.
Thus I'd like to invite the defenders of orthodoxy to answer the following questions:
(i) Can the claim be checked experimentally that the density matrix (a canonical ensemble, say, which correctly describes a macroscopic system in equilibrium) describes ignorance? - If yes, how, and whose ignorance? - If not, why is this ignorance interpretation assumed though nothing at all depends on it?
(ii) In a thought experiment, suppose Alice and Bob have different amounts of ignorance about a system. Thus Alice's knowledge amounts to a density matrix $\rho_A$, whereas Bob's knowledge amounts to a density matrix $\rho_B$. Given $\rho_A$ and $\rho_B$, how can one check in principle whether Bob's description is consistent with that of Alice?
(iii) How does one decide whether a pure state $\psi$ is adequately represented by a statistical mechanics state $\rho_0$? In terms of (ii), assume that Alice knows the true state of the system (according to the ignorance interpretation of statistical mechanics a pure state $\psi$, corresponding to $\rho_A=\psi\psi^*$), whereas Bob only knows the statistical mechanics description, $\rho_B=\rho_0$.
Presumably, there should be a kind of quantitative measure $M(\rho_A,\rho_B)\ge 0$ that vanishes when $\rho_A=\rho_B$ and tells how compatible the two descriptions are. Otherwise, what can it mean that two descriptions are consistent? However, the mathematically natural candidate, the relative entropy (= Kullback-Leibler divergence) $M(\rho_A,\rho_B)$, the trace of $\rho_A\log\frac{\rho_A}{\rho_B}$, [edit: I corrected a sign mistake pointed out in the discussion below] does not work. Indeed, in the situation (iii), $M(\rho_A,\rho_B)$ equals the expectation of $\beta H+\log Z$ in the pure state; this is minimal in the ground state of the Hamiltonian. But this would say that the ground state would be most consistent with the density matrix of any temperature, an unacceptable conclusion.
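A small numerical sketch of the issue just described, for an arbitrary toy Hamiltonian of my own choosing (not anything from the question): it computes $\mathrm{tr}(\rho_A(\log \rho_A - \log \rho_B))$ with $\rho_A$ a pure state and $\rho_B$ the canonical density matrix, and confirms that the measure reduces to $\beta\langle H\rangle+\log Z$ and is smallest for the ground state, whatever the temperature.

```python
import numpy as np
from scipy.linalg import expm, logm

def canonical_rho(H, beta):
    """Canonical ensemble rho_0 = exp(-beta H) / Z."""
    w = expm(-beta * H)
    return w / np.trace(w)

def relative_entropy(rho_a, rho_b, eps=1e-12):
    """tr( rho_a (log rho_a - log rho_b) ), with 0 log 0 treated as 0."""
    vals, vecs = np.linalg.eigh(rho_a)
    log_a = sum(np.log(v) * np.outer(vecs[:, i], vecs[:, i].conj())
                for i, v in enumerate(vals) if v > eps)
    return float(np.real(np.trace(rho_a @ (log_a - logm(rho_b)))))

# Arbitrary 4-level toy Hamiltonian (diagonal for simplicity) and temperature.
H = np.diag([0.0, 1.0, 2.0, 3.0])
beta = 0.7
rho_0 = canonical_rho(H, beta)
Z = np.trace(expm(-beta * H)).real

for k in range(4):                       # pure states = energy eigenstates here
    psi = np.zeros(4); psi[k] = 1.0
    rho_psi = np.outer(psi, psi)
    d = relative_entropy(rho_psi, rho_0)
    print(k, round(d, 6), round(beta * H[k, k] + np.log(Z), 6))
# The measure equals beta*<H> + log Z and is smallest for the ground state (k = 0),
# regardless of the chosen temperature -- the issue raised above.
```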
Edit: After reading the paper by E.T. Jaynes pointed to in the discussion below, I can make more precise the query in (iii): In the terminology of p.5 there, the density matrix $\rho_0$ represents a macrostate, while each wave function $\psi$ represents a microstate. The question is then: When may (or may not) a microstate $\psi$ be regarded as a macrostate $\rho_0$ without affecting the predictability of the macroscopic observations? In the above case, how do I compute the temperature of the macrostate corresponding to a particular microstate $\psi$ so that the macroscopic behavior is the same - if it is, and which criterion allows me to decide whether (given $\psi$) this approximation is reasonable?
An example where it is not reasonable to regard $\psi$ as a canonical ensemble is if $\psi$ represents a composite system made of two pieces of the penny at different temperatures. Clearly no canonical ensemble can describe this situation macroscopically correctly. Thus the criterion sought must be able to decide between a state representing such a composite system and the state of a penny of uniform temperature, and in the latter case, must give a recipe for how to assign a temperature to $\psi$, namely the temperature that nature allows me to measure.
The temperature of my penny is determined by Nature, hence must be determined by a microstate that claims to be a complete description of the penny.
I have never seen a discussion of such an identification criterion, although it is essential if one wants to maintain the idea - underlying the ignorance interpretation - that a completely specified quantum state must be a pure state.
Part of the discussion on this is now at:
Edit (March 11, 2012): I accepted Nathaniel's answer as satisfying under the given circumstances, though he forgot to mention a fourth possibility that I prefer; namely that the complete knowledge about a quantum system is in fact described by a density matrix, so that microstates are arbitrary density matrices and a macrostate is simply a density matrix of a special form by which an arbitrary microstate (density matrix) can be well approximated when only macroscopic consequences are of interest. These special density matrices have the form $\rho=e^{-S/k_B}$ with a simple operator $S$ - in the equilibrium case a linear combination of 1, $H$ (and various number operators $N_j$ if conserved), defining the canonical or grand canonical ensemble. This is consistent with all of statistical mechanics, and has the advantage of simplicity and completeness, compared to the ignorance interpretation, which needs the additional qualitative concept of ignorance and with it all sorts of questions that are too imprecise or too difficult to answer.
Is this not the same problem as the MaxEnt school "runs into" (scare quotes because they don't really) that physics seems to change depending on how much one chooses to ignore? The resolution there is that ultimately one is doing science, so one needs a condition like "this set of control variables is empirically sufficient to control the outputs". – genneth Mar 6 '12 at 15:15
Science must be objective, observer independent, hence it should not depend on the choices of an observer. So whatever choices there are, there should be an objective way of assessing them. - I analyzed Max Entropy in Section 10.7 of my book Classical and Quantum Mechanics via Lie algebras, and found it wanting: if you choose to ignore things that you shouldn't (such as the energy content) you get completely wrong results in clear contradiction to experiment. To get a correct theory you must choose to know at least everything that makes a difference to the system! – Arnold Neumaier Mar 6 '12 at 15:24
@ArnoldNeumaier yes, but "everything that makes a difference to the system [as measured by macroscopic instruments]" != everything. MaxEnt is founded precisely on ignoring the microscopic details that do not make any difference to the macroscopic state, while not ignoring anything that does. Ignoring things that don't make any difference is good, because it means you don't have to calculate them! – Nathaniel Mar 6 '12 at 18:27
Arnold, perhaps this a minor point, but use of the canonical ensemble implies to me the penny is in thermal equilibrium with an environment. This would mean that the penny is entangled with the environment and therefore it would not be described by a pure state. Your questions do not seem to be as sharp if they are posed to the microcanonical ensemble. – BebopButUnsteady Mar 6 '12 at 19:19
@BebopButUnsteady: The penny is by assumption in thermal equilibrium, but need not be in equilibrium with the environment (e.g, if I just opened the window, thereby changing the environment.) - But any macroscopic body (not only a penny, and not only in a canonical ensemble, and even if far from equilibrium) is always entangled with its environment. The consequence is that no macroscopic object can be assigned a pure state, not even in principle. But this flatly contradicts the ignorance interpretation of statistical mechanics. Thus more things to defend for the upholders of orthodoxy! – Arnold Neumaier Mar 6 '12 at 19:36
I wouldn't say the ignorance interpretation is a relic of the early days of statistical mechanics. It was first proposed by Edwin Jaynes in 1957 (see, papers 9 and 10, and also number 36 for a more detailed version of the argument) and proved controversial up until fairly recently. (Jaynes argued that the ignorance interpretation was implicit in the work of Gibbs, but Gibbs himself never spelt it out.) Until recently, most authors preferred an interpretation in which (for a classical system at least) the probabilities in statistical mechanics represented the fraction of time the system spends in each state, rather than the probability of it being in a particular state at the present time. This old interpretation makes it impossible to reason about transient behaviour using statistical mechanics, and this is ultimately what makes switching to the ignorance interpretation useful.
In response to your numbered points:
(i) I'll answer the "whose ignorance?" part first. The answer to this is "an experimenter with access to macroscopic measuring instruments that can measure, for example, pressure and temperature, but cannot determine the precise microscopic state of the system." If you knew precisely the underlying wavefunction of the system (together with the complete wavefunction of all the particles in the heat bath if there is one, along with the Hamiltonian for the combined system) then there would be no need to use statistical mechanics at all, because you could simply integrate the Schrödinger equation instead. The ignorance interpretation of statistical mechanics does not claim that Nature changes her behaviour depending on our ignorance; rather, it claims that statistical mechanics is a tool that is only useful in those cases where we have some ignorance about the underlying state or its time evolution. Given this, it doesn't really make sense to ask whether the ignorance interpretation can be confirmed experimentally.
(ii) I guess this depends on what you mean by "consistent with." If two people have different knowledge about a system then there's no reason in principle that they should agree on their predictions about its future behaviour. However, I can see one way in which to approach this question. I don't know how to express it in terms of density matrices (quantum mechanics isn't really my thing), so let's switch to a classical system. Alice and Bob both express their knowledge about the system as a probability density function over $x$, the set of possible states of the system (i.e. the vector of positions and velocities of each particle) at some particular time. Now, if there is no value of $x$ for which both Alice and Bob assign a positive probability density then they can be said to be inconsistent, since every state that Alice accepts the system might be in Bob says it is not, and vice versa. If any such value of $x$ does exist then Alice and Bob can both be "correct" in their state of knowledge if the system turns out to be in that particular state. I will continue this idea below.
(iii) Again I don't really know how to convert this into the density matrix formalism, but in the classical version of statistical mechanics, a macroscopic ensemble assigns a probability (or a probability density) to every possible microscopic state, and this is what you use to determine how heavily represented a particular microstate is in a given ensemble. In the density matrix formalism the pure states are analogous to the microscopic states in the classical one. I guess you have to do something with projection operators to get the probability of a particular pure state out of a density matrix (I did learn it once but it was too long ago), and I'm sure the principles are similar in both formalisms.
I agree that the measure you are looking for is $D_\textrm{KL}(A||B) = \sum_i p_A(i) \log \frac{p_A(i)}{p_B(i)}$. (I guess this is $\mathrm{tr}(\rho_A (\log \rho_A - \log \rho_B))$ in the density matrix case, which looks like what you wrote apart from a change of sign.) In the case where A is a pure state, this just gives $-\log p_B(i)$, the negative logarithm of the probability that Bob assigns to that particular pure state. In information theory terms, this can be interpreted as the "surprisal" of state $i$, i.e. the amount of information that must be supplied to Bob in order to convince him that state $i$ is indeed the correct one. If Bob considers state $i$ to be unlikely then he will be very surprised to discover it is the correct one.
If B assigns zero probability to state $i$ then the measure will diverge to infinity, meaning that Bob would take an infinite amount of convincing in order to accept something that he was absolutely certain was false. If A is a mixed state, this will happen as long as A assigns a positive probability to any state to which B assigns zero probability. If A and B are the same then this measure will be 0. Therefore the measure $D_\textrm{KL}(A||B)$ can be seen as a measure of how "incompatible" two states of knowledge are. Since the KL divergence is asymmetric I guess you also have to consider $D_\textrm{KL}(B||A)$, which is something like the degree of implausibility of B from A's perspective.
I'm aware that I've skipped over some things, as there was quite a lot to write and I don't have much time to do it. I'll be happy to expand it if any of it is unclear.
Edit (in reply to the edit at the end of the question): The answer to the question "When may (or may not) a microstate $\psi$ be regarded as a macrostate $\rho_0$ without affecting the predictability of the macroscopic observations?" is "basically never." I will address this in classical mechanics terms because it's easier for me to write in that language. Macrostates are probability distributions over microstates, so the only time a macrostate can behave in the same way as a microstate is if the macrostate happens to be a fully peaked probability distribution (with entropy 0, assigning $p=1$ to one microstate and $p=0$ to the rest), and to remain that way throughout the time evolution.
You write in a comment "if I have a definite penny on my desk with a definite temperature, how can it have several different pure states?" But (at least in Jaynes' version of the MaxEnt interpretation of statistical mechanics), the temperature is not a property of the microstate but of the macrostate. It is the partial differential of the entropy with respect to the internal energy. Essentially what you're doing is (1) finding the macrostate with the maximum (information) entropy compatible with the internal energy being equal to $U$, then (2) finding the macrostate with the maximum entropy compatible with the internal energy being equal to $U+dU$, then (3) taking the difference and dividing by $dU$. When you're talking about microstates instead of macrostates the entropy is always 0 (precisely because you have no ignorance) and so it makes no sense to do this.
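A minimal numerical illustration of the three-step procedure just described, using arbitrary toy energy levels and $k_B=1$ (my own assumptions, not anything from the answer): the maximum-entropy macrostate at fixed mean energy is the canonical distribution, and differencing its entropy against its energy recovers $1/T=\beta$.

```python
import numpy as np

levels = np.array([0.0, 1.0, 2.0, 3.0])       # arbitrary toy energy levels, k_B = 1

def macrostate(beta):
    """Maximum-entropy distribution at fixed mean energy = canonical distribution."""
    p = np.exp(-beta * levels)
    p /= p.sum()
    U = np.dot(p, levels)                     # internal energy of the macrostate
    S = -np.dot(p, np.log(p))                 # its information entropy
    return U, S

beta = 0.7
U1, S1 = macrostate(beta)                     # step (1): max-entropy state at energy U
U2, S2 = macrostate(beta - 1e-6)              # step (2): slightly higher energy U + dU
print("dS/dU =", (S2 - S1) / (U2 - U1))       # step (3): difference quotient
print("beta  =", beta)                        # agrees: 1/T = beta
```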
Now you might want to say something like "but if my penny does have a definite pure state that I happen to be ignorant of, then surely it would behave in exactly the same way if I did know that pure state." This is true, but if you knew precisely the pure state then you would (in principle) no longer have any need to use temperature in your calculations, because you would (in principle) be able to calculate precisely the fluxes in and out of the penny, and hence you'd be able to give exact answers to the questions that statistical mechanics can only answer statistically.
Of course, you would only be able to calculate the penny's future behaviour over very short time scales, because the penny is in contact with your desk, whose precise quantum state you (presumably) do not know. You will therefore have to replace your pure-state-macrostate of the penny with a mixed one pretty rapidly. The fact that this happens is one reason why you can't in general simply replace the mixed state with a single "most representative" pure state and use the evolution of that pure state to predict the future evolution of the system.
Edit 2: the classical versus quantum cases. (This edit is the result of a long conversation with Arnold Neumaier in chat, linked in the question.)
In most of the above I've been talking about the classical case, in which a microstate is something like a big vector containing the positions and velocities of every particle, and a macrostate is simply a probability distribution over a set of possible microstates. Systems are conceived of as having a definite microstate, but the practicalities of macroscopic measurements mean that for all but the simplest systems we cannot know what it is, and hence we model it statistically.
In this classical case, Jaynes' arguments are (to my mind) pretty much unassailable: if we lived in a classical world, we would have no practical way to know precisely the position and velocity of every particle in a system like a penny on a desk, and so we would need some kind of calculus to allow us to make predictions about the system's behaviour in spite of our ignorance. When one examines what an optimal such calculus would look like, one arrives precisely at the mathematical framework of statistical mechanics (Boltzmann distributions and all the rest). By considering how one's ignorance about a system can change over time one arrives at results that (it seems to me at least) would be impossible to state, let alone derive, in the traditional frequentist interpretation. The fluctuation theorem is an example of such a result.
In a classical world there would be no reason in principle why we couldn't know the precise microstate of a penny (along with that of anything it's in contact with). The only reasons for not knowing it are practical ones. If we could overcome such issues then we could predict the microstate's time-evolution precisely. Such predictions could be made without reference to concepts such as entropy and temperature. In Jaynes' view at least, these are purely macroscopic concepts and don't strictly have meaning on the microscopic level. The temperature of your penny is determined both by Nature and by what you are able to measure about Nature (which depends on the equipment you have available). If you could measure the (classical) microstate in enough detail then you would be able to see which particles had the highest velocities and thus be able to extract work via a Maxwell's demon type of apparatus. Effectively you would be partitioning the penny into two subsystems, one containing the high-energy particles and one containing the lower-energy ones; these two systems would effectively have different temperatures.
My feeling is that all of this should carry over on to the quantum level without difficulty, and indeed Jaynes presented much of his work in terms of the density matrix rather than classical probability distributions. However there is a large and (I think it's fair to say) unresolved subtlety involved in the quantum case, which is the question of what really counts as a microstate for a quantum system.
One possibility is to say that the microstate of a quantum system is a pure state. This has a certain amount of appeal: pure states evolve deterministically like classical microstates, and the density matrix can be derived by considering probability distributions over pure states. However the problem with this is distinguishability: some information is lost when going from a probability distribution over pure states to a density matrix. For example, there is no experimentally distinguishable difference between the mixed states $\frac{1}{2}(\mid \uparrow \rangle \langle \uparrow \mid + \mid \downarrow \rangle \langle \downarrow \mid)$ and $\frac{1}{2}(\mid \leftarrow \rangle \langle \leftarrow \mid + \mid \rightarrow \rangle \langle \rightarrow \mid)$ for a spin-$\frac{1}{2}$ system. If one considers the microstate of a quantum system to be a pure state then one is committed to saying there is a difference between these two states, it's just that it's impossible to measure. This is a philosophically difficult position to maintain, as it's open to being attacked with Occam's razor.
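The indistinguishability claim above is easy to verify directly: both mixtures yield exactly the same density matrix, $I/2$ (a quick numpy check, nothing more).

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
right = (up + down) / np.sqrt(2)          # spin along +x
left = (up - down) / np.sqrt(2)           # spin along -x

def proj(psi):
    """Projector |psi><psi|."""
    return np.outer(psi, psi.conj())

rho_z = 0.5 * (proj(up) + proj(down))
rho_x = 0.5 * (proj(left) + proj(right))
print(np.allclose(rho_z, rho_x))          # True: both equal the maximally mixed state I/2
```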
However, this is not the only possibility. Another possibility is to say that even pure quantum states represent our ignorance about some underlying, deeper level of physical reality. If one is willing to sacrifice locality then one can arrive at such a view by interpreting quantum states in terms of a non-local hidden variable theory.
Another possibility is to say that the probabilities one obtains from the density matrix do not represent our ignorance about any underlying microstate at all, but instead they represent our ignorance about the results of future measurements we might make on the system.
I'm not sure which of these possibilities I prefer. The point is just that on the philosophical level the ignorance interpretation is trickier in the quantum case than in the classical one. But in practical terms it makes very little difference - the results derived from the much clearer classical case can almost always be re-stated in terms of the density matrix with very little modification.
Thanks for the clarification on the origins. The problem with your answer to (iii) is that in the particular case mentioned in my edited statement on (iii), the ground state would be the most consistent pure state, irrespective of temperature. Thus the K/L measure doesn't allow me to assess whether treating the pure state $\psi$ as a canonical ensemble (if I am only interested in macroscopic consequences) is or isn't acceptable. – Arnold Neumaier Mar 6 '12 at 17:33
The only lesson to draw from this is that it isn't always sensible to try and take a single "most representative" pure state from a probability distribution and expect it to have similar properties. If you're interested in macroscopic properties you should be calculating expectations. If there is a pure state whose properties (or at least the ones you're interested in) behave similarly to expectations calculated from the density matrix then you'd be justified in what you're triyng to do. I agree that the KL measure by itself doesn't tell you this, of course. – Nathaniel Mar 6 '12 at 18:09
But if I have a definite penny on my desk with a definite temperature, how can it have several different pure states? Either this penny has a particular wave function $\psi$ which gives its complete quantum mechanical description (even though we are never able to say which one it is), then this state must somehow have an associated temperature , since Nature knows this temperature, and the description is complete. - Or such a unique $\psi$ doesn't exist, in which case the concept of microstates breaks down, and there is only the density matrix to describe the system. – Arnold Neumaier Mar 6 '12 at 18:15
In Jaynes' view, the macrostate is a probability distribution over the microstates, and the temperature is a property of the macrostate, not the microstate. $T=\partial S/\partial U$, where $S$ is the entropy of the macrostate. If we completely knew the microstate we would be talking about a probability distribution where one state has $p=1$ and the rest 0. There would be no entropy, and hence no temperature. – Nathaniel Mar 6 '12 at 18:30
In ignorance terms, $\partial S/\partial U$ means something like "if I added a little bit more energy to this penny, how much more ignorance would I then have about its microstate?" I will update my answer to make some of this clearer. – Nathaniel Mar 6 '12 at 18:32
I'll complete @Nathaniel's answer with the fact that 'knowledge' can have physical implications linked with the behaviour of Nature. The problem goes back to Maxwell's demon, who converts his knowledge of the system into work. Recent works (like arXiv:0908.0424, The work value of information) show that the information-theoretic entropies defining the knowledge of the system are connected to the work which is extractable, in the same way as the physical entropies are.
To sum all this up in a few words: "Nature [does not] change its behaviour depending on how much we ignore", but "how much we ignore" changes the amount of work we can extract from Nature.
Indeed. And to see a really great example of how our knowledge of a natural system can affect our ability to extract work from it, read this paper (by Edwin Jaynes): – Nathaniel Mar 6 '12 at 16:05
Thanks for the reference. – Frédéric Grosshans Mar 6 '12 at 16:56
@Frederic: Then you might also be interested in Chapter 10.1 of my book Classical and Quantum Mechanics via Lie algebras, where I discuss the Gibbs paradox without any reference to anyone's knowledge. – Arnold Neumaier Mar 6 '12 at 17:28
@ArnoldNeumaier : Thanks for the reference. I've just read the chapter 10.1. For me (but I'm biased towards information theory), the choice of a description level is precisely what is related to the physicist's knowledge. But I agree that it is a (useful) philosophical debate, and the whole question is linked to the study of the model choice itself. – Frédéric Grosshans Mar 7 '12 at 18:21
By the way, the paper linked to in my answer is not directly related to Gibbs paradox, but is a computation of the work which can (probabilistically) be extracted from a system on which we have a partial knowledge (quantified by Shannon/Smooth-Rényi entropies) – Frédéric Grosshans Mar 7 '12 at 18:21
When it comes to discussion of these matters, I make the following comment, which starts with a citation from Landau-Lifshitz, book 5, chapter 5:
The averaging by means of the statistical matrix ... has a twofold nature. It comprises both the averaging due to the probabilistic nature of the quantum description (even when as complete as possible) and the statistical averaging necessitated by the incompleteness of our information concerning the object considered.... It must be borne in mind, however, that these constituents cannot be separated; the whole averaging procedure is carried out as a single operation, and cannot be represented as the result of successive averagings, one purely quantum-mechanical and the other purely statistical.
... and the following ...
It must be emphasized that the averaging over various $\psi$ states, which we have used in order to illustrate the transition from a complete to an incomplete quantum-mechanical description has only a very formal significance. In particular, it would be quite incorrect to suppose that the description by means of the density matrix signifies that the subsystem can be in various $\psi$ states with various probabilities and that the average is over these probabilities. Such a treatment would be in conflict with the basic principles of quantum mechanics.
So we have two statements:
Statement A: You cannot "untie" quantum mechanical and statistical uncertainty in density matrix.
(It is just a restatement of the citations above.)
Statement B: Quantum mechanical uncertainty cannot be expressed in terms of mere "ignorance" about a system.
(I'm sure that this is self-evident from all that we know about quantum mechanics.)
Therefore: Uncertainty in density matrix cannot be expressed in terms of mere "ignorance" about a system.
The conclusion does not follow from the premises. I could just as easily say "1. quantum and statistical uncertainty cannot be untied in the density matrix formalism. 2. the uncertainty in a density matrix cannot be expressed as mere 'quantum' uncertainty (otherwise it would be a pure state). Therefore, 3. uncertainty in the density matrix cannot be expressed in terms of mere 'quantum' uncertainty." A much more reasonable conclusion is that some of the uncertainty in the density matrix is quantum and some is statistical; it's just impossible to untie them. – Nathaniel Mar 6 '12 at 16:16
@Nathaniel I agree with your statement 3 and see no problem with it. It doesn't contradict anything. And also it doesn't in any way refute my statement. While the "much more reasonable conclusion" is just restatement of statement 1. – Kostya Mar 6 '12 at 16:25
@Nathaniel: Why should your point 2 in your comment be true? Surely a density matrix is a quantum object and expresses quantum uncertainty. The success of statistical mechanics together with the fact that you cannot untie the information in a density matrix rather suggests that the density matrix is the irreducible and objective quantum information, and the pure state is only a very special, rarely realized case. – Arnold Neumaier Mar 6 '12 at 17:18
@Kostya, sorry - in that case I misunderstood - I interpreted you as saying that none of the uncertainty in the density matrix can be expressed in terms of ignorance. If you were only saying that some of it can't then no problem. (Though having said that, for someone who supports a non-local hidden variable interpretation, it can all be expressed as ignorance. Some people might find that more palatable than abandoning locality; I'm not sure whether I do or not.) – Nathaniel Mar 6 '12 at 17:52
@ArnoldNeumaier consider a machine that mechanically flips a coin, then based on the result prepares an electron in one pure state (call it $|A\rangle$) or another ($|B\rangle$). To model the state of an electron from this machine you would use the density matrix $\frac{1}{2}\left( |A\rangle\langle A| + |B \rangle \langle B| \right)$. Surely this represents both the quantum uncertainty inherent in the pure states and your classical uncertainty about the outcome of a (hidden) coin flip. So at least in some situations some of the density matrix's uncertainty is ignorance. – Nathaniel Mar 6 '12 at 18:01
The ignorance interpretation of the density matrix was introduced by von Neumann in close analogy with the ignorance interpretation in earlier classical statistical mechanics, where probabilities were associated with ignorance. But in quantum theory probabilities are intrinsic, not related to our ignorance.
Dirac and Landau introduced the quantum mechanical interpretation of the density matrix as the more general description of the quantum state of a system. Feynman reasoned that the usual wavefunction theory works only when one considers isolated systems and ignores the rest of the universe.
Prigogine and other members of the Brussels school have shown that wavefunction theory only applies to stable quantum systems but not to unstable ones, which require density matrices outside the Hilbert space.
In this modern perspective, the old supposition that the quantum state is given by some $|\Psi \rangle$ is merely a supposition based on ignorance and approximations of the underlying micro state given by a density matrix. Regarding your three questions:
(i) No it can't, even if we ignore the recent results on unstable quantum systems and focus on simpler systems. The old ignorance interpretation is a non-scientific hypothesis, because it first assumes the existence of a pure state and next claims that this hypothetical pure state cannot be measured. This is not different from hidden variables approaches to QM or from parodies of religion based on the famous invisible pink unicorn.
(ii) $Tr\{ O \rho_A \} = Tr\{ O \rho_B \}$ for any observable $O$, if both descriptions are mutually consistent, which implies either $\rho_A = \rho_B$ or that one of them contains redundant information. See (iii) for some detail.
(iii) The problem in quantum statistical mechanics is that two completely different interpretations of the density matrix are usually confused in the literature. There are two kinds of contractions of the description of an atomic-molecular system: exact and inexact. Suppose that the vector $(\mathbf{n})$ of variables describes the state of a given system. We can split this into two sets $(\mathbf{n}_1,\mathbf{n}_2)$; suppose now that we contract the description using only $(\mathbf{n}_1)$ and ignoring the rest. If the dynamical description of the system is unchanged by using this contracted description $(\mathbf{n}_1)$ instead of the whole $(\mathbf{n})$, then the contraction is exact and $(\mathbf{n}_2)$ are redundant variables; otherwise $(\mathbf{n}_1)$ only provides an approximate description of the system. The redundant variables denote variables that have relaxed on the time scale chosen to study the system. Let me give a simple example.
Consider a simple nonequilibrium gas (constant composition) described by a $\rho(t_0)$. Writing down the equation of motion, we can check that there exists a hierarchy of time scales $t_1 < t_2 < t_3 ...$ on which different sets of variables relax, reach their equilibrium values, and no longer participate in the dynamical description. For instance, there exists a scale $t_C$, roughly of the order of the duration of a collision, such that for $t \gg t_C$ the binary correlations $g_2$ take an equilibrium value and the corresponding integral in the exact equation of motion vanishes (making its computation unnecessary). There exists a scale $t_R \gg t_C$, roughly of the order of the relaxation time, such that for $t \gg t_R$ the deviations of $\rho$ from its equilibrium value vanish and the corresponding relaxation kernel in the exact equation of motion vanishes (making its computation unnecessary). This hierarchy of contractions explains why an equilibrium gas can be described by a highly contracted description: e.g. using only pressure and temperature at equilibrium. The existence of contracted descriptions is not related to ignorance but to the dynamical survival of the more 'robust' modes of evolution on the characteristic time scales.
In fact, the old ignorance interpretation of statistical mechanics does not explain why we can use $p$ and $T$, and ignore the rest of the variables, for an equilibrium gas at equilibrium, but cannot use the same contracted description for the same gas in a turbulent regime. Evidently this has nothing to do with ignorance and/or the ability to measure.
For instance, we can measure the fields $p(x,t)$, $n(x,t)$, and $T(x,t)$ in both equilibrium and nonequilibrium regimes. However, those fields are completely redundant for an equilibrium gas, whereas the same fields describe the gas in not-too-far-from-equilibrium regimes, and fail miserably in far-from-equilibrium turbulent regimes. The modern interpretation explains why. In turbulent regimes the fast variables have not yet relaxed and you need a broader set of variables to describe the system (precisely, extended thermodynamics adds an extended set of variables to the above fields for describing far-from-equilibrium regimes). In linear nonequilibrium regimes the fast variables have relaxed and can be ignored, providing a contracted description where you only need $p(x,t)$, $n(x,t)$, and $T(x,t)$ to describe the system. Finally, at equilibrium, all the variables have relaxed and $p(x,t)$, $n(x,t)$, and $T(x,t)$ contract again to $p$ and $T$.
Regarding your (ii): Alice could use $p(x,t)$, $n(x,t)$, and $T(x,t)$ whereas Bob uses only $p$ and $T$ for a gas at equilibrium, and both would agree on the description of the same system (except Alice would describe the system redundantly, with lots of non-useful information).
P.S.: I would add that the old ignorance interpretation generates many problems and paradoxes in both quantum and classical contexts.
von Neumann carefully distinguished between the intrinsic, and essentially new, quantum probabilities which inhere even to a pure state, and the ignorance-based probabilities which were in analogy with those in Classical Stat Mech, which are added on top of the QM probabilities, to produce the density matrix. He was also aware, and commented on it, that what I have carelessly called «added on top» was also something new to Quantum Stat Mech, it was not exactly like the way ignorance probabilities were constructed in Classical Stat Mech, and this point is all that Landau is getting at. – joseph f. johnson Feb 12 '13 at 17:34
Landau never expresses himself clearly on foundational matters, the way von Neumann always did. Yet Landau was incomparably the greater physicist, I would in fact deny that von Neumann was a physicist at all, not even a mathematical physicist. He was superb at foundational considerations in maths, and logic, but had no physical intuition at all. You are also wrong about Dirac: Dirac had not the slightest intention to make the density matrix a description of the quantum state of a system. It was, for him, a description of the mixed quantum state in analogy to classical mixed state. – joseph f. johnson Feb 12 '13 at 17:38
@josephf.johnson Von Neumann introduced the statistical interpretation for mixed states in 1927. This should not be confused with the statistical interpretation of the wavefunction (pure states) due to Born (1925). The quantum mechanical interpretation of the density matrix was first introduced by Landau in 1927 and later (1930, 1931) by Dirac, who also discussed the statistical interpretation and even normalized each $\rho$ in a different way. – juanrga Feb 15 '13 at 20:06
@josephf.johnson No. It is not required to introduce "ignorance-based probabilities [...] to produce the density matrix." There is no ignorance for pure density matrices $\rho^2 =1$, nor is there ignorance for a mixture $\rho^2 \neq 1$ in the Landau/Dirac approach. The ignorance interpretation is exclusive to von Neumann's statistical approach. The quantum mechanical interpretation is the basis for the Brussels School approach to LPSs and foundational issues of QM. – juanrga Feb 15 '13 at 20:13
See and my comment on it. Landau says «The averaging by means of the statistical matrix ... has a twofold nature. It comprises both the averaging due to the probabilistic nature of the quantum description (even when as complete as possible) and the statistical averaging necessitated by the incompleteness of our information concerning the object considered...It must be borne in mind, however, that these constituents cannot be separated...» I.e., Landau explicitly says the use of the density matrix is due to the incompleteness of our knowledge. – joseph f. johnson Feb 16 '13 at 2:25
ec4f78454882ec9c |
I am an electronics and communication engineer, specializing in signal processing. I have some familiarity with the mathematics underlying communication systems and signal processing, and I want to use this knowledge to study and understand Quantum Mechanics from the perspective of an engineer. I am not interested in reading about the historical development of QM, and I am also not interested in the particle formalism. I know things started from wave-particle duality, but my current interest is not to study QM from that angle. What I am interested in is a treatment of QM that starts from very abstract notions such as 'what is an observable? (without referring to any particular physical system)' and 'what is meant by incompatible observables?', and then goes on to what a state vector is and its mathematical properties. I am comfortable dealing with the mathematics and abstract notions, but I somehow do not like the notions of a particle, velocity, momentum and such physical things, as they directly contradict my intuition, which is based on classical mechanics (basic stuff, not the mathematical treatment involving phase space, as I am not very familiar with it).
I would appreciate suggestions on the advantages and pitfalls of venturing into such a thing. I would also appreciate good reference books or textbooks that give such a treatment of QM without assuming any previous knowledge of QM.
Why not Sakurai? – user7757 Feb 3 '13 at 12:08
Possible duplicate: – Qmechanic May 23 '13 at 19:51
7 Answers
Try "Mathematical Foundations of Quantum Mechanics" by George Mackey. It is about 130 pages with a chapter on classical mechanics. The author is a very well know mathematician, and I think the books is what you are looking for. Also on higher level is the book "Quantum mechanics for mathematicians" by Leon Takhtadzhian. The book by Folland on Quantum Field Theory has a chapter on Quantum Mechanics, which can be read independently of the rest of the book.
Edit: Since this came up on the first page, I'll add one more. F.Strocchi "An Introduction to the Mathematical Structures of Quantum Mechanics. A Short Course for Mathematicians."
An unconventional approach would be to study quantum computation or quantum information theory first.
What is 'unusual' about quantum mechanics is the mathematical underpinnings, which is essentially a generalization of probability theory. (I have heard more than one colleague say that quantum mechanics is simply physics which involves 'non-commutative probability', i.e. in testing whether some collection of events are realized for some sample space, there is a pertinent sense of the order in which one tests those events.) To the extent that this is true, it is not important to be learning the actual physics alongside that mathematical underpinning, so long as you can learn about something evolving e.g. under the Schrödinger equation or collapsing under measurement.
Studying quantum information evolving under a computational process is one way you could achieve that. Because the narrative of the field is less about the crisis in physics in the 20s–40s, and more about physicists and computer scientists struggling to find a common language, the development is clearer and there is a better record of justifying the elements of the formalism from a fundamental standpoint. By studying quantum information and/or quantum computation, you will be able to decouple the learning of the underpinnings from the learning of the physics, and thereby get to the heart of any conceptual troubles you may have; and it will give you a tidier sandbox in which to play with ideas.
To this end, I recommend "Nielsen & Chuang", which is the standard introductory text of the field. It's suitable as an introduction both for those coming from a computer science background, and from a quantum physics background; so apart from learning some of the formalism, you can get some exposure to some of the physics as well. There are other texts which I have not read, though; and about a bazillion pages of lecture notes floating around on the web.
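(A concrete toy illustration, not taken from any particular textbook, of the 'non-commutative probability' point made above: in Python/NumPy one can check that the probability of passing two projective tests depends on the order in which they are performed.)

    import numpy as np

    # Projectors onto the +1 eigenstates of the Pauli Z and Pauli X observables
    P_z = np.array([[1.0, 0.0], [0.0, 0.0]])        # |0><0|
    P_x = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # |+><+|

    psi = np.array([0.0, 1.0])                      # start in the state |1>

    # Probability of passing the Z-test and then the X-test, and vice versa
    p_z_then_x = np.linalg.norm(P_x @ P_z @ psi) ** 2
    p_x_then_z = np.linalg.norm(P_z @ P_x @ psi) ** 2

    print(p_z_then_x, p_x_then_z)   # 0.0 vs 0.25: the order of the tests matters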
I strongly advise you Quantum Theory: Concepts and Methods by Asher Peres.
I think this book answers the questions you're asking, such as 'what is an observable?'
That's quite an unusual request, i.e. to start with an abstract formulation of quantum mechanics, especially for someone in a profession so closely connected with the "real world". However, to answer your question, I think what you're looking for is an axiomatic approach to quantum mechanics. Such treatments keep the physical examples to a minimum and skip to the mathematics straight away.
You could start with this reference (just for a chatty treatment of what the postulates look like!), and maybe search for quantum mechanics textbooks with "axiomatic" in the title.
For many people, the fact that the quantum mechanical predictions of physical quantities are sometimes counterintuitive is precisely what gives the subject its appeal.
Hope this helps.
Edit: It appears not so easy to find "Axiomatic Quantum Mechanics" in textbook titles! However, Google returns a few articles featuring those words.
Of course there is also Von Neumann's "Mathematical Foundations of Quantum Mechanics".
You might find articles by Leon Cohen of interest. He has considered the relationship between classical and quantum theory from a signal processing perspective since the 1960s. For example, PROCEEDINGS OF THE IEEE, VOL. 77, NO. 7, JULY 1989, "Time-Frequency Distributions - A Review". This concentrates on the relationship between the Wigner function in quantum theory and various concepts in signal processing.
This might not answer your question so much as point to something that you might find more broadly helpful because of its signal processing provenance. The mathematics of Hilbert spaces only enters into a small fraction of the signal processing literature, but the vast majority of it could be put into such mathematical terms (signal processing is, after all, preoccupied with Fourier and other integral transforms).
I always recommend Tony Sudbery's Quantum Mechanics and the Particles of Nature; don't be put off by the bad word in the title: he is fairly axiomatic and has both the abstract part and the concrete part. I recommend it more highly than either Mackey, already cited, or Varadarajan, both of which are idiosyncratic. Prof. Sudbery is an expert in Quantum Information Theory but does not take a biased or idiosyncratic approach in his text.
Here is a little book by a physicist trained as an engineer in fluid dynamics applied to aircraft:
"Foundations of Quantum Physics" by Toyoki Koga (1912-2010), Wood and Jones, Pasadena, CA, 1980.
This book has forewords by Henry Margenau and Karl Popper.
Another book by him is "Inquiries into Foundations of Quantum Physics", 1983.
1fc2b54732cf3da5 |
Could someone experienced in the field tell me what minimal math knowledge one must obtain in order to grasp an introductory Quantum Mechanics book or course?
I do have math knowledge but I must say, currently, kind of a poor one. I did a basic introductory course in Calculus, Linear algebra and Probability Theory. Perhaps you could suggest some books I have to go through before I can start with QM?
It's easier to learn something if you have a need for it, so you might use your interest in QM to inspire yourself to learn the math. – Mike Dunlavey Dec 15 '11 at 1:39
Related Math.SE question: math.stackexchange.com/q/758502/11127 – Qmechanic Apr 20 at 7:16
There are many different mathematical levels at which one can learn quantum mechanics. You can learn quantum mechanics with nothing more than junior high school algebra; you just won't be learning it at the same level of mathematical depth and sophistication. – Ben Crowell Sep 24 at 23:11
4 Answers
It depends on the book you've chosen to read. But usually some basics in Calculus, Linear Algebra, Differential Equations and Probability Theory are enough. For example, if you start with Griffiths' Introduction to Quantum Mechanics, the author kindly provides you with a review of Linear Algebra in the Appendix as well as some basic tips on probability theory at the beginning of the first chapter. In order to solve the Schrödinger equation (which is a (partial) differential equation) you, of course, need to know the basics of differential equations. Also, some special functions (like Legendre polynomials, Spherical Harmonics, etc.) will pop up in due course. But, again, in an introductory book such as Griffiths', these things are explained in detail, so there should be no problem for you if you're a careful reader. This book is one of the best to start with.
+1 for the book recommendation. This was the one I was taught with and it provided an excellent starting point. – qubyte Dec 15 '11 at 16:56
You don't need any probability: the probability used in QM is so basic that you pick it up just from common sense.
You need linear algebra, but sometimes it is reviewed in the book itself or an appendix.
QM seems to use functional analysis, i.e., infinite dimensional linear algebra, but the truth is that you will do just fine if you understand the basic finite dimensional linear algebra in the usual linear algebra course and then pretend it is all true for Hilbert Spaces, too.
It would be nice if you had taken a course in ODEs, but the truth is, most ODE courses these days don't cover the only topic you need in QM, which is the Frobenius theory for equations with a regular singular point, so most QM teachers re-do the special case of that theory needed for the hydrogen atom anyway, sadly but wisely assuming that their students never learned it. An ordinary Calculus II course covers ODE basics like separation of variables and stuff. Review it.
I suggest using Dirac's book on QM! It uses very little maths, and a lot of physical insight. The earlier edition of David Park's book is more standard and easy enough, and can be understood with one linear algebra course and Calc I, Calc II, and Calc III.
Dirac's book is readable with no prior knowledge, +1, and it is still the best, but it has no path integral, and the treatment of the Dirac equation (ironically) is too old-fashioned. I would recommend learning matrix mechanics, which is reviewed quickly on Wikipedia. The prerequisite is Fourier transforms. Sakurai and Gottfried are good, as is Mandelstam/Yourgrau for path integrals. – Ron Maimon Dec 6 '11 at 22:37
There is a story about Dirac. When it was proved that parity was violated, someone asked him what he thought about that. He replied "I never said anything about it in my book." The things you mention that are left out of his book are things it is a good idea to omit. Path integrals are ballyhooed but are just a math trick and give no physical insight, in fact, they are misleading. Same for matrix mechanics. Those are precisely why I still recommend Dirac for beginners... I would not even be surprised if his treatment of QED in the second edition proved more durable than Feynman's..... – joseph f. johnson Dec 7 '11 at 0:38
Matrix mechanics is good because it gives you intuition for matrix elements, for example, you immediately understand that an operator with constant frequency is a raising/lowering operator. You also understand the semiclassical interpretation of off-diagonal matrix elements, they are just stunted Fourier transforms of classical motions. You also understand why the dipole matrix element gives the transition rate without quantizing the photon field, just semiclassically. These are all important intuitions, which have been lost because Schrodinger beat Heisenberg in mass appeal. – Ron Maimon Dec 7 '11 at 5:20
Your comment about path integrals is silly. The path integral gives a unification of Heisenberg and Schrodinger in one formalism, which is automatically relativistic. It gives analytic continuation to imaginary time, which gives results like CPT, relativistic regulators, stochastic renormalization, second order transitions, Faddeev-Popov ghosts, supersymmetry, and thousands of other things that would be practically impossible without it. The particle-path path integral is the source of the S-matrix formulation and string theory, of unitarity methods, and everything modern. – Ron Maimon Dec 7 '11 at 5:29
@RonMaimon I have had to teach stochastic processes and integrals to normal, untalented folks. IMHO, stochastic processes count as probability theory, one of the trickiest parts, and path integrals are no help for beginners here either. It is still better for the beginning student to not take a course in probability and let what they learn about the physics of QM be their introduction to stochastic processes... I mean, besides what they already learned about stochastic processes from playing Snakes and Ladders. This is part of my theme: learn the physics first, and mathematical tricks later – joseph f. johnson Dec 15 '11 at 17:52
There is a nice book with an extremely long title: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. It does the basics pretty well. Griffiths would be the next logical step. After that there is Shankar.
Try Schaum's Outlines: Quantum Mechanics, ISBN 0-07-054018-7. You'll see the math there, but you'll need to do the deep background studies on all the math from Chapter 2.
cd6cbf3a99b223e7 |
Surviving and Thriving in the AP* Chemistry Curriculum, Part 2
Adrian Dingle
Chemistry Teacher and Author
Click here to read "Surviving and Thriving in the AP* Chemistry Curriculum, Part 1."
Math is not my strong point. Frankly, that puts me in good company with a number of AP Chemistry students. For those kids, a lack of math acumen can undermine what could otherwise be some very good progress in chemistry. What can we do to prevent that shortcoming from damaging the AP scores of those students?
When I was an undergraduate I took a course entitled Math for Chemists. It was for those of us who were strictly "chemists" but in need of some targeted pointers that might help us to overcome some of the mathematical challenges associated with physical chemistry. Math for Chemists helped me to navigate some scary moments amidst wave functions, the Schrödinger equation, nasty derivatives, and Hamiltonian operators. Ever since that experience, I've always felt that some similar, stealthy tips, albeit at a much lower level, have the potential to be really quite useful for the AP Chemistry student who, like me, will perhaps never learn to love mathematics.
Don't be alarmed, I'm not about to enter into a discussion about the mathematical niceties of quantum mechanics or integrals here, but what I will do is offer my "Top 10" math pointers that even I can give to students if they are struggling with a particular quantitative aspect. The Top 10 is not "teaching math" by any stretch of the imagination, nor will it necessarily lead to any kind of enhanced mathematical understanding, but it does represent a list of mental shortcuts that just might unlock some points for a few students on the AP Chemistry Exam.
Nothing here is groundbreaking, nor is it anything you haven't read in the appendix of a chemistry textbook, but a gentle reminder never hurts; and you may be surprised how far reinforcing a few of these simple relationships can go toward saving chemistry points on the AP exam. Beyond that, and in terms of multiple-choice questions, some of these tips remain invaluable since the students are bereft of a calculator, and estimation and mental arithmetic remain crucial skills on that part of the exam. What's shown below is a list for students—I know that you know this stuff. Some of it falls under the heading of general math tips, and some is more specifically related to AP Chemistry, but all should prove useful in the areas suggested, and perhaps beyond.
Top 10 math pointers
1. Logs. What's a log? It's a button on the calculator, a "function" if you will. No more, no less, it converts 1 number to a different number.
–log(1 × 10⁻⁴) = 4, –log(1 × 10⁻³) = 3, etc.
This means that the –log of a number, such as 5 × 10⁻⁴, that's somewhere between those 2 values (bigger than the first but smaller than the second), is between 3 and 4. (Acids, bases, and buffers)
"p" simply means "–log." (Acids, bases, and buffers)
The log of a number less than 1 is negative, and that of a number greater than 1 is positive. (Yes, I know Nernst has gone, but this could still be useful in a Henderson-Hasselbalch calculation.) (Acids, bases, and buffers)
2. Add exponents when multiplying, and subtract when dividing. (Equilibrium)
3. Reversing a chemical equation at equilibrium creates a new K value that's the reciprocal of the original. (Equilibrium)
4. Multiplying the stoichiometric coefficients of a chemical equation at equilibrium creates a new K value that's the original raised to the power of the multiplier. (Equilibrium)
5. Kw = [H⁺][OH⁻] = 1 × 10⁻¹⁴ is the "same" equation as pKw = pH + pOH = 14 because of #1 and #2 above and the fact that when taking logs of things multiplied together, they become summed. The same is true of the relationship between the acid dissociation constant Ka and the Henderson-Hasselbalch equation. (Acids, bases, and buffers)
6. Units matter! You had better realize that 4.95 × 10⁻⁷ m = 495 nm, etc. For example, a wavelength may have been calculated in m, but the multiple-choice answers reported in nm (and vice versa). This is a simple but important thing to remember. (Atomic structure/electrons and of course all quantitative aspects of the course)
7. Units matter! Delta H values are usually recorded in units of kJ mol⁻¹ and delta S values are usually presented in units of J mol⁻¹ K⁻¹. This matters when using ΔG = ΔH − TΔS. Make sure that you have converted one of the values to the units of the other before calculating ΔG. (Entropy, enthalpy and Gibbs free energy, and of course all quantitative aspects of the course)
8. Units matter! They can help you to determine which value of R to use (there are 3 on the Equations and Constants sheet) in different situations. Use 8.314 J mol⁻¹ K⁻¹ when dealing with "energy situations," and 0.08206 L atm mol⁻¹ K⁻¹ or 62.36 L torr mol⁻¹ K⁻¹ (depending on the units of pressure) when gases are involved. (Thermochemistry, electrochemistry, gases, and of course all quantitative aspects of the course)
9. Dimensional analysis can help a great deal when keeping track of units. For example, in #8 above, when using P V = n R T to calculate the temperature of a certain number of moles of an ideal gas, with a pressure given in atm and a volume given in L, the only R that makes sense in terms of units is 0.08206 L atm mol⁻¹ K⁻¹. Why? Well, because in terms of units the following is true: T = PV/(nR) carries units of (atm · L) / (mol · L atm mol⁻¹ K⁻¹) = K, so the atm, L, and mol all cancel and only K survives.
10. Estimation remains an important skill. On the multiple-choice section of the exam you do not have access to a calculator but can still be asked questions that involve calculations. For example, using estimation to realize that (99.8)(1.01) / [(0.08206)(350)] is approximately the same as (100)/(30), which in turn is equal to a number between 3 and 4, can help you to choose from a list of potential answers without having to do any calculations. That's a lot easier than calculating the actual value of (99.8)(1.01) / [(0.08206)(350)] in your head. (A short worked sketch of a few of these pointers follows this list.)
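(A short illustrative sketch, referenced in #10 above, of how a few of these pointers play out numerically, written in Python; the ΔH, ΔS, and temperature values are made-up illustrations, not data from any exam.)

    import math

    # Pointer 1: -log bracketing for pH
    print(-math.log10(5e-4))             # ~3.30, i.e. between 3 and 4 as argued above

    # Pointer 7: convert delta S from J/(mol K) to kJ/(mol K) before combining with delta H
    dH = -92.0     # kJ/mol (illustrative value)
    dS = -199.0    # J/(mol K) (illustrative value)
    T = 298.0      # K
    dG = dH - T * (dS / 1000.0)
    print(dG)                            # ~ -32.7 kJ/mol

    # Pointer 10: estimation vs. the exact value of (99.8)(1.01) / [(0.08206)(350)]
    exact = (99.8 * 1.01) / (0.08206 * 350)
    estimate = 100.0 / 30.0
    print(exact, estimate)               # ~3.51 vs ~3.33, close enough to pick an answer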
Of course, I could go on about significant figures and rounding and checking answers, too, but then I'd be getting a long way away from AP Chemistry.
Math still matters
Some may say that, because AP Chemistry has arguably moved away from the more quantitative aspects of the past, these tips are perhaps less important than they once were; I see it differently. Firstly, not all of the math has gone away. Logs, exponents, and the ability to estimate are still very relevant. Secondly, by continuing to use some of the old, quantitative relationships that have actually been removed from the Equations and Constants sheet, one can actually aid the understanding of concepts that have shifted entirely to a qualitative treatment.
Two such examples are root mean square speed and Graham's law of effusion and diffusion. These equations are no longer given with the exam and this means that a quantitative treatment of them is not something that we should expect to see in future exam questions, but qualitative aspects of them are definitely still in play. What does that mean for these 2 examples? Well, not much more than knowing that with a greater molar mass, urms decreases; with increasing temperature, urms increases; and that heavier particles tend to effuse and diffuse more slowly than lighter ones.
The argument that lab work helps to illustrate theory extends and evolves into the idea that math work can help to illustrate theory. It's all well and good saying that heavier particles move more slowly, but (even with what some might criticize as being no more than mindless plugging and chugging) we can aid that understanding and cement it with some calculations. Plug the molar masses of 2 different gases into either of those mathematical formulas, pick up a calculator, and you'll see that the resultant numbers bear out what the theory says; a rough sketch of this is shown below. For that reason alone I'll continue to use some of the deprecated quantitative equations in my AP classes.
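(A minimal illustrative sketch in Python of the kind of plug-and-chug described above, using the standard formulas u_rms = sqrt(3RT/M) and Graham's law rate1/rate2 = sqrt(M2/M1); the two gases and the temperature are just example choices.)

    import math

    R = 8.314     # J/(mol K)
    T = 298.0     # K

    def u_rms(molar_mass_g_per_mol, temperature=T):
        """Root mean square speed in m/s; the molar mass is converted to kg/mol."""
        M = molar_mass_g_per_mol / 1000.0
        return math.sqrt(3 * R * temperature / M)

    print(u_rms(2.02))    # H2: ~1918 m/s
    print(u_rms(32.00))   # O2: ~482 m/s, i.e. the heavier gas moves more slowly

    # Graham's law: effusion rates scale as the inverse square root of molar mass
    print(math.sqrt(32.00 / 2.02))   # ~3.98, so H2 effuses about 4 times faster than O2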
642bed2cfb733f56 | Saturday, March 25, 2017
An isolated standard model contradicts nothing we know
Today, the Moriond 2017 particle physics conference ends. Especially the CMS has presented the newest results – analyses of some 35 inverse femtobarns of the data collected at the two protons' total energy of \(13\TeV\).
Almost a decade ago, I made an asymmetric bet against Adam Falkowski, a particle phenomenologist now in Paris. He claimed that supersymmetry wouldn't be found before a deadline and I claimed it could be. If it were found, I would have won $10,000. If it weren't found, I would pay $100. So it was a 100-to-1 bet, basically implying a consensus probability of an early enough supersymmetry discovery of 1%. I accepted the bet because my subjective probability of a SUSY discovery was much higher than 1% and I still think it was reasonable – and an analogous assumption is still reasonable for the next collider.
The deadline was defined a bit arbitrarily – but it was "after the results of at least 30/fb of the data at design energy are collected". The design energy was \(14\TeV\) and \(8\TeV\) is clearly lower – the collisions at this lower energy may produce SUSY particles about 10 times less frequently than those at \(14\TeV\) – but \(14\TeV\) is close enough to \(13\TeV\) so it's obvious that those 35/fb at \(13\TeV\) that we have are basically equivalent to 30/fb at \(14\TeV\). So right now it's the ideal balanced moment that almost exactly agrees with the conditions of our bet, I think, and because supersymmetry hasn't been discovered yet, I should pay $100 to Adam.
As I have already mentioned, this lost bet is a technicality for me and doesn't change my belief that supersymmetry somewhere in Nature, beneath the Planck scale, is very likely and SUSY around the corner is always a possibility. I am sure that many of you agree that the opposite result would be way more interesting – from the financial viewpoint, from the viewpoint of our TRF community, and because of the excitement it would create among physicists.
Friday, March 24, 2017
Learning about the laws of physics isn't a "yes we can" pissing contest
After Sabine Hossenfelder wrote her critique of "the world is a simulation" paradigm, I was a bit jealous about one apparent phenomenon: that her readers seemed to agree with her. Well, it didn't last long. After Scott Aaronson vented his absolutely stupid ideas about the same problem, many of his computer-science-worshiping but otherwise uneducated readers were apparently redirected to Hossenfelder's blog and started to give her a hard time.
The most obnoxious troll that repeatedly posted at Backreaction is nicknamed _Shorty, a man from British Columbia who loves his air gun, guitar, and video games. For some reason, this self-evidently mediocre know-nothing thinks that it's very important for the world to hear what he thinks about the character of the physical law. It wouldn't be too hard to predict what an interaction between a physicist, even one such as Hossenfelder, and a stupid yet aggressive man who is "into the computer games" is going to look like.
Thursday, March 23, 2017
What mathematical thinking looks like and why schools should teach it
Go to the Character of the Mathematical Thought list...
A week ago, Doug K. sent me an essay
Why We Should Reduce Skills Teaching in the Math Class
by Dr Keith Devlin, a British American set theorist and mathematics teacher.
Like many postmodern promoters of feel-good education, Devlin argues that we should reduce the teaching of all hard mathematics at school. After all, almost no one actually needs mathematics in his life so it's fine. This change will reduce the math anxieties and math phobia in the society, make the world a better place, and so on. At the same time, most people will understand what is mathematics, how and where it is used, they will have a positive attitude to it, and they will be ready to learn it as soon as they need some because math phobia won't be deterring them.
Please, give me a break.
Wednesday, March 22, 2017
Aaronson's delusions about the universe as a simulation
Four days ago, I praised Sabine Hossenfelder's remarks about the hypothesis that our Universe is a simulation. It's rather clear that complexity theorist Scott Aaronson disagrees on some fundamental issues, as he wrote in his
Your yearly dose of is-the-universe-a-simulation,
and Aaronson is just completely wrong about all these points. Some of these two folks' views were mentioned at Gizmodo. Aaronson summarized the core of his opinion as follows:
In short: blame it for being unfalsifiable rather than for being falsified!
He claims that it's not a problem to reconcile the universe-as-a-computer with the Lorentz invariance, too. On the other hand, Hossenfelder (like your humble correspondent) emphasizes that all the predictions similar to "certain computer-like glitches, such as the failure of accuracy or continuity and deja vu cats" seem to be falsified. So at some imperfect but high confidence level, the "simulation hypothesis" has been ruled out. Aaronson doesn't like it and he's wrong.
Tuesday, March 21, 2017
Antiviruses: when the cure is worse than the disease
In the morning, my antivirus software suddenly told me that my main defragmenter is a virus.
Just to be specific: I have used the German AVIRA software (web) with the red umbrella icon for over 15 years. It's probably not the most patriotic thing to do because Czechia has turned into an antivirus superpower largely thanks to Avast which recently devoured its competitor AVG (for $1.3 bn) and the company's headquarters stayed in Prague. Avast actually has more employees than Avira etc. Avast was founded as a communist-era co-op in 1988, AVIRA is two years older. Almost all people on the Avast board are non-Czech today, however.
I think that AVIRA does a good job and I've seen some reports that it's among the antiviruses that don't slow down the PC too much.
The other part of the story is that I believe that fragmentation of files slows down a PC and I run defragmentation periodically. I've tried many but Auslogics Disk Defrag Free seems like the best choice on the market – it's much faster than most others and it visualizes things appropriately and gives you all the information about the fragmented files, the number of fragments, and other things.
Monday, March 20, 2017
Germans should be ashamed of their candidate Martin Schulz
Off-topic: I know that many ex-fans have already grown tired of The Big Bang Theory but I haven't and for folks like me, CBS has approved the 11th and 12th seasons of TBBT. Via syndication, the show has earned over $1 billion for Warner, I haven't been sent a penny (let alone Penny) yet.
In the recent decade, the German political elite has drifted towards the arrogant, politically correct far-left corner. Recall that Angela Merkel's predecessor was the social democrat Gerhard Schröder.
This 2002 parody of a famous Spanish ketchup pop song, "The Tax Song", still showed the innocent politics that the West had known for decades. Schröder was a social democrat and it was therefore sensible to assume that he wanted too high taxes and too many taxes (I can't even tell you with any certainty whether high taxes were characteristic of his tenure), and that he was making fun of the citizens who probably don't like to pay this much. The only other theme of the song I can identify is the accusation that Schröder had to color his hair, otherwise it couldn't have looked so youthful.
Although Merkel's CDU should be more conservative than Schröder's SPD, I find it obvious that Merkel is more left-wing than Schröder was. He was really a guy with some common sense who was immune towards most of the insanities – and he's still resistant towards e.g. the postmodern Russophobia that is largely driven by Vladimir Putin's being too conservative for the self-anointed progressive ideologues who have multiplied like locusts in the West.
Sunday, March 19, 2017
Do you really think the Moon is a planet, Kirby?
The site informs us about lots of legitimate news but sometimes it loves to spread hype about some absolute nonsense. When it switches to the nonsense mode, it usually promotes the craziest articles to the "featured" category. On Friday, they posted a crazy article about a topic that everyone should be able to understand,
Scientists make the case to restore Pluto's planet status
Pluto is a hero of the title but this very fact is ludicrous. Some people feel sad about the downgraded status of a piece of rock they have never seen with their eyes. But there's something else that the title doesn't convey: The people who want to redefine a "planet" again intend to make sure that there are over 100 planets in the Solar System so that the list would include the Earth's Moon – where some TRF readers have been – among many others.
Two Plutos, taken from the article about a Daesh astronomer who wants to rename Pluto to the Moon of Mohammed LOL. See also ISIS plans to carry attacks on Pluto.
The main proponent of the new definition is Mr Kirby Runyon (and "Mr" should be understood in the same way as when Dr Gablehauser talks to Mr Howard Wolowitz), a graduate student at Johns Hopkins, a Christian, and an owner of a cat. Quite some credentials.
Saturday, March 18, 2017
Hossenfelder sensibly critical of our "simulated" world
Sabine Hossenfelder writes a lot of wrong texts, especially about issues that depend on some nontrivial calculation. But she is often reasonable when she discusses certain conceptual issues, including the general properties of quantum mechanics (and the absence of non-local influences in QFT etc.).
The latest example of the penetrating texts is
No, we probably don’t live in a computer simulation
I've discussed the proposals that "our world has been programmed by our overlord, Ms Simulator" in 2011, 2013, 2016, aside from other moments.
But let's look primarily at the comments by Hossenfelder and her readers – who surprisingly seem to agree.
Particles' wave functions always spread superluminally
It's been almost a week since we discussed Jacques Distler's confusion about some basics of quantum field theory. He posts several blog posts a year, a quantum field theory course is probably the only one he teaches, and he was "driven up the wall" by a point that almost every good introductory textbook makes at the very beginning. I expected that within a day or two, he would post a detailed text with the derivations saying "Oops, I've been silly [for 50 years]".
It just didn't happen. He still insists that the one-particle truncation of a quantum field theory is perfectly consistent and causal. In particular, he repeated many times in his blog post (search for the word "superluminal") that the relativistically modified Schrödinger's equation for one particle (with a square root) guarantees that the wave packets never spread faster than the speed of light. Oops, it's just too bad.
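(For reference, a detail not spelled out in the excerpt above: the "square root" equation presumably meant here is the free relativistic one-particle equation \(i\hbar\,\partial_t \psi = \sqrt{m^2c^4 - \hbar^2 c^2 \nabla^2}\,\psi\), i.e. the dispersion \(E(p)=\sqrt{m^2c^4+c^2p^2}\); it is a standard result that an initially localized packet evolved with this Hamiltonian develops tails outside the light cone, which is the point of the post's title.)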
Friday, March 17, 2017
Budget 2018: America will eliminate funding for climate hysteria
For more than a decade, I've been urging the responsible people to stop their support and especially government funding for the climate hysteria, a political movement that pretends to be all about science even though it brutally violates even the basic principles of the scientific method and threatens the integrity of the institutionalized science, prosperity of whole countries, and the freedom of their citizens.
There have been partial victories that have made us smile at one moment or another. But up to 2007, it seemed clear that the movement was growing and after 2007-2009, whatever the exact date of the Peak Climate Alarm was, it still seemed extremely likely that the climate alarmists were here to stay and consolidate their influence – much like we thought that communists were here to stay in Czechoslovakia in the late 1980s.
Well, the victory of Donald Trump was the first event that seems to change the big picture and reverse the trends in major ways – the first sign that the climate hysteria could be unsustainable, after all, much like Nazism, eugenics, communism, and other fads currently residing at the dumping ground of history. We didn't know whether Ivanka Trump and Rex Tillerson would "allow" the U.S. president to do something that has been a not so negligible part of the campaign. But things look better again.
Thursday, March 16, 2017
Why research at Czech institutions sucks
Yesterday, a Czech expert in spintronics and nanoelectronics Mr Tomáš Jungwirth has provoked some naive Czech patriots who think that their homeland is very good in things like science:
Researcher: Czech science is average, wins few ERC grants (Prague Monitor, widely discussed in Czech press)
Jungwirth is a member of the European Research Council. Well, I think that I was still a high school student when I pretty much decided that the Czech contributions to science in general and physics in particular are pretty much negligible. In fact, before I came to college, I was already worried whether there would be someone in our homeland who could teach me/us the things needed for cutting-edge physics etc.
Just to be sure, the Czech education bringing you up to the early 1970s or so is very good, I still think. But at the research level, the numbers speak clearly:
Researchers from other EU countries submit two or three times more applications for ERC grants than those from the Czech Republic, Jungwirth said. Moreover, 12 percent of the grant applications are successful on average, while Czech projects succeed only in 5 percent of cases. Czech projects have won 25 ERC grants worth 41 million euros since 2007, while Austrian and Hungarian projects have won 189 and 54 grants, respectively.
Austria and Hungary – totally comparable countries – have won 7.6 and 2.2 times more grants than Czechia, respectively. The deviation of these numbers from 1 obviously cannot be considered noise and – despite the EU's numerous fundamental shortcomings – I don't think that it's an effort of the evil EU organs to hurt Czechia, either.
LHCb discovers five \(css\) bound states at once
The LHCb detector is way smaller and cheaper than its fat ATLAS and CMS siblings. But it doesn't mean that it can't discover cool things – and many things. The letter \(b\) refers to the bottom quark. It's often said that the bottom quark is the best path towards the research of CP-violation and similar things.
But for some reasons, the LHCb managed to discover five new particles without any bottom quark – at once:
The collaboration proudly tweeted about the new discovery and linked to their new paper,
Observation of five new narrow \(\Omega^0_c\) states decaying to \(\Xi^+_c K^-\)
You may count the new peaks on the graph above. If you haven't forgotten some rather rudimentary number theory, you know that the counting goes as follows: One, two, three, four, five. TRF contains new stuff to learn for everybody, including those who would consider any mathematics exam unconstitutional and inhuman. ;-)
Wednesday, March 15, 2017
This Nye's monologue is no "big think"
BigThink.com was founded in 2007 and Larry Summers and Peter Thiel were among the initial financial and intellectual investors in the project. I am confident that it used to interview many exceptionally intelligent people and they were talking about nontrivial topics and arguments. Five years ago, I mentioned an interview with Lisa Randall about string theory.
If you look at the recent videos at the BigThink YouTube channel, they look like rather lame pop scientific and pseudoscientific topics that you find everywhere on the Internet. You don't need a pedigree of famous founders for such a website.
The 4-minute monologue by Bill Nye is a great example of the intellectual deterioration of BigThink.com in recent years. The diatribe seems to be a response to a Fox News exchange between Tucker Carlson and Bill Nye. Recall that Carlson mainly wanted Nye to say to what extent humans have driven climate change. Nye wasn't capable of saying a damn thing that was relevant in that 9-minute-long Fox News interview. He had weeks to "think big" about these matters and now, when he added a 4-minute monologue, he still failed to say anything that would be relevant or at least intelligent.
Tuesday, March 14, 2017
Czechs vow defiance after irrational EU gun ban
The country in the heart of Europe is terrified by the counterproductive, treacherous approach of the EU apparatchiks to gun laws
Czech political parties experienced a somewhat rare wave of unity today which was unfortunately not shared by most of the European Union. The European Parliament voted 491 yes, 197 no, 28 abstentions to ban the sales of new semi-automatic guns.
The largest community in my homeland that is affected are the owners of Model 58. It's known by the Czechoslovak acronym Vz 58 and "vz" stands for "vzor" i.e. "template". After Kalashnikovs appeared, all socialist countries were basically forced to adopt the exact Soviet design. Czechoslovakia got an exception because if a country with this somewhat legendary arms industry were forced to accept the Soviet technology, it would be rather offensive. Vz 58 appeared as a Czechoslovak answer to the Kalashnikovs. It's a full replacement but all the parts are actually different and the Czechoslovak rifle is arguably better than the Soviet competitor.
Monday, March 13, 2017
Should mathematics exams be required at the end of high school?
In recent weeks, I was involved in various discussions about the education of mathematics in Czechia. One of the topics was the "playful" Hejný method (a long CZ thread) to teach mathematics to kids which may be fun and useful but it's simply not a legitimate replacement for mathematics as I define it.
Yesterday, someone asked me to solve one page of undergraduate problems in mathematical statistics. Compute the averages, variances and standard deviations, medians, quantiles, draw some histograms, use computer software to do a quadratic fit. And also compute the probability that you get all 4 kings out of 32 cards in a pile of 7. An hour of work. I did consider the problems nicely chosen and adequate for someone who should have background in any experimental science etc.
But they were taken from an exam (a take-home exam?) for mostly female students who want to get a bachelor degree and become nurses. That's tough because I do think that most nurses just can't do a big majority of these things. But the statistics course is mandatory and right now, unlimited nurses do need the bachelor degree. It looks like an anomaly: Ways to deal with a senior who urinated himself could be more useful for them than the calculations of the residual variance of a quadratic fit. ;-) Some lawmakers are preparing a reform that will allow nurses to work without the bachelor degree – the high school plus a year of a "higher school" will be enough. But it's not reality yet.
In the end, however, I have big sympathies for the instructor who is trying hard to convince the students to learn these things. If you asked me, I would probably agree that people with college degrees in science-related disciplines – and medicine is one of them – should be able to do most of these things, at least in principle. It's not possible for most people to know such things and again, I do agree that nurses shouldn't necessarily be "college-educated folks".
The mathematics instructor is universally hated by his students, of course. This is the level that primarily determines my emotions. I just couldn't support the students in their bitter jihad against the noble man. The fact that some soon-to-be-nurses are being pushed to learn things they don't need is one thing. But this guy was hired to teach college-level mathematical statistics and it's simply right to do it right. It's in no way insane to expect the college students majoring in a science-based discipline to know how to do these standard things after two semesters of statistics!
Sunday, March 12, 2017
Jacques Distler vs some QFT lore
Young physicists in Austin, be careful about some toxic junk in your city
Three weeks ago, in the article titled
physicist Jacques Distler of UT Austin mentioned a statement by Sasha Polyakov that he was "responsible" for quantum field theory. That comment was particularly relevant when Distler taught an undergraduate particle physics course and was frustrated by the following:
The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) make no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air).
This drives me up the fúçkïñg wall. It is precisely wrong.
There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions.
Did the following text defend the legitimacy of Distler's frustration? Well, partly... but I would pick the answer No if I had to.
A stringy interview with Petr Hořava
Giotis has pointed out that the Czech Public Radio recorded a 15-minute English-language interview with Czech string theorist Petr Hořava while he was visiting his old homeland.
I hope that this cutely simple HTML5 audio tag with the MP3 file works for everybody.
For years, Petr has been working at Berkeley. He's well-known as the co-author of the Hořava-Witten "M-theory on spaces with boundaries" that carry the \(E_8\) gauge supermultiplet, as they demonstrated.
Friday, March 10, 2017
Selection of climate model survivors isn't the scientific method
I was surprised that several TRF readers (Marthe, Abbyyorker, John Moore, and perhaps others) don't understand why the methodology keeping "ensembles of inequivalent models" that have survived some tests isn't science i.e. why Scott Adams is right in the recommendation #1 to climate fearmongers.
On Monday, Scott Adams actually dedicated a special blog post exactly to this problem. He wrote that when some media promote an old paper from the 1980s that apparently made rather accurate predictions of the climate for the following decades, it doesn't mean anything because it was one paper among many and we're not told about the number of similar models whose predictions were wrong. So everything he knows is compatible with the assumption that the successful model was just one that was right by chance – it was cherry-picked but there doesn't have to be any reason to think that its authors know something that others don't. They were just lucky. Adams mentioned analogies dealing with financial scams. If you send thousands of e-mails with various investment recommendations, it's almost unavoidable that one of them will be successful thrice in a row. If you later cherry-pick this successful recommendation and sell it as a proof of your prophetic skills, then you are a crook and your clients are gullible morons.
Some people apparently really believe that it's an example of OK science when the climate modelers are working with an ensemble of mutually inequivalent models, sometimes eliminate some of them, and they implicitly if not explicitly say that all the "survivors" in their ensemble of models are simultaneously or collectively right. Well, different theories just cannot be simultaneously right and this process of mindless selection of "packages that seem to work well" just isn't science. When we're trying to address a physical system in which many factors matter at the same moment, it's obvious that we must still try to answer questions separately.
I embedded the Feynman monologue above because he says that many activities try to pretend to be scientific but they're pseudosciences. These pseudosciences – social sciences are examples – haven't gotten anywhere (yet). They didn't get any laws. This is exactly true for the "model ensemble enterprise" in the climate science, too. They're not proposing and separately testing any actual laws or statements. People who are doing these things just play with some complex mushed potatoes and when they have a sufficient number of moving parts, it's unavoidable that for some choices of these moving parts, a good enough agreement – within any pre-agreed error margins – will be achieved for some of them.
Thursday, March 09, 2017
Scott Adams sees through 15 of 20 main alarmists' tricks, still calls himself a believer
Eclectikus told us that Dilbert's creator Scott Adams – who has correctly predicted Trump's triumph and described a psychological theory behind Trump's victory – has written a wonderful guide telling the climate alarmist propagandists
How to Convince Skeptics that Climate Change is a Problem.
It's basically a detailed list of 14-15 features in the alarmists' talk – or their interactions with the skeptics – that obviously look fishy to a rational person such as himself. Nevertheless, at the top, he still introduces himself as a believer in the claims of the currently (and for a few more months?) dominant (i.e. alarmist) climate scientists. Some alarmists have reacted angrily. Some of them claimed that Adams doesn't actually believe the alarmists and he doesn't actually want to help them.
I tend to agree with this "insight into Adams' skull". It seems hard to imagine that someone would understand these "15 things that are fishy about the alarmists' claims" so clearly and he would still take the alarmists' statements seriously. In fact, I think that Adams' isolation of the problems, clarity of his understanding of these problems, and the comprehensiveness of his list places him above most of the "amateur climate skeptics" whom I have met. If he understands some of the skeptics' arguments more clearly than most of the skeptics, is it plausible that he ends up as an alarmist?
It's plausible. I just find it very unlikely. It seems much more likely to me that he is just playfully rewriting his identity, much like when John Cook was signing 3% of the comments on his server as Luboš Motl. ;-)
The climate lynch mob at MIT
Last week, Charles Murray, a prominent sociologist known for his analyses of the IQ distributions (he co-wrote "The Bell Curve" with a Harvard colleague) was planning to give a talk at the Middlebury College in Vermont.
This 43-minute-long video shows what happened. Before his speech was supposed to begin, several officials were explaining how important it was for the university to listen and participate in peaceful discussions, even about unpopular views.
It didn't help. Around 19:10 in the video, after Murray articulated his first sentence, a mob composed of young people began to chant and make a mess – the last 25 minutes of the video – and prevented Murray from saying anything. They were chanting all those primitive far-left extremist slogans which were not only offensive but also proved that the young people didn't have a clue what Murray's work is all about. So the lecture was cancelled. The professor Ms Allison Stanger who accompanied Murray was physically attacked and had to be hospitalized, despite two big bodyguards who generally tried to protect the two of them.
Jay Parini and other professors at that college realize that some basic rules of Free Speech 101 were grossly neglected. However, the wild young people keep on calling themselves "college students" and they are basically dictating the atmosphere – and what is possible and what is impossible – on that college.
I am sorry but the officials at that college should dismiss these students. The fact that it hasn't taken place indicates that the college president is either incompetent or a coward. These young people are obviously not intelligent, disciplined, and ethical enough to be college students. We often talk about the decreasing standards of the college education but sometimes this deterioration shows up clearly in front of our eyes.
A zoo would be a much more appropriate place to keep these young people than a college. Let me emphasize that I recommend this habitat to the participants of that protest regardless of their race, gender, or ethnic background.
Wednesday, March 08, 2017
No, energy non-conservation is a lousy approach to the cosmological constant problem
In mid January, Chad Orzel didn't like some hype about a "proposed solution to the cosmological constant problem":
An article in the Physics World promoted an April 2016 paper by Josset, Perez, and Sudarsky recently published in PRL
Dark energy as the weight of violating energy conservation
that has claimed that the apparently observed cosmological constant is just the accumulated amount of energy that was created when Nature violated the energy conservation law – and that's supposed to make things more natural.
The 97% crackpot Lee Smolin praised the idea as a speculative approach in the best possible sense that is revolutionary if true. The 60% crackpot George Ellis said that the proposal was viable and no more fanciful than what's being explored by contemporary theoretical physicists – his English isn't as good as mine so I had to improve this man's prose.
Orzel found these comments too diplomatic and, as a "progressive" (a far left whacko), he decided to look for the best possible debunker with the only politically correct number of penises (zero) who should debunk this stuff: Sabine Hossenfelder.
Proof of RH from Hurwitz eigenstates
Under my previous (QM-on-graphs) blog post about the Riemann Hypothesis, Dilaton was forgiven for having brought us some cute internet banalities ;-)
while Akhmeteli pointed out a paper that seems even more promising than my most recent specific attacks:
Hamiltonian for the zeros of the Riemann zeta function
In their PRL paper, Carl M. Bender, Dorje C. Brody, and Markus P. Müller (BBM) actually constructed a Hamiltonian whose eigenvalues seem to be the zeroes of the zeta function and that seems to be Hermitian, after some straightforward change of the metric. How does it work?
Tuesday, March 07, 2017
Czechs produce a graphene-based magnet
First non-metallic magnet at room temperatures
One month ago, I mentioned that Harvard's Isaac Silvera and his collaborator claimed to have developed metallic hydrogen. Unfortunately, four weeks later, the small piece of this possibly amazing matter was eaten by Ike's dog, in an incident that resembles the burning of the microscopic Japanese art by a magnifying glass during a vernissage in the Czech 1974 comedy "Joachim, throw him to the machine". UPI was among the first English-language sources that revealed a result that could be equally cool and more controllable (because larger) – a non-metallic magnet:
Room temperature organic magnets derived from \(sp^3\) functionalized graphene (Nature Communications)
Mr Jiří Tuček [George Smallfat] was the lead author of the 12-member collaboration in a regional material center at the Palacký University in Olomouc, Moravia, Czech Republic. Belgian and Japanese colleagues have already joined the search for applications and better theoretical descriptions.
Sunday, March 05, 2017
NYT: Randall reviews Rovelli's oversalted book
Lisa Randall has argued in her review in The New York Times,
A Physicist’s Crash Course in Unpeeling the Universe
that reality isn't always what it seems to those who read Carlo Rovelli's book, Reality Is Not What It Seems, a popular text that was successful in Europe, translated to English, and that I discussed in January.
Randall says that the best popular books bring something both to beginners and to readers who already know something. However, Rovelli chose only an audience without any physics background and adjusted his writing accordingly. He nicely communicated the grandiose revolutionary changes that took place in roughly the last century. Because of the adjustments and other things, the result isn't great.
Saturday, March 04, 2017
Quantum mechanics on graphs and Riemann hypothesis
I still like to spend some time with the Riemann Hypothesis. In this 2016 blog post, I explained that the Riemann zeta zeroes roughly appear in a Fourier transform of delta-functions located at places \(\ln(n)\) or \(\ln(p)\) where \(n\in\mathbb{Z}\) or \(p\) are primes.
Is there a way to prove that all the nontrivial zeroes \(s\) of the zeta function, i.e. values of \(s\) obeying \(\zeta(s)=0\), satisfy \(s=1/2+it\) where \(t\in \mathbb{R}\)? Riemann thought he could prove that theorem but the proof was never found and it seems likely now that he didn't have one.
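A minimal numerical sketch of that Fourier relation (my illustration, not code from the post): summing cosines at the logarithms of prime powers with weights \(\ln(p)/p^{k/2}\) produces a function of \(t\) whose tallest peaks are expected to land near the ordinates of the first nontrivial zeros (about 14.13, 21.02, 25.01). The cutoff of 10,000 and the grid are arbitrary choices.

import numpy as np
from sympy import primerange

X = 10_000
t = np.linspace(5.0, 30.0, 4000)
signal = np.zeros_like(t)
for p in primerange(2, X):
    pk = p
    while pk < X:
        # weight log(p)/p^(k/2) at position log(p^k), as in the explicit formula
        signal -= np.log(p) / np.sqrt(pk) * np.cos(t * np.log(pk))
        pk *= p

# report the three tallest local maxima; they should sit near 14.13, 21.02, 25.01
is_peak = (signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])
peak_t, peak_h = t[1:-1][is_peak], signal[1:-1][is_peak]
print(np.sort(peak_t[np.argsort(peak_h)[-3:]]))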
Friday, March 03, 2017
Interference patterns of two entangled photons
Ahmed Adel Emara was asking interesting questions about the delayed choice quantum eraser and various modifications of it.
In the experiment, see this chart, a photon first goes through the double slit. Right behind both slits, a "BBO" (a beta barium borate crystal) makes sure that the photon gets split into an entangled pair.
The upper photon is encouraged to land on a photographic plate, D0, where a single photon normally contributes to an interference pattern. The entangled lower partner, the idler photon, goes to some mirrors and undergoes another treatment. In the delayed choice quantum eraser experiment, it ultimately lands in one of the detectors D1,D2,D3,D4. It's designed in such a way that if the detection of the idler photon occurs in D3 or D4, the which-slit information can be extracted, so the interference pattern is gone for the upper photon as well (the slit is the same for both photons). If the idler photon lands in D1 or D2, respectively, the which-slit information cannot be extracted, and the upper photons in these cases do create an interference pattern in D0, but only if you treat the D1 and D2 cases separately – these two interference patterns are "complementary" to each other.
One of the questions that Ahmed basically asked was whether there would be an interference pattern if you replaced all the detectors D1,D2,D3,D4 for the idler by another photographic plate D0' (dee-zero-prime).
Thursday, March 02, 2017
Totalitarian MEPs steal immunity from Le Pen to strengthen their monopoly on power
To say the least, Marine Le Pen is one of the three top (bold face) candidates in the April-and-May presidential elections in France. All the other candidates are elementary particles: Fillon, Proton, Macron, Neutron, Hamon, Meson, Mélenchon, and Positron. It's an eye-catching sign of the lack of diversity that all of the men who run have names that end with an "-on" given the fact that none of the men in the list Bidault, Blum, Auriol, Coty, De Gaulle, Poher, d'Estaing, Mitterrand, Chirac, Sarkozy, and Hollande did. You may be forced to go back to Napoleon to find the most recent previous elementary particle that led France. ;-)
She's too complex for them and the elementary particles and their allies do everything they can to get rid of their competitor who is so different.
Two days ago, a committee of the European Parliament voted 18-to-3 to strip her of her immunity in the case of some tweets. Today, the whole Parliament has confirmed that decision by an "overwhelming majority" – we were not told what the numbers actually were. Thankfully, my MEP voted against the proposal. He wrote that "he doesn't like when the political contest is waged through the criminalization of the competitors". Exactly.
Wednesday, March 01, 2017
Chinese "quantum radar" is a thing that cannot exist long as it is defined as presented...
Petr N., a Czech guy who also sent me the first e-mails about the 9/11 attacks 30 minutes before my PhD defense in New Jersey began at 9:30 a.m. in 2001, informed me about a wonderful new story in numerous media, a story about the Chinese quantum radar.
For example, some journalists in New Zealand boldly claim:
China's claim it has 'quantum' radar may leave $17 billion F-35 obsolete
Donald Trump has already hit an overpriced F-35 project with a thermonuclear tweet. Before he tweets again and demands that the Lockheed Martin bosses commit harakiri because of the amazing achievement by the Chinese, I urge him to think twice and read this blog post.
Czech L-159's – used by the Iraqi Air Force along with some F-16's – are almost an order of magnitude cheaper than F-35's but they're still credible aggressor fighters. Too bad that the Donald can't import things from his first wife's homeland.
There could be better radars that could be called "quantum radars" for one reason or another, but the claims about the "quantum radar" turn out to be based on a paper written by authors who completely misunderstand quantum mechanics, i.e. crackpots. Because the authors are Chinese, they must be classified as Chinese crackpots.
U.S. is rich, but maybe not wise, enough to introduce guaranteed income
Guaranteed minimum income or basic/unconditional/universal income is a policy in which a country pays every citizen (that's at least in the "universal" case) a certain fixed amount of money.
It's an alternative, and in my view far more efficient and natural, method to deal with welfare, poverty, tax exemptions per taxpayer, and many other things. It's basically equivalent to the negative income tax that was defended by Milton Friedman (and tested in North America in the 1960s and 1970s) – click at the link in this sentence to see his arguments in favor of it (I basically share all of his thinking).
The rule is simple. At least when the income is small enough (modifications may reflect progressive taxation), a citizen that earns \(X\) dollars per year will pay\[
R \times X - BI
\] to the government. It's a simple linear function. When the result is negative, the government pays something to the citizen (his income tax is negative, if you wish). In particular, if the citizen earns nothing, he will still get \(BI\) dollars (it stands for "basic income") a year from the government. On the contrary, the high earners pay the percentage \(R\) of their income.
Special exceptions should apply when \(X\lt 0\). People who make a "loss" had better not be refunded too much (or at all); otherwise people would start to invent tricks to report a loss.
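A tiny illustration of that linear rule (my own toy numbers, not a proposal from the post): with a flat rate R and a basic income BI, the net payment is R*X - BI, and negative values mean the state pays the citizen.

def net_payment(x, r=0.30, bi=12_000.0):
    """Return R*X - BI for a yearly income X; losses (X < 0) are not refunded."""
    return r * max(x, 0.0) - bi

for income in (0, 20_000, 40_000, 100_000):
    print(income, net_payment(income))
# income 0 receives a transfer of 12,000; the break-even point is BI/R = 40,000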
Eigenvalues and eigenvectors
In this shear mapping the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping, and since its length is unchanged its eigenvalue is 1.
An eigenvector of a square matrix A is a non-zero vector v that, when the matrix is multiplied by v, yields a constant multiple of v, the multiplier being commonly denoted by \lambda. That is:
A v = \lambda v
(Because this equation uses post-multiplication by v, it describes a right eigenvector.)
The number \lambda is called the eigenvalue of A corresponding to v.[1]
In analytic geometry, for example, a three-element vector may be seen as an arrow in three-dimensional space starting at the origin. In that case, an eigenvector v is an arrow whose direction is either preserved or exactly reversed after multiplication by A. The corresponding eigenvalue determines how the length of the arrow is changed by the operation, and whether its direction is reversed or not, determined by whether the eigenvalue is negative or positive.
In abstract linear algebra, these concepts are naturally extended to more general situations, where the set of real scalar factors is replaced by any field of scalars (such as algebraic or complex numbers); the set of Cartesian vectors \mathbb{R}^n is replaced by any vector space (such as the continuous functions, the polynomials or the trigonometric series), and matrix multiplication is replaced by any linear operator that maps vectors to vectors (such as the derivative from calculus). In such cases, the "vector" in "eigenvector" may be replaced by a more specific term, such as "eigenfunction", "eigenmode", "eigenface", or "eigenstate". Thus, for example, the exponential function f(x) = a^x is an eigenfunction of the derivative operator " {}' ", with eigenvalue \lambda = \ln a, since its derivative is f'(x) = (\ln a)a^x = \lambda f(x).
The set of all eigenvectors of a matrix (or linear operator), each paired with its corresponding eigenvalue, is called the eigensystem of that matrix.[2] Any multiple of an eigenvector is also an eigenvector, with the same eigenvalue. An eigenspace of a matrix A is the set of all eigenvectors with the same eigenvalue, together with the zero vector.[1] An eigenbasis for A is any basis for the set of all vectors that consists of linearly independent eigenvectors of A. Not every matrix has an eigenbasis, but every symmetric matrix does.
The terms characteristic vector, characteristic value, and characteristic space are also used for these concepts. The prefix eigen- is adopted from the German word eigen for "self-" or "unique to", "peculiar to", or "belonging to."
Eigenvalues and eigenvectors have many applications in both pure and applied mathematics. They are used in matrix factorization, in quantum mechanics, and in many other areas.
Eigenvectors and eigenvalues of a real matrix
In many contexts, a vector can be assumed to be a list of real numbers (called elements), written vertically with brackets around the entire list, such as the vectors u and v below. Two vectors are said to be scalar multiples of each other (also called parallel or collinear) if they have the same number of elements, and if every element of one vector is obtained by multiplying each corresponding element in the other vector by the same number (known as a scaling factor, or a scalar). For example, the vectors
u = \begin{bmatrix}1\\3\\4\end{bmatrix}\quad\quad\quad and \quad\quad\quad v = \begin{bmatrix}-20\\-60\\-80\end{bmatrix}
are scalar multiples of each other, because each element of v is −20 times the corresponding element of u.
A vector with three elements, like u or v above, may represent a point in three-dimensional space, relative to some Cartesian coordinate system. It helps to think of such a vector as the tip of an arrow whose tail is at the origin of the coordinate system. In this case, the condition "u is parallel to v" means that the two arrows lie on the same straight line, and may differ only in length and direction along that line.
If we multiply any square matrix A with n rows and n columns by such a vector v, the result will be another vector w = A v , also with n rows and one column. That is,
\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \quad\quad is mapped to \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} \;=\; \begin{bmatrix} A_{1,1} & A_{1,2} & \ldots & A_{1,n} \\ A_{2,1} & A_{2,2} & \ldots & A_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \ldots & A_{n,n} \\ \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
where, for each index i,
w_i = A_{i,1} v_1 + A_{i,2} v_2 + \cdots + A_{i,n} v_n = \sum_{j = 1}^{n} A_{i,j} v_j
In general, if v_j are not all zeros, the vectors v and A v will not be parallel. When they are parallel (that is, when there is some real number \lambda such that A v = \lambda v) we say that v is an eigenvector of A. In that case, the scale factor \lambda is said to be the eigenvalue corresponding to that eigenvector.
In particular, multiplication by a 3×3 matrix A may change both the direction and the magnitude of an arrow v in three-dimensional space. However, if v is an eigenvector of A with eigenvalue \lambda, the operation may only change its length, and either keep its direction or flip it (make the arrow point in the exact opposite direction). Specifically, the length of the arrow will increase if |\lambda| > 1, remain the same if |\lambda| = 1, and decrease if |\lambda| < 1. Moreover, the direction will be precisely the same if \lambda > 0, and flipped if \lambda < 0. If \lambda = 0, then the length of the arrow becomes zero.
An example
The transformation matrix \bigl[ \begin{smallmatrix} 2 & 1\\ 1 & 2 \end{smallmatrix} \bigr] preserves the angle of arrows parallel to the lines from the origin to \bigl[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \bigr] (in blue) and to \bigl[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \bigr] (in violet). The points that lie on a line through the origin and an eigenvector remain on the line after the transformation. The arrows in red are not parallel to such a line, therefore their angle is altered by the transformation. See also: An extended version, showing all four quadrants.
For the transformation matrix
A = \begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix},
the vector
v = \begin{bmatrix} 4 \\ -4 \end{bmatrix}
is an eigenvector with eigenvalue 2. Indeed,
A v = \begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ -4 \end{bmatrix} = \begin{bmatrix} 3 \cdot 4 + 1 \cdot (-4) \\ 1 \cdot 4 + 3 \cdot (-4) \end{bmatrix} = \begin{bmatrix} 8 \\ -8 \end{bmatrix} = 2 \cdot \begin{bmatrix} 4 \\ -4 \end{bmatrix}.
On the other hand the vector
v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
is not an eigenvector, since
\begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \cdot 0 + 1 \cdot 1 \\ 1 \cdot 0 + 3 \cdot 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix},
and this vector is not a multiple of the original vector v.
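A quick numerical check of this example (a sketch using NumPy, which the article itself does not use):

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
v = np.array([4.0, -4.0])
print(A @ v)                      # [ 8. -8.] = 2 * v, so v has eigenvalue 2
print(A @ np.array([0.0, 1.0]))   # [1. 3.], not a multiple of [0, 1]
print(np.linalg.eigvals(A))       # the full spectrum: 4 and 2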
Another example
For the matrix
A= \begin{bmatrix} 1 & 2 & 0\\0 & 2 & 0\\ 0 & 0 & 3\end{bmatrix},
we have
A \begin{bmatrix} 1\\0\\0 \end{bmatrix} = \begin{bmatrix} 1\\0\\0 \end{bmatrix} = 1 \cdot \begin{bmatrix} 1\\0\\0 \end{bmatrix},\quad\quad
A \begin{bmatrix} 0\\0\\1 \end{bmatrix} = \begin{bmatrix} 0\\0\\3 \end{bmatrix} = 3 \cdot \begin{bmatrix} 0\\0\\1 \end{bmatrix}.\quad\quad
Therefore, the vectors [1,0,0]^\mathsf{T} and [0,0,1]^\mathsf{T} are eigenvectors of A corresponding to the eigenvalues 1 and 3 respectively. (Here the symbol {}^\mathsf{T} indicates matrix transposition, in this case turning the row vectors into column vectors.)
Trivial cases
The identity matrix I (whose general element I_{i j} is 1 if i = j, and 0 otherwise) maps every vector to itself. Therefore, every vector is an eigenvector of I, with eigenvalue 1.
More generally, if A is a diagonal matrix (with A_{i j} = 0 whenever i \neq j), and v is a vector parallel to axis i (that is, v_i \neq 0, and v_j = 0 if j \neq i), then A v = \lambda v where \lambda = A_{i i}. That is, the eigenvalues of a diagonal matrix are the elements of its main diagonal. This is trivially the case of any 1 ×1 matrix.
General definition
The concept of eigenvectors and eigenvalues extends naturally to abstract linear transformations on abstract vector spaces. Namely, let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V. We say that a non-zero vector v of V is an eigenvector of T if (and only if) there is a scalar \lambda in K such that
T(v)=\lambda v.
This equation is called the eigenvalue equation for T, and the scalar \lambda is the eigenvalue of T corresponding to the eigenvector v. Note that T(v) means the result of applying the operator T to the vector v, while \lambda v means the product of the scalar \lambda by v.[3]
The matrix-specific definition is a special case of this abstract definition. Namely, the vector space V is the set of all column vectors of a certain size n×1, and T is the linear transformation that consists in multiplying a vector by the given n\times n matrix A.
Some authors allow v to be the zero vector in the definition of eigenvector.[4] This is reasonable as long as we define eigenvalues and eigenvectors carefully: If we would like the zero vector to be an eigenvector, then we must first define an eigenvalue of T as a scalar \lambda in K such that there is a nonzero vector v in V with T(v) = \lambda v . We then define an eigenvector to be a vector v in V such that there is an eigenvalue \lambda in K with T(v) = \lambda v . This way, we ensure that it is not the case that every scalar is an eigenvalue corresponding to the zero vector.
Eigenspace and spectrum
If v is an eigenvector of T, with eigenvalue \lambda, then any scalar multiple \alpha v of v with nonzero \alpha is also an eigenvector with eigenvalue \lambda, since T(\alpha v) = \alpha T(v) = \alpha(\lambda v) = \lambda(\alpha v). Moreover, if u and v are eigenvectors with the same eigenvalue \lambda, then u+v is also an eigenvector with the same eigenvalue \lambda. Therefore, the set of all eigenvectors with the same eigenvalue \lambda, together with the zero vector, is a linear subspace of V, called the eigenspace of T associated to \lambda.[5][6] If that subspace has dimension 1, it is sometimes called an eigenline.[7]
The geometric multiplicity \gamma_T(\lambda) of an eigenvalue \lambda is the dimension of the eigenspace associated to \lambda, i.e. number of linearly independent eigenvectors with that eigenvalue.
The eigenspaces of T always form a direct sum (and as a consequence any family of eigenvectors for different eigenvalues is always linearly independent). Therefore the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the space on which T operates, and in particular there cannot be more than n distinct eigenvalues.[8]
The set of eigenvalues of T is sometimes called the spectrum of T.
An eigenbasis for a linear operator T that operates on a vector space V is a basis for V that consists entirely of eigenvectors of T (possibly with different eigenvalues). Such a basis exists precisely if the direct sum of the eigenspaces equals the whole space, in which case one can take the union of bases chosen in each of the eigenspaces as eigenbasis. The matrix of T in a given basis is diagonal precisely when that basis is an eigenbasis for T, and for this reason T is called diagonalizable if it admits an eigenbasis.
Generalizations to infinite-dimensional spaces
The definition of eigenvalue of a linear transformation T remains valid even if the underlying space V is an infinite dimensional Hilbert or Banach space. Namely, a scalar \lambda is an eigenvalue if and only if there is some nonzero vector v such that T(v) = \lambda v.
A widely used class of linear operators acting on infinite dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space \mathbf{C^\infty} of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation
D f = \lambda f
The functions that satisfy this equation are commonly called eigenfunctions of D. For the derivative operator d/dt, an eigenfunction is a function that, when differentiated, yields a constant times the original function. The solution is an exponential function
f(t) = Ae^{\lambda t} ,
including the case \lambda = 0, in which it becomes a constant function. Eigenfunctions are an essential tool in the solution of differential equations and many other applied and theoretical fields. For instance, the exponential functions are eigenfunctions of the shift operators. This is the basis of Fourier transform methods for solving problems.
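A one-line symbolic check of this statement, sketched with SymPy (a tooling choice of mine, not part of the article):

import sympy as sp

t, lam, amp = sp.symbols('t lambda A')
f = amp * sp.exp(lam * t)
print(sp.simplify(sp.diff(f, t) - lam * f))   # 0, so D f = lambda * f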
Spectral theory
If \lambda is an eigenvalue of T, then the operator T-\lambda I is not one-to-one, and therefore its inverse (T-\lambda I)^{-1} does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional ones. In general, the operator T - \lambda I may not have an inverse, even if \lambda is not an eigenvalue.
For this reason, in functional analysis one defines the spectrum of a linear operator T as the set of all scalars \lambda for which the operator T-\lambda I has no bounded inverse. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
Associative algebras and representation theory
More algebraically, rather than generalizing the vector space to an infinite dimensional space, one can generalize the algebraic object that is acting on the space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.
A closer analog of eigenvalues is given by the representation-theoretical concept of weight, with the analogs of eigenvectors and eigenspaces being weight vectors and weight spaces.
Eigenvalues and eigenvectors of matrices
Characteristic polynomial
The eigenvalue equation for a matrix A is
A v - \lambda v = 0,
which is equivalent to
(A-\lambda I)v = 0,
where I is the n\times n identity matrix. It is a fundamental result of linear algebra that an equation M v = 0 has a non-zero solution v if, and only if, the determinant \det(M) of the matrix M is zero. It follows that the eigenvalues of A are precisely the real numbers \lambda that satisfy the equation
\det(A-\lambda I) = 0
The left-hand side of this equation can be seen (using Leibniz' rule for the determinant) to be a polynomial function of the variable \lambda. The degree of this polynomial is n, the order of the matrix. Its coefficients depend on the entries of A, except that its term of degree n is always (-1)^n\lambda^n. This polynomial is called the characteristic polynomial of A; and the above equation is called the characteristic equation (or, less often, the secular equation) of A.
For example, let A be the matrix
A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}
The characteristic polynomial of A is
\det (A-\lambda I) \;=\; \det \left(\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\right) \;=\; \det \begin{bmatrix} 2 - \lambda & 0 & 0 \\ 0 & 3 - \lambda & 4 \\ 0 & 4 & 9 - \lambda \end{bmatrix}
which is
(2 - \lambda) \bigl[ (3 - \lambda) (9 - \lambda) - 16 \bigr] = -\lambda^3 + 14\lambda^2 - 35\lambda + 22
The roots of this polynomial are 2, 1, and 11. Indeed these are the only three eigenvalues of A, corresponding to the eigenvectors [1,0,0]', [0,2,-1]', and [0,1,2]' (or any non-zero multiples thereof).
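The same computation can be reproduced numerically; the sketch below (my addition) uses NumPy's convention det(\lambda I - A), which for this odd n is the negative of the polynomial written above.

import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])
coeffs = np.poly(A)            # [1, -14, 35, -22], i.e. lambda^3 - 14 lambda^2 + 35 lambda - 22
print(coeffs)
print(np.roots(coeffs))        # 11, 2, 1 in some order
print(np.linalg.eigvals(A))    # the same values straight from the matrix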
In the real domain
Since the eigenvalues are roots of the characteristic polynomial, an n\times n matrix has at most n eigenvalues. If the matrix has real entries, the coefficients of the characteristic polynomial are all real; but it may have fewer than n real roots, or no real roots at all.
For example, consider the cyclic permutation matrix
A = \begin{bmatrix} 0 & 1 & 0\\0 & 0 & 1\\ 1 & 0 & 0\end{bmatrix}
This matrix shifts the coordinates of the vector up by one position, and moves the first coordinate to the bottom. Its characteristic polynomial is 1 - \lambda^3 which has one real root \lambda_1 = 1. Any vector with three equal non-zero elements is an eigenvector for this eigenvalue. For example,
A \begin{bmatrix} 5\\5\\5 \end{bmatrix} = \begin{bmatrix} 5\\5\\5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5\\5\\5 \end{bmatrix}
In the complex domain
The fundamental theorem of algebra implies that the characteristic polynomial of an n\times n matrix A, being a polynomial of degree n, has exactly n complex roots. More precisely, it can be factored into the product of n linear terms,
\det(A-\lambda I) = (\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)
where each \lambda_i is a complex number. The numbers \lambda_1, \lambda_2, ... \lambda_n, (which may not be all distinct) are roots of the polynomial, and are precisely the eigenvalues of A.
Even if the entries of A are all real numbers, the eigenvalues may still have non-zero imaginary parts (and the elements of the corresponding eigenvectors will therefore also have non-zero imaginary parts). Also, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers, or all are integers. However, if the entries of A are algebraic numbers (which include the rationals), the eigenvalues will be (complex) algebraic numbers too.
The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugate values, namely with the two members of each pair having the same real part and imaginary parts that differ only in sign. If the degree is odd, then by the intermediate value theorem at least one of the roots will be real. Therefore, any real matrix with odd order will have at least one real eigenvalue; whereas a real matrix with even order may have no real eigenvalues.
In the example of the 3×3 cyclic permutation matrix A, above, the characteristic polynomial 1 - \lambda^3 has two additional non-real roots, namely
\lambda_2 = -1/2 + \mathbf{i}\sqrt{3}/2\quad\quad and \quad\quad\lambda_3 = \lambda_2^* = -1/2 - \mathbf{i}\sqrt{3}/2,
where \mathbf{i}= \sqrt{-1} is the imaginary unit. Note that \lambda_2\lambda_3 = 1, \lambda_2^2 = \lambda_3, and \lambda_3^2 = \lambda_2. Then
A \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_2\\ \lambda_3 \\1 \end{bmatrix} = \lambda_2 \cdot \begin{bmatrix} 1\\ \lambda_2 \\ \lambda_3 \end{bmatrix} \quad\quad and \quad\quad A \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} \lambda_3 \\ \lambda_2 \\ 1 \end{bmatrix} = \lambda_3 \cdot \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}
Therefore, the vectors [1,\lambda_2,\lambda_3]' and [1,\lambda_3,\lambda_2]' are eigenvectors of A, with eigenvalues \lambda_2, and \lambda_3, respectively.
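These complex eigenpairs can be verified numerically; a short sketch (mine, not from the article):

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
w, V = np.linalg.eig(A)
print(w)                          # 1 and -1/2 +- i*sqrt(3)/2, the cube roots of unity
print(np.allclose(A @ V, V * w))  # True: each column of V is an eigenvector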
Algebraic multiplicities
Let \lambda_i be an eigenvalue of an n\times n matrix A. The algebraic multiplicity \mu_A(\lambda_i) of \lambda_i is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (\lambda - \lambda_i)^k evenly divides that polynomial.
Like the geometric multiplicity \gamma_A(\lambda_i), the algebraic multiplicity is an integer between 1 and n; and the sum \boldsymbol{\mu}_A of \mu_A(\lambda_i) over all distinct eigenvalues also cannot exceed n. If complex eigenvalues are considered, \boldsymbol{\mu}_A is exactly n.
It can be proved that the geometric multiplicity \gamma_A(\lambda_i) of an eigenvalue never exceeds its algebraic multiplicity \mu_A(\lambda_i). Therefore, \boldsymbol{\gamma}_A is at most \boldsymbol{\mu}_A.
If \gamma_A(\lambda_i) = \mu_A(\lambda_i), then \lambda_i is said to be a semisimple eigenvalue.
For the matrix: A= \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix},
the characteristic polynomial of A is \det (A-\lambda I) \;=\; \det \begin{bmatrix} 2- \lambda & 0 & 0 & 0 \\ 1 & 2- \lambda & 0 & 0 \\ 0 & 1 & 3- \lambda & 0 \\ 0 & 0 & 1 & 3- \lambda \end{bmatrix}= (2 - \lambda)^2 (3 - \lambda)^2 ,
the determinant of a lower triangular matrix being the product of its diagonal entries.
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by the vector [0,1,-1,1], and is therefore 1 dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by [0,0,0,1]. Hence, the total algebraic multiplicity of A, denoted \mu_A, is 4, which is the most it could be for a 4 by 4 matrix. The geometric multiplicity \gamma_A is 2, which is the smallest it could be for a matrix which has two distinct eigenvalues.
Diagonalization and eigendecomposition
If the sum \boldsymbol{\gamma}_A of the geometric multiplicities of all eigenvalues is exactly n, then A has a set of n linearly independent eigenvectors. Let Q be a square matrix whose columns are those eigenvectors, in any order. Then we will have A Q = Q\Lambda , where \Lambda is the diagonal matrix such that \Lambda_{i i} is the eigenvalue associated to column i of Q. Since the columns of Q are linearly independent, the matrix Q is invertible. Premultiplying both sides by Q^{-1} we get Q^{-1}A Q = \Lambda. By definition, therefore, the matrix A is diagonalizable.
Conversely, if A is diagonalizable, let Q be a non-singular square matrix such that Q^{-1} A Q is some diagonal matrix D. Multiplying both sides on the left by Q we get A Q = Q D . Therefore each column of Q must be an eigenvector of A, whose eigenvalue is the corresponding element on the diagonal of D. Since the columns of Q must be linearly independent, it follows that \boldsymbol{\gamma}_A = n. Thus \boldsymbol{\gamma}_A is equal to n if and only if A is diagonalizable.
If A is diagonalizable, the space of all n-element vectors can be decomposed into the direct sum of the eigenspaces of A. This decomposition is called the eigendecomposition of A, and it is preserved under change of coordinates.
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvector can be generalized to generalized eigenvectors, and that of diagonal matrix to a Jordan form matrix. Over an algebraically closed field, any matrix A has a Jordan form and therefore admits a basis of generalized eigenvectors, and a decomposition into generalized eigenspaces.
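A short numerical sketch of this factorization (my illustration), reusing the diagonalizable 3×3 matrix from the earlier example:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
w, Q = np.linalg.eig(A)                              # columns of Q are eigenvectors
Lam = np.diag(w)
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lam))    # True: Q^-1 A Q is diagonal
print(np.allclose(Q @ Lam @ np.linalg.inv(Q), A))    # True: the eigendecomposition reproduces A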
Further properties
Let A be an arbitrary n\times n matrix of complex numbers with eigenvalues \lambda_1, \lambda_2, ... \lambda_n. (Here it is understood that an eigenvalue with algebraic multiplicity \mu occurs \mu times in this list.) Then
\operatorname{tr}(A) = \sum_{i=1}^n A_{i i} = \sum_{i=1}^n \lambda_i = \lambda_1+ \lambda_2 +\cdots+ \lambda_n.
\operatorname{det}(A) = \prod_{i=1}^n \lambda_i=\lambda_1\lambda_2\cdots\lambda_n.
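Both identities are easy to check numerically; a sketch (mine) with a random real matrix, whose complex eigenvalues come in conjugate pairs so the sum and product are real up to rounding:

import numpy as np

A = np.random.default_rng(0).standard_normal((5, 5))
w = np.linalg.eigvals(A)
print(np.isclose(np.trace(A), w.sum().real))          # True: trace equals the sum of eigenvalues
print(np.isclose(np.linalg.det(A), w.prod().real))    # True: determinant equals their product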
Left and right eigenvectors
The use of matrices with a single column (rather than a single row) to represent vectors is traditional in many disciplines. For that reason, the word "eigenvector" almost always means a right eigenvector, namely a column vector that must be placed to the right of the matrix A in the defining equation
A v = \lambda v.
There may be also single-row vectors that are unchanged when they occur on the left side of a product with a square matrix A; that is, which satisfy the equation
u A = \lambda u
Any such row vector u is called a left eigenvector of A.
The left eigenvectors of A are transposes of the right eigenvectors of the transposed matrix A^\mathsf{T}, since their defining equation is equivalent to
A^\mathsf{T} u^\mathsf{T} = \lambda u^\mathsf{T}
It follows that, if A is Hermitian, its left and right eigenvectors are complex conjugates. In particular if A is a real symmetric matrix, they are the same except for transposition.
Computing the eigenvalues
The eigenvalues of a matrix A can be determined by finding the roots of the characteristic polynomial. Explicit algebraic formulas for the roots of a polynomial exist only if the degree n is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more.
It turns out that any polynomial with degree n is the characteristic polynomial of some companion matrix of order n. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods.
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the QR algorithm in 1961. [9] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[9]
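The simplest iterative scheme of this kind is power iteration, which repeatedly applies the matrix to a vector and renormalizes; under the usual assumption of a unique eigenvalue of largest magnitude it converges to that eigenvalue. A minimal sketch (my addition, not the QR or Lanczos algorithms mentioned above):

import numpy as np

def power_iteration(A, iters=500, seed=1):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v        # Rayleigh quotient and the eigenvector estimate

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])
lam, v = power_iteration(A)
print(lam)                      # approximately 11, the dominant eigenvalue of the earlier example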
Computing the eigenvectors
Once an eigenvalue \lambda is known, the corresponding eigenvectors can be found by solving the linear system (A - \lambda I) v = 0. For example, once it is known that 6 is an eigenvalue of the matrix
A = \begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}
we can find its eigenvectors by solving the equation A v = 6 v, that is
\begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = 6 \cdot \begin{bmatrix}x\\y\end{bmatrix}
This matrix equation is equivalent to two linear equations
\left\{\begin{matrix} 4x + {\ }y &{}= 6x\\6x + 3y &{}=6 y\end{matrix}\right. \quad\quad\quad that is \left\{\begin{matrix} -2x+ {\ }y &{}=0\\+6x-3y &{}=0\end{matrix}\right.
Both equations reduce to the single linear equation y=2x. Therefore, any vector of the form [a,2a]', for any non-zero real number a, is an eigenvector of A with eigenvalue \lambda = 6.
The matrix A above has another eigenvalue \lambda=1. A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of 3x+y=0, that is, any vector of the form [b,-3b]', for any non-zero real number b.
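Numerically, the same eigenvectors can be read off from the null space of A - \lambda I; a sketch (mine) using the SVD:

import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])
for lam in (6.0, 1.0):
    _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
    v = Vt[-1]               # right singular vector for the (near-)zero singular value
    print(lam, v / v[0])     # proportional to [1, 2] and [1, -3], as derived above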
In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[14] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[15]
At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[16] He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[17]
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G.F. Francis[18] and Vera Kublanovskaya[19] in 1961.[20]
Eigenvalues of geometric transformations
The following summarizes, for several plane transformations, the matrix, the characteristic polynomial, the eigenvalues with their algebraic (\mu) and geometric (\gamma) multiplicities, and the eigenvectors (illustrations omitted):

Scaling (homothety): matrix \begin{bmatrix}k & 0\\0 & k\end{bmatrix}; characteristic polynomial (\lambda - k)^2; eigenvalues \lambda_1 = \lambda_2 = k with \mu_1 = 2 and \gamma_1 = 2; eigenvectors: all non-zero vectors.

Unequal scaling: matrix \begin{bmatrix}k_1 & 0\\0 & k_2\end{bmatrix}; characteristic polynomial (\lambda - k_1)(\lambda - k_2); eigenvalues \lambda_1 = k_1, \lambda_2 = k_2 with \mu_1 = \mu_2 = 1 and \gamma_1 = \gamma_2 = 1; eigenvectors u_1 = \begin{bmatrix}1\\0\end{bmatrix}, u_2 = \begin{bmatrix}0\\1\end{bmatrix}.

Rotation (c = \cos\theta, s = \sin\theta): matrix \begin{bmatrix}c & -s \\ s & c\end{bmatrix}; characteristic polynomial \lambda^2 - 2c\lambda + 1; eigenvalues \lambda_1 = e^{\mathbf{i}\theta} = c + s\mathbf{i}, \lambda_2 = e^{-\mathbf{i}\theta} = c - s\mathbf{i} with \mu_1 = \mu_2 = 1 and \gamma_1 = \gamma_2 = 1; eigenvectors u_1 = \begin{bmatrix}{\ }1\\-\mathbf{i}\end{bmatrix}, u_2 = \begin{bmatrix}{\ }1\\ +\mathbf{i}\end{bmatrix}.

Horizontal shear: matrix \begin{bmatrix}1 & k\\ 0 & 1\end{bmatrix}; characteristic polynomial (\lambda - 1)^2; eigenvalues \lambda_1 = \lambda_2 = 1 with \mu_1 = 2 and \gamma_1 = 1; eigenvector u_1 = \begin{bmatrix}1\\0\end{bmatrix}.

Hyperbolic rotation (c = \cosh\varphi, s = \sinh\varphi): matrix \begin{bmatrix}c & s \\ s & c\end{bmatrix}; characteristic polynomial \lambda^2 - 2c\lambda + 1; eigenvalues \lambda_1 = e^{\varphi}, \lambda_2 = e^{-\varphi} with \mu_1 = \mu_2 = 1 and \gamma_1 = \gamma_2 = 1; eigenvectors u_1 = \begin{bmatrix}{\ }1\\{\ }1\end{bmatrix}, u_2 = \begin{bmatrix}{\ }1\\-1\end{bmatrix}.
Note that the characteristic equation for a rotation is a quadratic equation with discriminant D = -4(\sin\theta)^2, which is a negative number whenever \theta is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, \cos\theta \pm \mathbf{i}\sin\theta; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
Schrödinger equation
The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n=1,2,3,\ldots) and angular momentum (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.
An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
H\psi_E = E\psi_E \,
where H, the Hamiltonian, is a second-order differential operator and \psi_E, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for \psi_E within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which \psi_E and H can be represented as a one-dimensional array and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.
Bra-ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |\Psi_E\rangle. In this notation, the Schrödinger equation is:
H|\Psi_E\rangle = E|\Psi_E\rangle
where |\Psi_E\rangle is an eigenstate of H, which is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, in the equation above H|\Psi_E\rangle is understood to be the vector obtained by application of the transformation H to |\Psi_E\rangle.
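As a concrete sketch of "the Schrödinger equation in matrix form" (my illustration, with assumed units \hbar = m = \omega = 1): discretize H = -\tfrac{1}{2} d^2/dx^2 + \tfrac{1}{2} x^2 on a grid with finite differences and diagonalize the resulting symmetric matrix; the lowest eigenvalues approximate the harmonic-oscillator energies n + 1/2.

import numpy as np

n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
diag = 1.0 / dx**2 + 0.5 * x**2           # kinetic + potential terms on the diagonal
off = -0.5 / dx**2 * np.ones(n - 1)       # finite-difference coupling to neighbours
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
print(np.linalg.eigvalsh(H)[:4])          # approximately 0.5, 1.5, 2.5, 3.5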
Molecular orbitals
Geology and glaciology
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v_1, v_2, v_3 by their eigenvalues E_1 \geq E_2 \geq E_3;[24] v_1 then is the primary orientation/dip of clast, v_2 is the secondary and v_3 is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E_1, E_2, and E_3 are dictated by the nature of the sediment's fabric. If E_1 = E_2 = E_3, the fabric is said to be isotropic. If E_1 = E_2 > E_3, the fabric is said to be planar. If E_1 > E_2 > E_3, the fabric is said to be linear.[25]
Principal components analysis
PCA of the multivariate Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.878,0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. (Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.)
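A compact sketch of PCA as an eigendecomposition of the sample covariance matrix (my illustration with synthetic data; the numbers are not taken from the figure above):

import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([1.0, 3.0], [[8.0, 3.0], [3.0, 2.0]], size=5000)
C = np.cov(X, rowvar=False)           # symmetric, positive semi-definite covariance estimate
evals, evecs = np.linalg.eigh(C)      # eigenvalues in ascending order
order = np.argsort(evals)[::-1]
print(evals[order])                   # variances along the principal axes
print(evecs[:, order])                # columns are the unit principal directions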
Vibration analysis
1st lateral bending (See vibration for more types of vibration)
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors determine the shapes of these vibrational modes. In particular, undamped vibration is governed by
m\ddot x + kx = 0
m\ddot x = -k x
that is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).
In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem
k x = \omega^2 m x
where \omega^2 is the eigenvalue and \omega is the angular frequency. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by
m\ddot x + c \dot x + kx = 0
leads to a so-called quadratic eigenvalue problem,
(\omega^2 m + \omega c + k)x = 0.
This can be reduced to a generalized eigenvalue problem by clever use of algebra at the cost of solving a larger system.
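A small sketch of the undamped case (my illustration with made-up mass and stiffness matrices), solving k x = \omega^2 m x as a generalized symmetric eigenvalue problem with SciPy:

import numpy as np
from scipy.linalg import eigh

m = np.diag([1.0, 1.0])                 # mass matrix
k = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])            # stiffness matrix
omega_sq, modes = eigh(k, m)            # generalized problem k x = omega^2 m x
print(np.sqrt(omega_sq))                # natural frequencies 1 and sqrt(3)
print(modes)                            # columns are the mode shapes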
Eigenfaces as examples of eigenvectors
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[26] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems for determining hand gestures.
Tensor of moment of inertia
Stress tensor
Eigenvalues of a graph
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix (see also Discrete Laplace operator), which is either T - A (sometimes called the combinatorial Laplacian) or I - T^{-1/2}A T^{-1/2} (sometimes called the normalized Laplacian), where T is a diagonal matrix with T_{i i} equal to the degree of vertex v_i, and in T^{-1/2}, the ith diagonal entry is \sqrt{\operatorname{deg}(v_i)}. The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
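For a concrete example (mine, not from the article), the adjacency and combinatorial Laplacian spectra of a path graph on four vertices:

import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
T = np.diag(A.sum(axis=1))            # degree matrix
L = T - A                             # combinatorial Laplacian
print(np.linalg.eigvalsh(A))          # adjacency spectrum
print(np.linalg.eigvalsh(L))          # Laplacian spectrum; the smallest eigenvalue is 0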
Basic reproduction number
The basic reproduction number (R_0) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R_0 is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t_G, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t_G has passed. R_0 is then the largest eigenvalue of the next generation matrix.[27][28]
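A toy sketch of that definition (the 2×2 next generation matrix below is made up for illustration and does not come from the cited references):

import numpy as np

K = np.array([[1.2, 0.4],
              [0.3, 0.8]])             # expected secondary cases between two groups
R0 = max(abs(np.linalg.eigvals(K)))
print(R0)                              # 1.4: an epidemic grows since R0 > 1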
References
1. ^ a b Wolfram Research, Inc. (2010) Eigenvector. Accessed on 2010-01-29.
2. ^ William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery (2007), Numerical Recipes: The Art of Scientific Computing, Chapter 11: Eigensystems., pages=563–597. Third edition, Cambridge University Press. ISBN 9780521880688
5. ^ Shilov 1977, p. 109
6. ^ Lemma for the eigenspace
7. ^ Schaum's Easy Outline of Linear Algebra, p. 111
10. ^ See Hawkins 1975, §2
13. ^ See Kline 1972, p. 673
14. ^ See Kline 1972, pp. 715–716
15. ^ See Kline 1972, pp. 706–707
16. ^ See Kline 1972, p. 1063
17. ^ See Aldrich 2006
18. ^ Francis, J. G. F. (1961), "The QR Transformation, I (part 1)", The Computer Journal 4 (3): 265–271, doi:10.1093/comjnl/4.3.265 and Francis, J. G. F. (1962), "The QR Transformation, II (part 2)", The Computer Journal 4 (4): 332–345, doi:10.1093/comjnl/4.4.332
19. ^ Kublanovskaya, Vera N. (1961), "On some algorithms for the solution of the complete eigenvalue problem", USSR Computational Mathematics and Mathematical Physics 3: 637–657 . Also published in: Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 1 (4), 1961: 555–570
21. ^ Graham, D.; Midgley, N. (2000), "Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method", Earth Surface Processes and Landforms 25 (13): 1473–1477, doi:10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C
22. ^ Sneed, E. D.; Folk, R. L. (1958), "Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis", Journal of Geology 66 (2): 114–150, doi:10.1086/626490
23. ^ Knox-Robinson, C; Gardoll, Stephen J (1998), "GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system", Computers & Geosciences 24 (3): 243, doi:10.1016/S0098-3004(97)00122-2
24. ^ Stereo32 software
26. ^ Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004), Estimation of 3D motion and structure of human faces (PDF), Online paper in PDF format, National Technical University of Athens
27. ^ Diekmann O, Heesterbeek JAP, Metz JAJ (1990), "On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations", Journal of Mathematical Biology 28 (4): 365–382, doi:10.1007/BF00178324, PMID 2117040
28. ^ Odo Diekmann and J. A. P. Heesterbeek (2000), Mathematical epidemiology of infectious diseases, Wiley series in mathematical and computational biology, West Sussex, England: John Wiley & Sons
• Korn, Granino A.; Korn, Theresa M. (2000), "Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review", New York: McGraw-Hill (1152 p., Dover Publications, 2 Revised edition), Bibcode:1968mhse.book.....K, ISBN 0-486-41147-8 .
• Lipschutz, Seymour (1991), Schaum's outline of theory and problems of linear algebra, Schaum's outline series (2nd ed.), New York, NY: McGraw-Hill Companies, ISBN 0-07-038007-4 .
• Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), Linear algebra (2nd ed.), Englewood Cliffs, NJ 07632: Prentice Hall, ISBN 0-13-537102-3 .
• Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, retrieved 2006-08-22
• Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-9614088-5-5 .
• Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, CA, ISBN 0-03-010567-6 .
• Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and multilinear algebra, Plenum Press, New York, NY, ISBN 0-306-37508-7 .
• Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", Quantum mechanics, John Wiley & Sons, ISBN 0-471-16432-1 .
• Fraleigh, John B.; Beauregard, Raymond A. (1995), Linear algebra (3rd ed.), Addison-Wesley Publishing Company, ISBN 0-201-83999-7 (international edition).
• Golub, Gene H.; Van Loan, Charles F. (1996), Matrix computations (3rd Edition), Johns Hopkins University Press, Baltimore, MD, ISBN 978-0-8018-5414-9 .
• Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica 2: 1–29, doi:10.1016/0315-0860(75)90032-4 .
• Horn, Roger A.; Johnson, Charles F. (1985), Matrix analysis, Cambridge University Press, ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback).
• Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-19-501496-0 .
• Meyer, Carl D. (2000), Matrix analysis and applied linear algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, ISBN 978-0-89871-454-8 .
• Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology .
• Golub, Gene F.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics 123: 35–65, doi:10.1016/S0377-0427(00)00413-1 .
• Akivis, Max A.; Vladislav V. Goldberg (1969), Tensor calculus, Russian, Science Publishers, Moscow .
• Gelfand, I. M. (1971), Lecture notes in linear algebra, Russian, Science Publishers, Moscow .
• Alexandrov, Pavel S. (1968), Lecture notes in analytical geometry, Russian, Science Publishers, Moscow .
• Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, retrieved 2008-02-19 .
• Roman, Steven (2008), Advanced linear algebra (3rd ed.), New York, NY: Springer Science + Business Media, LLC, ISBN 978-0-387-72828-5 .
• Shilov, Georgi E. (1977), Linear algebra (translated and edited by Richard A. Silverman ed.), New York: Dover Publications, ISBN 0-486-63518-X .
• Hefferon, Jim (2001), Linear Algebra, Online book, St Michael's College, Colchester, Vermont, USA .
• Kuttler, Kenneth (2007), An introduction to linear algebra (PDF), Online e-book in PDF format, Brigham Young University .
• Demmel, James W. (1997), Applied numerical linear algebra, SIAM, ISBN 0-89871-389-7 .
• Beezer, Robert A. (2006), A first course in linear algebra, Free online book under GNU licence, University of Puget Sound .
• Lancaster, P. (1973), Matrix theory, Russian, Moscow, Russia: Science Publishers .
• Halmos, Paul R. (1987), Finite-dimensional vector spaces (8th ed.), New York, NY: Springer-Verlag, ISBN 0-387-90093-4 .
• Pigolkina, T. S. and Shulman, V. S., Eigenvalue (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
• Greub, Werner H. (1975), Linear Algebra (4th Edition), Springer-Verlag, New York, NY, ISBN 0-387-90110-8 .
• Larson, Ron; Edwards, Bruce H. (2003), Elementary linear algebra (5th ed.), Houghton Mifflin Company, ISBN 0-618-33567-6 .
• Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0-387-90992-3.
• Shores, Thomas S. (2007), Applied linear algebra and matrix analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8 .
• Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, arXiv:math/0405323, ISBN 5-7477-0099-5 .
• Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), Indefinite linear algebra and applications, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0 .
Professor (b. 1968)
B.S., 1989, Calvin College; Ph.D., 1994, University of California at Los Angeles
President's Postdoctoral Fellow, 1994; Fulbright Junior Researcher, 1995; NSF CAREER Award, 1998; Research Corporation Innovation Award, 1998; Alfred P. Sloan Research Fellow, 1999; Beckman Young Investigator, 1999; Packard Fellowship in Science and Engineering, 1999; Dreyfus Foundation Teacher-Scholar, 2000; Helen Corley Petit Professor, 2002; UIUC University Scholar, 2004; John D. and Catherine T. MacArthur Foundation Fellow, 2005; American Physical Society Fellow, 2005; American Association for the Advancement of Science Fellow, 2006; Gutgsell Chair in Chemistry, 2006
Chemistry Research Area:
Principal Research Interests
Quantum chemistry traditionally solves the time-independent, zero-temperature electronic Schrödinger equation, assuming separability of the electronic and nuclear degrees of freedom. This provides potential energy surfaces for use in molecular dynamics simulations to understand finite-temperature and time-dependent effects. We take a different approach: extending quantum chemistry into the time domain, bridging the gap between traditional molecular dynamics (what are the atoms doing?) and quantum chemistry (what are the electrons doing?). We include quantum mechanical effects on the behavior of the electrons and the atoms by simultaneously solving the electronic and nuclear Schrödinger equations. This "ab initio multiple spawning" (AIMS) method opens exciting possibilities in modeling chemistry. Rearrangement of chemical bonds, tunneling, and dynamics on multiple electronic states are all treated correctly without ad hoc assumptions.
We are especially interested in electronic excited states, where the assumption of electron-nuclear separability breaks down. Using AIMS, we investigated fundamental photochemical reactions: quenching of excited metal atoms, cis-trans isomerization in ethylene and butadiene, and ring-opening of cyclobutene. In each case we found that conventional explanations required modification. This research furthers the understanding of complex molecular dynamics on multiple electronic states during photochemical reactions. Our goal is AIMS for reactions in complex environments, whether they be normal solvents (e.g., water), solid cages (e.g., zeolites), or portions of a protein. We are developing methods to address solvent effects on photochemistry and spectroscopy, with ultimate application to biologically relevant molecules such as visual pigments.
Because of tunneling effects, proton transfer reactions require quantum treatment of the nuclei. We recently performed the first ab initio molecular dynamics simulation of real-time tunneling dynamics, simulating intramolecular proton transfer in malonaldehyde. Future directions include AIMS studies of coupled electron and proton transfer reactions, which are important in biological systems and possibly for designing molecular electronic devices.
Finally, we use novel quantum chemistry methods to elucidate the function of biologically relevant metalloproteins: currently, the reaction mechanism of cytochrome c oxidase. This final enzyme in the respiratory cycle reduces oxygen. We investigate the nature of spin coupling between transition metal centers, the role of tyrosyl radicals in the mechanism, and the coupling of electron transfer and proton transfer in the enzyme active site.
Representative Publications
1) "Force-induced Activation of Covalent Bonds in Mechanoresponsive Polymeric Materials," D.A. Davis, A. Hamilton, J. Yang, L.D. Cremar, D. Van Gough, S.L. Potisek, M.T. Ong, P.V. Braun, T.J. Martínez, J.S. Moore, S.R. White, and N.R. Sottos, Nature459, 68-72 (2009).
2)" Photodynamics in Complex Environments: Ab Initio Multiple Spawning Quantum Mechanical/Molecular Mechanical Dynamics," A.M. Virshup, C. Punwong, T.V. Pogorelov, B. Lindquist, C. Ko, and T.J. Martínez, J. Phys. Chem., Invited centennial feature article, 113B, 3280-3291 (2009).
3) "Quantum Chemistry on Graphical Processing Units. 2. Direct Self-Consistent Field Implementation," I.S. Ufimtsev and T.J. Martínez, J. Chem. Theo. Comp.5, 1004-1015 (2009).
4) "First Principles Dynamics and Minimum Energy Pathways for Mechanochemical Ring-Opening of Cyclobutene," M.T. Ong, J. Leiding, H. Tao, A.M. Virshup, and T.J. Martínez, J. Amer. Chem. Soc.131, 6377-6379 (2009).
5) "Graphical Processing Units for Quantum Chemistry," I.S. Ufimtsev and T.J. Martínez, Comp. in Sci. Eng.10, 26-34 (2008).
6) "Electrostatic Control of Photoisomerization in the Photoactive Yellow Protein Chromophore: Ab Initio Multiple Spawning Dynamics," C. Ko, A. Virshup, and T.J. Martínez, Chem. Phys. Lett.460, 272-277 (2008).
7) "Conformationally controlled chemistry: Excited state dynamics dictate ground state dissociation," M.H. Kim, L. Shen, H. Tao, T.J. Martínez, and A.G. Suits, Science315, 1561 (2007).
8) "QTPIE: Charge Transfer with Polarization Current Equalization. A fluctuating charge model with correct asymptotics," J. Chen and T.J. Martínez, Chem. Phys. Lett.438, 315 (2007).
9) "Isomerization Through Conical Intersections," B.G. Levine and T.J. Martínez, Ann. Rev. Phys. Chem.58, 613 (2007).
10) "Insights for Light-Driven Molecular Devices from Ab Initio Multiple Spawning Excited-State Dynamics of Organic and Biological Chromophores," T.J. Martínez, Acc. Chem. Res.39, 119-126 (2006).
11) "Using Meta-Conjugation to Enhance Charge Separation versus Charge Recombination in Phenylacetylene Donor-Bridge-Acceptor Complexes," A.L. Thompson, T.-S. Ahn, K. R.J. Thomas, S. Thayumanavan, T.J. Martínez, and C.J. Bardeen, J. Amer. Chem. Soc.127, 16348-16349 (2005). |